Lydia Ntourountou
November
“If a deep fake video version of me is more attractive, how can I bridge the gap between that and the real me?” Finalists of the 2024 Miss Korea beauty pageant were expected to answer this question. Pageants—whose sole purpose is for competing women to showcase their physical beauty—have always drawn controversy, but comparing a deepfake version of a woman to the way she actually looks had previously been completely unheard of. Coming at a time when women and girls in South Korea have fallen victim to AI being used to create fake videos of them, mostly depicting sexual content and acts, the question provoked outcry and severe criticism. Deepfakes, or “images or recordings that have been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said” made using Artificial Intelligence, have been around since 2017, when they were used mostly to insert celebrities’ faces into pornographic videos. The recent surge in AI’s popularity, however, has made it extremely accessible for anyone to feed a photo of a person to a bot and have it alter their face or body, creating a fake version of the person who could be nude or performing a sexual act.
The sexualization of women is not a recent phenomenon; it dates back through centuries of gender inequality and the stereotypes associated with it. Women are molded to fit into specific categories and their physical beauty is constantly scrutinized, especially on social media, where objectifying a person becomes simple behind the protection of anonymity: nowadays, one only needs to scroll through the comment section of a woman’s post to find degrading remarks about her appearance. Unfortunately, hyper-sexualization does not stop there. It has become even more complex and disturbing with the creation of deepfakes. The main problem with deepfakes is not necessarily that they depict explicit content, but that the people shown in these videos and images are mostly women and young girls who have not consented to their photos being used for such purposes. This explicit non-consensual deepfake content (“non-consensual intimate image abuse”) is becoming increasingly concerning, especially given how accessible it is. One can simply find an online bot—there are more than 50 bots whose purpose is to create deepfake content, after all—and provide an image of their choosing, whether of a friend, a celebrity or a child, asking it to create this type of illicit content.
And what better platform to host these types of bots than Telegram? It could be argued that this application constitutes the ideal platform for these videos and images to circulate, because of the minimal restrictions it imposes on its users, allowing them to connect and share the illegal sexual and pornographic content the bots produce. The purpose of these bots is not cryptic, but rather hidden in plain sight: “I can do anything you want about the face or clothes of the photo you give me,” one bot’s creator wrote.
While the “deepfake crisis” is worldwide and impacts thousands of women, women and girls in South Korea have been among its main targets. Research conducted by the newspaper Hankyoreh has uncovered the horrific reality that Korean women and children experience when their images are used for deepfakes; they are stripped of their fundamental rights and suffer a traumatic shock that can stigmatize them for the rest of their lives. Women are rendered objects, classified according to the university they attend and their major, tainted by images of themselves distributed in Telegram chat rooms called “friends of friends.” These are later divided into sub-groups whose members chat about these women—usually their classmates—and share their images to produce pornographic deepfakes. These chat rooms are not limited to university students but reach all the way into high schools, exploiting minors and young girls—who are already sexualized in the media—this time by actually using their photos to create such “content.” The horror experienced by the women and children who fall victim to such acts goes without saying. They are not only forced to deal with deepfake videos of themselves, but also rarely find justice.
The protection of South Korean women by the legal system has been heavily criticized, owing to the lack of effective legislation on crimes such as illegal filming and other digital crimes. The current crisis prompted a ban on the creation, distribution and consumption of deepfakes, which seems an important step towards change, but its concrete enforcement is rather unrealistic. The convenience of using Telegram is not limited to the lack of restrictions on the production of such content; it extends to the difficulty authorities face in investigating and prosecuting criminals who use the application for illegal purposes. The application is infamous for its lack of communication and cooperation with authorities, making it difficult to identify suspects and render justice to victims. Its founder, Pavel Durov, was even arrested in August 2024 on charges relating to a lack of content moderation on Telegram, as well as his refusal to cooperate with police investigations.
So, where does this leave women in South Korea? Between fighting the proliferation of illegal spy cams in public restrooms, the Burning Sun case—in which several celebrities and police officials were responsible for sexually assaulting women in a famous Gangnam nightclub—and the rise of deepfakes, women are forced to remain in a constant defensive position to protect their fundamental rights and freedoms. In this digital age, violence against women is not limited to the physical world. It is spreading at alarming rates online through the creation of deepfake sexual content, which can only be considered violence against the existence and integrity of women and girls. To combat this disturbing situation, feminists in South Korea have taken on the heavy responsibility of defending women’s rights by pressuring the government to pass stricter legislation, aiming to create a more secure environment. However, the creation and circulation of deepfakes should not concern only women, but society as a whole. And it might be easy, or even convenient, to let the blame fall on AI, instead of acknowledging that human beings are responsible for the creation of such dangerous and appalling “content”—or, worse, that they are rarely held accountable for the severe damage and trauma they have caused their victims.