Deepfakes, Cheapfakes Primarily Harm Minoritized Groups, New Rutgers Study Finds
Audiovisual manipulation created for false personation primarily harms people in minoritized groups and serves to benefit those who are already powerful.

Deepfakes, deepfake porn, fake videos, cheapfakes, shallowfakes, and other forms of audiovisual manipulation for false personation are disproportionately developed and used to harm women, specifically women of color, LGBTQIA individuals, and those questioning power. Conversely, they overwhelmingly benefit those who are already powerful, such as right-wing political parties, entertainment companies, PR firms, social media companies, and wealthy individuals, according to new Rutgers research.

The study, “Configuring Fakes: Digitized Bodies, the Politics of Evidence, and Agency,” by Assistant Professor of Library and Information Science Britt Paris, in the Rutgers School of Communication and Information, was published in the journal Social Media and Society.


The study explores the state of audiovisual manipulation through artificial intelligence (AI), identifies the methods used to produce such content, and discusses the harm caused both by these false, misleading, and misrepresented productions and by the algorithmic technology used to share them as fact. The study also offers suggestions for mitigating the harm through political rather than technical solutions.

Paris said, “Image-based sexual violence has been used for many purposes, including harassment, suppression of the press, civil society, and political opposition. For decades around the world, audiovisual content has been manufactured and manipulated to silence opposition. Today faked sex videos are disseminated on social media with dire, and sometimes violent, consequences. Primary targets have been and continue to be LGBT politicians, women politicians or activists, especially women of color, and those questioning power.”

The advent of amateur image manipulation software, smartphone cameras, and social media has given anyone with an email address the ability to generate and disseminate content, Paris said, and an increasingly volatile information sphere has emerged as a result. Paris contends that mitigating the harm will require a deliberate reconfiguration of these technologies, one that addresses the power the public has been given to use them.

“The current problems do not emanate from the existence of the new technologies but, rather, how these technologies are built and deployed by a few, then used and interpreted by the wider public. At the core, technology is nothing without people; thus, it can be changed,” Paris said.

Examples Paris analyzed for the study, she said, “suggest that manifestations of audiovisual manipulation for false personation both strengthen and are shaped by structural power — Deepfakes were used by BJP in India to target specific language demographics, and cheapfakes were used in Australia, India, and the Philippines to silence, coerce, and harass a young woman, a woman journalist, and a closeted gay economic minister. In the U.S., effects experts and technology companies can make enormous amounts of money from producing applications that can produce viral video filters, deepfakes, and holograms for entertainment or enjoyment; amateurs can use rudimentary deepfake technology to ‘pornify’ their classmates, coworkers, and anyone they choose.


“These examples show how stakeholders situated differently within existing social structures participate in the politics of evidence online—who gets to make these decisions and enjoy the benefits that flow from these decisions and who is left out of these discussions and exploited.”

Paris analyzed more than 200 examples of audiovisual manipulation, drawn from five years of data (2016 to 2021), that disproportionately affected women, people of color, and lesbian, gay, bisexual, transgender, queer and/or questioning, intersex, and asexual and/or ally (LGBTQIA) individuals, and she collected and annotated the examples along their social, political, evidentiary, forensic, and aesthetic dimensions.

The study includes a chart that categorizes the methods used to produce and disseminate audiovisual content featuring false personation as well as the harms that resulted.

Her findings also reveal that the images that are the most technically challenging to make, such as deepfakes, cause the least harm, because the extensive technical resources and sophisticated machine learning techniques they require limit who can produce them. Conversely, the most harmful instances of audiovisual false personation she found in the study were not technologically sophisticated.

The period in which Paris collected data was a critical time in the history of audiovisual manipulation, she said, because it was when the public first gained access to the technologies used to create deepfakes. Prior to 2017, Paris said, only major motion picture studios had begun to generate realistic images from existing video, using data-driven computer graphics systems.

However, Paris said, this all changed beginning in 2017 when “consumer-grade and sometimes free, image manipulation software using machine learning gained public attention as pornographic videos with the faces of famous women grafted onto pornographic actors’ bodies appeared on Reddit. Then, an application (app) for creating deep nudes of anyone’s picture was developed and made widely accessible.”


Paris said when the app developers realized the harm that could result from the public’s access to these technologies, they shut down the app. However, “just one year later, manipulated nude images, many featuring teenaged girls, generated by the app mysteriously appeared on encrypted Telegram conversations.”

Analyzing the contextual elements of these videos around harm and expertise demonstrates how the politics of evidence (who gets to define the terms of acceptable use and official policy, whose creative play is painted as harmless and who that affects, and who has the time, know-how, and resources to refute or redress harms) are shaped by the already powerful and negatively impact those who are silenced, coerced, and made most vulnerable within systems of structural inequality, Paris said.

While laws that seek to protect victims of audiovisual manipulation have been created, Paris noted, “regulatory contexts vary by country and are difficult to enforce across geopolitical borders. Furthermore, relying on corporate benevolence is a slippery slope, as the primary value of these companies is the economic bottom line, not the social good or the public interest.”

Discover more about the Library and Information Science Department at the Rutgers School of Communication and Information on the website.

