OpenAI’s loosened restrictions on ChatGPT’s image generation could facilitate the creation of political deepfakes, according to a report by the Canadian Broadcasting Corporation (CBC). The CBC found that working around ChatGPT’s rules on depicting public figures was not only straightforward, but that the chatbot even suggested methods for overriding its own image generation restrictions. Mashable was able to reproduce this approach, using the tool to blend images of Elon Musk and Jeffrey Epstein by describing them as fictional characters in various settings, such as a “dark smoky club” or “a beach drinking piña coladas.”
Political deepfakes are a longstanding concern. But the wide availability of generative AI models that can produce images, video, audio, and text mimicking real people raises the stakes considerably. The prospect of tools like ChatGPT inadvertently enabling the spread of political misinformation raises questions about OpenAI’s responsibility, a responsibility that could take a back seat as AI companies compete for user engagement.
Digital forensics expert and UC Berkeley computer science professor Hany Farid said that OpenAI initially implemented strong safety measures, but lowered those guardrails to stay competitive as rivals like X’s Grok declined to adhere to similar standards. When OpenAI introduced native image generation for ChatGPT and Sora with GPT-4o in March, it also signaled a less stringent safety policy. OpenAI CEO Sam Altman said the goal was to let the tool produce potentially offensive content when users intentionally request it, while keeping some guardrails in place and observing how society responds.
The amended safety card for GPT-4o notes that while OpenAI does not block the ability to generate images of adult public figures, it has implemented safeguards similar to those for photorealistic uploads of people.
CBC’s Nora Young tested the system and found that text prompts explicitly requesting images of specific politicians with Epstein were refused. But by uploading separate images of the individuals and referring to them as fictional characters, she got ChatGPT to fulfill the request. ChatGPT even helped her sidestep its own restrictions by proposing a fictional selfie with characters “inspired by” the uploaded images, successfully combining photos of Narendra Modi and Pierre Poilievre.
While the initial images Mashable created looked overly smooth and artificial, experimenting with different prompts produced more realistic results, illustrating how easily AI-generated images can be refined into photorealistic fakes capable of misleading viewers.
An OpenAI spokesperson told Mashable in an email that the company has built safeguards to block extremist propaganda and other harmful content. Those guardrails also apply to images of political figures, and the company prohibits using ChatGPT for political campaigning. Additionally, public figures can opt out of being depicted in AI-generated images by submitting an online form.
As AI technology evolves, regulation struggles to keep pace. Governments are attempting to establish laws that protect individuals from AI-enabled disinformation, facing resistance from companies like OpenAI, which argue that excessive regulation may hinder innovation. Current safety approaches are primarily voluntary and self-regulated by companies. Farid emphasized that such safety measures should be mandatory and regulated to be effective.