OpenAI’s new social app is filled with terrifying Sam Altman deepfakes

A video on OpenAI’s new TikTok-like social media app, Sora, depicts a never-ending factory farm of pink pigs grunting and snorting in their pens. Each pig is equipped with a feeding trough and a smartphone screen playing a feed of vertical videos. A terrifyingly realistic Sam Altman stares directly at the camera, as though making eye contact with the viewer. The AI-generated Altman asks, “Are my piggies enjoying their slop?” This is what it is like to use the Sora app less than twenty-four hours after it launched in an invite-only early access period.

In the next video on Sora’s feed, Altman appears again. This time, he is standing in a field of Pokémon, where creatures like Pikachu and Bulbasaur are frolicking through the grass. The OpenAI CEO looks at the camera and says, “I hope Nintendo doesn’t sue us.” Then, there are many more fantastical yet realistic scenes, which often feature Altman himself. He serves Pikachu and Eric Cartman drinks at Starbucks. He screams at a customer from behind the counter at a McDonald’s. He steals NVIDIA GPUs from a Target and runs away, only to get caught and beg the police not to take his precious technology.

People on Sora who generate videos of Altman are especially amused by how blatantly OpenAI appears to be violating copyright law. Sora will reportedly require copyright holders to opt out of having their work used, reversing the typical opt-in approach, in which creators must explicitly agree before their content can be used. The legality of this approach is debatable.

In one video, an AI-generated Altman states, “This content may violate our guardrails concerning third-party likeness,” echoing a notice that appears after submitting some prompts. Then, he bursts into hysterical laughter as though he knows what he is saying is nonsense. The app is filled with videos of Pikachu doing ASMR, Naruto ordering Krabby Patties, and Mario smoking weed.

This would not be such a problem if Sora 2 were not so impressive, especially when compared with the more mind-numbing content on the Meta AI app and its new social feed.

OpenAI fine-tuned its video generator to adequately portray the laws of physics, which makes for more realistic outputs. But the more realistic these videos become, the easier it will be for this synthetically created content to proliferate across the web, where it can become a vector for disinformation, bullying, and other nefarious uses.

Aside from its algorithmic feed and profiles, Sora’s defining feature is that it is basically a deepfake generator. That is how we got so many videos of Altman. In the app, you can create what OpenAI calls a cameo of yourself by uploading biometric data. When you first join the app, you are immediately prompted to create your optional cameo through a quick process where you record yourself reading off some numbers, then turning your head from side to side.

Each Sora user can control who is allowed to generate videos using their cameo. The setting has four options: “only me,” “people I approve,” “mutuals,” and “everyone.” Altman has made his cameo available to everyone, which is why the Sora feed has become flooded with videos of Pikachu and SpongeBob begging Altman to stop training AI on them.

This has to be a deliberate move on Altman’s part, perhaps as a way of showing that he does not think his product is dangerous. But users are already taking advantage of Altman’s cameo to question the ethics of the app itself.

After watching enough videos of Sam Altman, one journalist decided to test the cameo feature. It is generally a bad idea to upload your biometric data to a social app, or any app for that matter. But they defied their best instincts for the sake of journalism and a bit of morbid curiosity.

Their first attempt at making a cameo was unsuccessful, and a pop-up said the upload violated app guidelines. After trying again, they realized the problem was a tank top; their shoulders were perhaps too risqué for the app’s liking. It is a reasonable safety feature designed to prevent inappropriate content, though they were fully clothed. After changing into a t-shirt, they created the cameo.

For a first deepfake, they decided to create a video of something they would never do in real life. They asked Sora to create a video in which they profess their undying love for the New York Mets. That prompt got rejected, probably for naming a specific franchise, so they instead asked Sora to make a video of them talking about baseball.

The resulting deepfake said, “I grew up in Philadelphia, so the Phillies are basically the soundtrack of my summers,” speaking in an unfamiliar voice but in a bedroom that looked exactly like theirs. They never told Sora they are a Phillies fan. But the app can use your IP address and your ChatGPT history to tailor its responses, so it presumably made an educated guess from the fact that the cameo was recorded in Philadelphia.

OpenAI already has a safety problem. The company is facing concerns that ChatGPT is contributing to mental health crises, and it is facing a lawsuit from a family who alleges that ChatGPT gave their deceased son instructions on how to kill himself. In its launch post for Sora, OpenAI emphasizes its supposed commitment to safety, highlighting parental controls and user control over cameos, as if it were not irresponsible to hand people a free, user-friendly tool for creating extremely realistic deepfakes. When you scroll through the Sora feed, you occasionally see a screen that asks, “How does using Sora impact your mood?” This is how OpenAI is embracing safety.

Already, users are finding ways around Sora’s guardrails, something that is inevitable for any AI product. The app does not allow you to generate videos of real people without their permission, but when it comes to dead historical figures, Sora is a bit looser with its rules. No one would believe that a video of Abraham Lincoln riding in a self-driving car is real, given that it would be impossible without a time machine. But then you see a realistic-looking John F. Kennedy say, “Ask not what your country can do for you, but how much money your country owes you.” It is harmless in a vacuum, but it is a harbinger of what is to come.

Political deepfakes are not new. Even former President Donald Trump posts deepfakes on his social media. But when Sora opens to the public, these tools will be at all of our fingertips, and we will be destined for disaster.