Several current and former OpenAI researchers are expressing their views on the company’s first social media app, Sora. This new app is a TikTok-style feed filled with AI-generated videos and a significant number of Sam Altman deepfakes. The researchers are sharing their concerns on the platform X, and they appear divided on how this launch aligns with OpenAI’s nonprofit mission to develop advanced AI that benefits humanity.
OpenAI pretraining researcher John Hallman said that AI-based feeds are scary and admitted he felt concern when he learned of the Sora 2 release. Still, he said the team had done the best job possible of designing a positive experience, and that it remains committed to ensuring AI helps rather than hurts humanity.
Another OpenAI researcher and Harvard professor, Boaz Barak, replied that he shares a similar mix of worry and excitement. He noted that while Sora 2 is technically amazing, it is premature for the company to congratulate itself on avoiding the pitfalls of other social media apps and deepfakes.
Former OpenAI researcher Rohan Pandey used the discussion to promote his new startup, Periodic Labs, which is composed of former AI lab researchers trying to build AI systems for scientific discovery. He invited those who do not want to build what he called the infinite AI TikTok slop machine to join his company. Many other researchers posted similar sentiments.
The launch of Sora highlights a core tension for OpenAI that resurfaces repeatedly. The company is the fastest-growing consumer tech company on Earth, but it is also a frontier AI lab with a lofty nonprofit charter. Some former employees argue that the consumer business can, in theory, serve the mission by funding AI research and distributing the technology widely, as seen with ChatGPT.
OpenAI CEO Sam Altman addressed why the company is allocating significant capital and computing power to an AI social media app. He stated that the company mostly needs capital to build AI that can do science, and that almost all of its research effort remains focused on AGI. He also said it is nice to show people cool new technology, make them smile, and hopefully make some money given the company's immense computational needs.
This situation raises a critical question: at what point does OpenAI’s consumer business overtake its nonprofit mission? When does the company say no to a money-making, platform-growing opportunity because it conflicts with its founding principles? This question is especially relevant as regulators scrutinize OpenAI’s transition to a for-profit structure, which it needs to raise additional capital and eventually go public.
California Attorney General Rob Bonta said last month that he is particularly concerned with ensuring that the stated safety mission of OpenAI as a nonprofit remains front and center during this restructuring. Skeptics have dismissed OpenAI’s mission as merely a branding tool to attract talent from Big Tech, but many company insiders insist it is central to why they joined.
For now, Sora's footprint is small, as the app is very new. However, its debut marks a significant expansion of OpenAI's consumer business and exposes the company to the same incentives that have plagued social media apps for decades. Unlike ChatGPT, which is optimized for usefulness, Sora is built for fun as a place to generate and share AI clips. Its feed resembles TikTok or Instagram Reels, platforms known for their addictive nature.
OpenAI insists it wants to avoid these pitfalls. The company claims that concerns about doomscrolling, addiction, isolation, and engagement-optimized feeds are top of mind. It explicitly states it is not optimizing for time spent on the feed and instead wants to maximize creation. The company plans to send reminders to users who have been scrolling too long and will primarily show them content from people they know.
This starting point is stronger than that of Meta’s recently released Vibes, another AI-powered short-form video feed that seemed to launch with fewer safeguards. As a former OpenAI policy leader pointed out, it is possible there will be both good and bad applications of AI-video feeds, similar to what has been observed with chatbots.
Still, as Sam Altman has long acknowledged, no one sets out to build an addictive app; the incentives of running a feed tend to push companies toward one anyway. OpenAI has already encountered issues with sycophancy in ChatGPT, which the company says was an unintentional result of its training techniques.
In a previous podcast, Altman discussed what he calls the big misalignment of social media. He stated that a big mistake of the social media era was that feed algorithms had a number of unintended negative consequences on society, even though they were doing what someone thought users wanted in the moment by keeping them on the site.
It is too soon to tell how well the Sora app aligns with its users or OpenAI’s mission. Users are already noticing engagement-optimizing techniques in the app, such as dynamic emojis that appear when a video is liked, which feel designed to provide a dopamine hit for engagement.
The real test will be how OpenAI evolves Sora. Given how much AI has already taken over regular social media feeds, it seems plausible that AI-native feeds could soon become mainstream. Whether OpenAI can grow Sora without repeating the mistakes of its predecessors remains to be seen.