Chris Lehane is one of the best in the business at making bad news disappear. As Al Gore’s press secretary during the Clinton years and Airbnb’s chief crisis manager through regulatory challenges, Lehane knows how to spin a story. Now, two years into his role as OpenAI’s VP of global policy, he faces what might be his most impossible task yet. His job is to convince the world that OpenAI genuinely cares about democratizing artificial intelligence, even as the company increasingly behaves like every other tech giant that once claimed to be different.
I had twenty minutes with him on stage at the Elevate conference in Toronto this week. The goal was to get past the talking points and into the real contradictions undermining OpenAI’s carefully constructed image. It was not an easy or entirely successful endeavor. Lehane is genuinely good at his job. He is likable, sounds reasonable, and admits uncertainty. He even talks about waking up at 3 a.m. worried about whether any of this will actually benefit humanity.
But good intentions do not mean much when your company is subpoenaing critics, draining economically depressed towns of water and electricity, and bringing dead celebrities back to life to assert market dominance.
OpenAI’s Sora problem sits at the root of everything else. The video generation tool launched last week with copyrighted material seemingly baked right into it. That was a bold move for a company already being sued by the New York Times, the Toronto Star, and much of the publishing industry. From a business and marketing standpoint, it was also brilliant. The invite-only app soared to the top of the App Store as people created digital versions of themselves, OpenAI CEO Sam Altman, characters like Pikachu and Mario, and dead celebrities like Tupac Shakur.
When asked what drove the decision to launch Sora with these characters, Lehane gave the standard pitch. He described Sora as a general purpose technology like electricity or the printing press, democratizing creativity for people without talent or resources. He even called himself a creative zero who can now make videos.
What he avoided addressing is that OpenAI initially let rights holders opt out of having their work used to train Sora, which is not how copyright typically works. Then, after noticing people enjoyed using copyrighted images, the company evolved toward an opt-in model. That is not really iterating. That is testing how much you can get away with. And though the Motion Picture Association made noise last week about legal threats, OpenAI appears to have gotten away with quite a lot.
Naturally, this brings to mind the frustration of publishers who accuse OpenAI of training on their work without sharing the financial rewards. When pressed about publishers being cut out of the economics, Lehane invoked fair use, the American legal doctrine meant to balance creator rights with public access to knowledge. He called it the secret weapon of U.S. tech dominance.
Maybe. But I recently interviewed Al Gore, Lehane’s old boss, and realized anyone could simply ask ChatGPT about that conversation instead of reading my piece. As I told Lehane, that may be iterative, but it is also a replacement.
For the first time, Lehane dropped his spiel. He said we are all going to need to figure this out. He admitted it is glib and easy to say we need new economic revenue models, but he thinks we will find them. In short, we are making it up as we go.
Then there is the infrastructure question nobody wants to answer honestly. OpenAI is already operating a data center campus in Abilene, Texas, and recently broke ground on a massive data center in Lordstown, Ohio, in partnership with Oracle and SoftBank. Lehane has likened AI accessibility to the advent of electricity, noting that the communities that got it last are still catching up. Yet OpenAI’s Stargate project is targeting those same economically challenged places for facilities with massive appetites for water and electricity.
When asked during our conversation whether these communities will benefit or merely foot the bill, Lehane spoke of gigawatts and geopolitics. He noted that OpenAI needs to add roughly a gigawatt of capacity every week, and that China brought 450 gigawatts online last year along with 33 nuclear facilities. He argued that if democracies want democratic AI, they have to compete. The optimist in him says this will modernize our energy systems, painting a picture of a re-industrialized America with transformed power grids.
It was an inspiring vision. But it was not an answer to whether people in Lordstown and Abilene will watch their utility bills spike while OpenAI generates videos of John F. Kennedy and The Notorious B.I.G., particularly since video generation is the most energy-intensive AI application there is.
This brought me to my most uncomfortable example. Zelda Williams spent the day before our interview begging strangers on Instagram to stop sending her AI-generated videos of her late father, Robin Williams. She wrote that people are not making art, but are making disgusting, over-processed hotdogs out of the lives of human beings.
When I asked how the company reconciles this kind of intimate harm with its mission, Lehane answered by talking about processes, including responsible design, testing frameworks, and government partnerships. He said there is no playbook for this stuff.
Lehane did show vulnerability in moments, returning to that 3 a.m. anxiety, this time about democratization, geopolitics, and infrastructure, and acknowledging the enormous responsibilities that come with the work.
Whether or not those moments were designed for the audience, I believe him. I left Toronto thinking I had watched a master class in political messaging, with Lehane threading an impossible needle while dodging questions about company decisions he may not even agree with. Then Friday happened.
Nathan Calvin, a lawyer who works on AI policy at the nonprofit advocacy organization Encode AI, revealed that at the same time I was talking with Lehane in Toronto, OpenAI had sent a sheriff’s deputy to his house in Washington, D.C., during dinner to serve him a subpoena. They wanted his private messages with California legislators, college students, and former OpenAI employees.
Calvin is accusing OpenAI of intimidation tactics around a new piece of AI regulation, California’s SB 53. He says the company weaponized its legal battle with Elon Musk as a pretext to target critics, implying Encode was secretly funded by Musk. In fact, Calvin says he fought OpenAI’s opposition to the AI safety bill and that when he saw the company claim it worked to improve the bill, he literally laughed out loud. In a social media post, he went on to call Lehane specifically the master of the political dark arts.
In Washington, that might be a compliment. At a company like OpenAI whose mission is to build AI that benefits all of humanity, it sounds like an indictment.
What matters much more is that even OpenAI’s own people are conflicted about what their company is becoming. As my colleague reported last week, a number of current and former employees took to social media after Sora 2 was released to express their misgivings. Among them was Boaz Barak, an OpenAI researcher and Harvard professor, who wrote that Sora 2 is technically amazing but that it is premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.
On Friday, Josh Achiam, OpenAI’s head of mission alignment, tweeted something even more remarkable about Calvin’s accusation. Prefacing his comments by saying they were possibly a risk to his whole career, Achiam wrote that OpenAI cannot be doing things that make it into a frightening power instead of a virtuous one. He stated the company has a duty and a mission for all of humanity, and the bar to pursue that duty is remarkably high.
That is significant. An OpenAI executive publicly questioning whether his company is becoming a frightening power instead of a virtuous one is not the same as a competitor taking shots or a reporter asking questions. This is someone who chose to work at OpenAI, who believes in its mission, and who is now acknowledging a crisis of conscience despite the professional risk.
It is a crystallizing moment. You can be the best political operative in tech, a master at navigating impossible situations, and still end up working for a company whose actions increasingly conflict with its stated values. These contradictions may only intensify as OpenAI races toward artificial general intelligence.
It has me thinking that the real question is not whether Chris Lehane can sell OpenAI’s mission. It is whether others, critically including the people who work there, still believe it.