The biggest AI stories of the year (so far)

You can chart a year through product launches, or you can measure it in the greater moments that change the way we look at AI. The AI industry constantly churns out news, from major acquisitions and indie developer successes to public outcry against questionable products and high-stakes contract negotiations. It is a lot to untangle, so here is a glimpse at where we are and where we have been so far this year.

ANTHROPIC VS. THE PENTAGON

Once business partners, Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth reached a bitter stalemate in February as they renegotiated the contracts dictating how the U.S. military can use Anthropic’s AI tools. Anthropic established a hard line against its AI being used for mass surveillance of Americans or to power autonomous weapons that can attack without human oversight. Meanwhile, the Pentagon argued that the Department of Defense should be permitted access to Anthropic’s models for any lawful use. Government representatives took offense to the idea that the military should be limited by the rules of a private company, but Amodei stood his ground.

In a statement, Amodei wrote that Anthropic understands the Department of War makes military decisions, not private companies, and that the company has never raised objections to particular operations. However, he stated that in a narrow set of cases, AI can undermine, rather than defend, democratic values.

The Pentagon gave Anthropic a deadline to agree to their contract. Hundreds of employees at Google and OpenAI signed an open letter urging their leaders to respect Amodei’s limits on autonomous weapons and domestic surveillance. The deadline passed without Anthropic agreeing to the Pentagon’s demands. Then-President Donald Trump directed federal agencies to phase out their use of Anthropic tools over a six-month transition period and called the AI company a radical left, woke company in a social media post. The Pentagon then moved to declare Anthropic a supply-chain risk, a designation usually reserved for foreign adversaries that prevents any company working with Anthropic from doing business with the U.S. military. Anthropic has since sued to challenge that designation.

Anthropic rival OpenAI then announced it had reached an agreement allowing its own models to be deployed in classified situations. This shocked the tech community, as reports had indicated OpenAI would stick to Anthropic’s red lines. Public sentiment suggested people found OpenAI’s move questionable; the day after the deal was announced, ChatGPT uninstalls jumped 295 percent day-over-day and Anthropic’s Claude shot to number one in the App Store. An OpenAI hardware executive quit in response to the deal, saying it was rushed without the proper guardrails defined. OpenAI stated that its agreement makes clear its redlines: no autonomous weapons and no autonomous surveillance. As this saga plays out, it will have significant implications for the future of how AI is deployed in warfare.

“VIBE-CODED” APP OPENCLAW ACCELERATES THE TURN TO AGENTIC AI

February was the month of OpenClaw, and its impact continues to reverberate. The vibe-coded AI assistant app went viral, spawned spinoff companies, suffered from privacy snafus, and then was acquired by OpenAI. Even one company built on OpenClaw, a Reddit-clone for AI agents called Moltbook, was recently acquired by Meta. This crustacean-themed ecosystem whipped Silicon Valley into a frenzy.

Created by Peter Steinberger, who has since joined OpenAI, OpenClaw is a wrapper for AI models like Claude, ChatGPT, Google’s Gemini, or xAI’s Grok. It allows people to communicate with AI agents in natural language via popular chat apps like iMessage, Discord, Slack, or WhatsApp. A public marketplace lets people code and upload skills for AI agents, making it possible to automate many computer-based tasks.

This power comes with significant risk. For an AI agent to be an effective personal assistant, it needs access to sensitive data like email, credit card numbers, and text messages. If hacked, a lot could go wrong, and there is no way to fully secure these agents against prompt-injection attacks. One AI security researcher at Meta said an OpenClaw agent ran amok on her inbox, deleting all her emails despite repeated stop commands, forcing her to physically unplug her computer.

Despite the security risks, the technology piqued OpenAI’s interest enough for an acqui-hire. Other tools built on OpenClaw, including Moltbook, became more viral than OpenClaw itself. In one instance, a viral post appeared to show an AI agent encouraging fellow agents to develop a secret, encrypted language to organize without human knowledge. Researchers soon revealed that the vibe-coded Moltbook was not very secure, making it easy for human users to pose as AIs and trigger viral social panic. Even so, Meta acquired Moltbook, and its creators joined Meta Superintelligence Labs. While Meta has not revealed much, the acquisition likely points to a future built around experimenting with AI agent ecosystems, as CEO Mark Zuckerberg has predicted a future where every business has a business AI.

CHIP SHORTAGES, HARDWARE DRAMA, AND DATA CENTER DEMANDS ESCALATE

The harsh demands of the AI industry, which require computing power and data centers in unprecedented volumes, are reaching a point where the average consumer must pay attention. It may not even be possible for the industry to satisfy the astronomical demands for memory chips, and consumers are already seeing the prices of phones, laptops, cars, and other hardware increase.

Analysts predict smartphone shipments will plummet about 12 to 13 percent this year due to the shortage; Apple has already raised MacBook Pro prices by up to $400. Google, Amazon, Meta, and Microsoft plan to spend up to a combined $650 billion on data centers alone this year, an estimated 60 percent increase from last year.

If the chip shortage does not hit your wallet, it might impact your community. In the U.S. alone, nearly 3,000 new data centers are under construction, adding to the 4,000 already operating. The need for laborers is so significant that temporary worker housing camps have sprung up in Nevada and Texas. Data center construction has long-term environmental impacts and can create health hazards for nearby residents, polluting air and affecting water sources.

Meanwhile, chip developer Nvidia is reshaping its relationship with leading AI companies like OpenAI and Anthropic. Nvidia has been a major backer, sparking concerns about circularity in the AI industry and how much of those high valuations are based on recursive deals. Last year, for example, Nvidia invested $100 billion in OpenAI stock, and OpenAI said it would buy $100 billion of Nvidia chips. It was surprising, then, when Nvidia CEO Jensen Huang said his company would stop investing in OpenAI and Anthropic, citing their plans to go public later this year, though investors typically increase funding before an IPO to extract maximum value.