The European Union’s Artificial Intelligence Act, known as the EU AI Act, has been described by the European Commission as the world’s first comprehensive AI law. Years in the making, it is progressively becoming a reality for the 450 million people living in the EU’s 27 member states.
The EU AI Act is more than just a European affair. It applies to local and foreign companies alike, and to both providers and deployers of AI systems: it covers, for example, the developer of a CV screening tool as well as the bank that purchases it. All of these parties now operate under a legal framework that governs their use of AI.
The EU AI Act exists to establish a uniform legal framework for AI across EU countries, ensuring the free movement of AI-based goods and services across borders without conflicting national restrictions. By regulating early, the EU also seeks to create a level playing field, foster trust, and open opportunities for emerging companies. The framework is not lenient, however: even though AI is still in the relatively early stages of widespread adoption, the EU AI Act sets high standards for what AI should and should not do in society.
According to European lawmakers, the main goal of the framework is to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection for health, safety, fundamental rights, democracy, the rule of law, and environmental protection. It also aims to guard against harmful effects of AI systems and support innovation.
Striking this balance, between innovation and harm prevention, and between AI adoption and environmental protection, is delicate. The EU AI Act manages it through a risk-based approach: banning “unacceptable risk” uses, tightly regulating “high-risk” applications, and imposing lighter obligations on “limited risk” scenarios.
The rollout of the EU AI Act began on August 1, 2024, but compliance deadlines are staggered. The first deadline arrived on February 2, 2025, when bans on prohibited AI uses, such as untargeted scraping of facial images from the internet or CCTV footage, came into force. Most provisions will apply by mid-2026.
Since August 2, 2025, the EU AI Act has applied to general-purpose AI models with systemic risk. These models, trained on vast amounts of data and capable of performing a wide range of tasks, can pose risks such as lowering the barriers to chemical or biological weapons development or the loss of control over autonomous systems. Ahead of the deadline, the EU published guidelines for providers of these models, including both European and non-European companies such as Anthropic, Google, Meta, and OpenAI. Unlike new entrants, existing players have until August 2, 2027, to comply.
The EU AI Act includes penalties designed to be effective, proportionate, and dissuasive. Violations of prohibited AI applications carry the highest fines—up to €35 million or 7% of annual global turnover, whichever is higher. Providers of general-purpose AI models can face fines of up to €15 million or 3% of annual turnover.
The voluntary general-purpose AI code of practice offers insight into how companies may engage with the framework before mandatory compliance. In July 2025, Meta announced it would not sign the voluntary code, while Google confirmed it would, despite reservations. Other signatories include Aleph Alpha, Amazon, Anthropic, Cohere, IBM, Microsoft, Mistral AI, and OpenAI.
Some tech companies have opposed these rules. Google’s president of global affairs expressed concerns that the AI Act and code could slow Europe’s AI development. Meta’s chief global affairs officer called the EU’s approach “overreach,” arguing that the code introduces legal uncertainties and measures beyond the AI Act’s scope. European CEOs, including Mistral AI’s Arthur Mensch, signed an open letter in July 2025 urging Brussels to pause key obligations for two years.
Despite the lobbying, the European Union rejected calls for a delay in early July 2025 and stuck to its implementation timeline; the August 2, 2025, deadline proceeded as planned. Updates will follow if anything changes.