The Rules Are Coming
The EU as a normative power
The EU wants to ensure that AI is safe, ethical, and respectful of fundamental rights by setting standards for how it is developed and used within its borders, and beyond
In May 2023, the European Union introduced the world’s first legislation to regulate artificial intelligence. The AI Act took effect on August 1, 2024, marking the first serious attempt to impose order on a sector many viewed as spiralling out of control. The new law has drawn strong reactions, including criticism, from some of the key players in technological innovation. Its goal is to foster trustworthy AI both within Europe and globally, ensuring that AI systems respect fundamental rights, adhere to safety and ethical standards, and address the risks posed by highly powerful and impactful AI models.
The European regulation on artificial intelligence applies only to areas within the scope of EU law and includes exemptions, such as for systems used solely for military and defense purposes, as well as for research activities.
A risk-based approach
The regulatory framework follows a ‘risk-based’ approach: the higher the potential harm to society, the stricter the regulations. AI systems are classified into four risk levels: unacceptable, high, limited, and minimal. Any AI system deemed a clear threat to people’s safety, livelihoods, or rights is prohibited; banned applications range from government social scoring to voice-assisted toys that encourage dangerous behavior.
The high-risk category includes technologies used in:
- critical infrastructure (e.g., transport), where failure could endanger lives or health;
- education and vocational training, where AI can determine a person’s access to opportunities (e.g., exam scoring);
- safety components of products (e.g., AI in robot-assisted surgery);
- employment, worker management, and access to self-employment (e.g., CV-screening software for recruitment);
- essential public and private services (e.g., credit scoring that can deny loans);
- law enforcement activities that could infringe on fundamental rights (e.g., assessing the reliability of judicial evidence);
- migration, asylum, and border control management (e.g., automated visa application reviews);
- the administration of justice and democratic processes (e.g., AI tools for researching judicial decisions).
All of these systems must meet strict requirements before being allowed on the market. These include thorough risk assessment and mitigation measures, high-quality datasets to minimize discriminatory outcomes, and activity logging to ensure traceability of results. Detailed documentation must also be provided, outlining the system’s purpose so that authorities can assess compliance. Operators must receive clear and adequate information, and human oversight must be ensured. Additionally, systems are required to maintain a high level of robustness, security, and accuracy.
All remote biometric identification systems are classified as high risk and subject to strict regulations. The use of remote facial recognition in publicly accessible spaces for law enforcement is generally prohibited.
Limited exceptions are strictly defined and regulated, such as when necessary to search for a missing child, prevent an imminent terrorist threat, or identify and prosecute the perpetrator or suspected perpetrator of a serious crime. These uses require authorization from a judicial or other independent body and must adhere to strict limits regarding time, geographical scope, and the databases consulted.
The limited-risk category concerns the lack of transparency in AI use. The AI Act introduces specific transparency requirements to ensure that people are informed when necessary, fostering trust. For instance, when interacting with AI systems such as chatbots, users must be made aware that they are communicating with a machine, allowing them to decide whether to continue or stop.
Providers must also ensure that AI-generated content is clearly identified as such. Furthermore, any AI-generated text published to inform the public on matters of public interest must be labeled as artificially produced. This requirement also applies to deepfake audio and video content.
The AI Act permits the unrestricted use of minimal-risk AI systems, such as AI-powered video games or spam filters.
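For readers who think in code, the Act’s tiered logic can be summarized in a short sketch. The enum, mapping, and names below are illustrative conventions of ours, not anything defined in the regulation; they simply restate the four tiers described above.

```python
from enum import Enum

class RiskLevel(Enum):
    """The AI Act's four risk tiers (labels are the Act's; this enum is ours)."""
    UNACCEPTABLE = "unacceptable"  # e.g., government social scoring
    HIGH = "high"                  # e.g., CV screening, credit scoring
    LIMITED = "limited"            # e.g., chatbots (transparency duties)
    MINIMAL = "minimal"            # e.g., spam filters, AI-powered video games

# Illustrative summary of how each tier is treated under the Act.
TREATMENT = {
    RiskLevel.UNACCEPTABLE: "prohibited outright",
    RiskLevel.HIGH: ("allowed on the market only after strict requirements are met: "
                     "risk assessment, quality datasets, traceability, "
                     "documentation, human oversight, robustness"),
    RiskLevel.LIMITED: "allowed, with transparency obligations toward users",
    RiskLevel.MINIMAL: "unrestricted use",
}

for level in RiskLevel:
    print(f"{level.value}: {TREATMENT[level]}")
```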
The AI Pact
The law, which took effect on August 1, will be fully applicable two years later, with some exceptions: bans will come into force after six months, governance rules and obligations for general-purpose AI models will apply after 12 months, and rules for AI systems integrated into regulated products will take effect after 36 months.
Violators of the prohibitions outlined in the regulation will face fines of up to €35 million or 7 percent of their annual global turnover, whichever is higher.
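In other words, the ceiling is the higher of a fixed amount and a turnover-based percentage. A minimal sketch of that arithmetic (function and variable names are ours):

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    # Ceiling for violating the Act's prohibitions: EUR 35 million
    # or 7% of annual global turnover, whichever is higher.
    return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

# Example: a firm with EUR 1 billion in global turnover faces a ceiling
# of EUR 70 million, since 7% of turnover exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```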
To ease the transition to the new regulatory framework, the European Commission has introduced the AI Pact, a voluntary initiative to support the implementation of the AI Act. The Pact encourages AI developers in Europe and beyond to pledge compliance with the key obligations of the Act ahead of its full enforcement. As of September 25, over 100 companies, including Amazon, Google, Microsoft, OpenAI, and Palantir, have signed the pledges. However, there is a notable absence of Apple, Meta, and X, which view European regulations as a hindrance to their business operations. These companies have even delayed the launch of some of their latest-generation applications in the European market, pending clearer guidance on the obligations they face. Specifically, Apple has withheld the release of its new AI-based features—Apple Intelligence—on the latest iPhone in the EU. Meanwhile, X has been in conflict with the European Commission for some time over the rules of the Digital Services Act (DSA), which governs web platforms.
The European Commission views artificial intelligence not just as something to be controlled or restricted, but as an area ripe for investment. In November 2023, the Commission, along with the European High Performance Computing Joint Undertaking (EuroHPC JU), committed to broadening access to the EU’s world-class supercomputing resources. This initiative, part of the EU AI Start-Up Initiative, aims to empower European AI start-ups, SMEs, and the wider AI community.
The European Union is a global leader in supercomputing. Through EuroHPC, three of the EU’s supercomputers (Leonardo, LUMI, and MareNostrum 5) rank among the best in the world.
The von der Leyen and Draghi Agendas
“Data and AI are the ingredients for innovation that can help us to find solutions to societal challenges, from health to farming, from security to manufacturing. In order to release that potential, we have to find our European way, balancing the flow and wide use of data while preserving high privacy, security, safety and ethical standards. We already achieved this with the General Data Protection Regulation, and many countries have followed our path. In my first 100 days in office, I will put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence.” This was Ursula von der Leyen’s pledge in the agenda for her first term as President of the European Commission, presented to the European Parliament in July 2019. “This [legislation] should also explore how we can harness big data for innovations that generate wealth for our societies and businesses. I will ensure that we prioritize investment in artificial intelligence, both through the Multiannual Financial Framework and by expanding public-private partnerships,” she emphasized.
The issue was also highlighted by former ECB President and former Italian Prime Minister Mario Draghi in his report, The Future of European Competitiveness.
“[A] critical issue for Europe will be integrating new technologies like artificial intelligence into our industrial sector. AI is improving incredibly fast, as the latest models released in the last few days show. We need to shift our orientation from trying to restrain this technology to understanding how to benefit from it. The cost of training frontier AI models is still high, which is a barrier for companies in Europe that don’t have the backing of US big tech firms. But, on the other hand, the EU has a unique opportunity to lower the cost of AI deployment by making available its unique network of high-performance computers,” Draghi said as he presented the paper to the European Parliament in September. “The report recommends increasing the capacity of this network and expanding access to start-ups and industry. Many industrial applications of AI do not require the latest advances in generative AI, so it’s well within our reach to accelerate AI uptake with a concerted effort to support companies. That said, the report recognizes that technological progress and social inclusion do not always go together. Major transitions are disruptive. Inclusion hinges on everyone having the skills they need to benefit from digitalization. So, while we want to match the United States on innovation, we must exceed the US on education and adult learning. We therefore propose a profound overhaul of Europe’s approach to skills, focused on using data to understand where skills gaps lie and investing in education at every stage. For Europe to succeed, investment in technology and in people cannot substitute for each other. They must go hand in hand,” Draghi added.
The “Brussels effect” is also making its mark on AI. On September 5, the Council of Europe opened for signature the Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law, the first legally binding international treaty designed to ensure that AI systems align with these principles. The treaty provides a comprehensive legal framework covering the entire AI lifecycle, aiming to foster innovation while managing the risks AI may pose to human rights, democracy, and the rule of law. To ensure its longevity, the framework is intentionally technology-neutral.
The Framework Convention was adopted by the Council of Europe’s Committee of Ministers on May 17, 2024. It was negotiated by the 46 member states of the Council of Europe, the European Union, all observer states (Canada, Japan, Mexico, the Holy See, and the United States), and six non-member states (Argentina, Australia, Costa Rica, Israel, Peru, and Uruguay). Representatives from the private sector, civil society, and academia also played an active role in shaping the convention.