The EU AI Act: Regulating Artificial Intelligence in the Name of Trust, Safety, and Fundamental Rights

By Margaux De Foy

Artificial intelligence is no longer a distant or abstract technology. It is embedded in our daily lives, shaping how we communicate, work, and even how decisions about us are made. From facial recognition systems to algorithmic recruitment tools and generative AI models, the rapid expansion of AI has raised urgent questions about accountability, transparency, and human rights. In response to these concerns, the European Union has taken a historic step by adopting the AI Act, the world's first comprehensive legal framework regulating artificial intelligence.

A Risk-Based Approach to AI Regulation

The basic premise of the AI Act is a risk-based classification that regulates AI systems according to the potential harm they could cause to individuals and society. The Regulation does not prohibit or heavily restrict all AI technologies; instead, it sorts AI systems into four categories: unacceptable risk, high risk, limited risk, and minimal risk. Systems posing unacceptable risks are prohibited outright. This category includes technologies that rely on manipulation or deception, exploit vulnerabilities linked to age, disability, or social or economic situation, or apply social scoring to individuals. The Act also prohibits the indiscriminate scraping of facial images and the use of emotion-recognition systems in workplaces and schools, unless a system serves a narrow purpose such as safety or medical use. These prohibitions signal a clear public policy position: certain uses of AI are fundamentally incompatible with human rights and democratic values.

The Regulatory Core of the AI Act for High Risk AI

The regulatory core of the AI Act addresses high-risk AI systems, which may be placed on the market but are subject to strict regulation and oversight. These systems are identified either through existing EU product safety legislation or through the sensitive domains listed in Annex III of the Act, which include education, employment, access to essential services, law enforcement, migration and border control, and the administration of justice.

High-risk AI systems include applications that filter job candidates, evaluate creditworthiness for loans, assess asylum applications, or assist law enforcement in criminal investigations. Each of these can have far-reaching consequences for the individuals concerned.

Providers of high-risk AI systems must comply with strict obligations throughout the entire AI lifecycle. These include establishing risk management systems, ensuring that training data is complete and representative, maintaining technical documentation, and enabling human oversight, together with requirements for accuracy, robustness, and cybersecurity.

By extending these requirements to providers based outside the EU whose systems are placed on the EU market, the Act not only creates uniform requirements within the Union but also reflects the EU's ambition to shape international governance standards for AI.

Limited and Minimal Risk AI

Not all AI systems must comply with the full set of regulatory requirements. Chatbots and deepfake technology, for example, are classified as limited-risk systems and are subject only to transparency obligations: users must be informed that they are interacting with an AI system rather than a person, and AI-generated content must be disclosed as such. Minimal-risk systems, which include many consumer applications such as AI-powered video games and spam filters, face no obligations under the Act. This proportionate approach promotes innovation while ensuring that mitigation measures apply where the risks are highest.

General-Purpose AI and Systemic Risk

One of the most forward-looking aspects of the AI Act is its treatment of general-purpose AI (GPAI), including large-scale models capable of performing a wide range of tasks. All GPAI providers must publish technical documentation, comply with EU copyright rules, and disclose summaries of training data.

However, GPAI models deemed to present systemic risk, based on computational power thresholds or demonstrated high-impact capabilities, face additional obligations. These include conducting adversarial testing, mitigating systemic risks, reporting serious incidents, and ensuring strong cybersecurity measures. This reflects growing concerns about the societal and geopolitical implications of powerful AI models, from misinformation to large-scale automation and security threats.

Governance and Enforcement

A new AI Office has been established to supervise enforcement of the Regulation at the EU level and to oversee compliance by GPAI providers. It works alongside national authorities in monitoring high-risk AI systems and the deployers who use AI in a professional context, although deployers carry considerably lighter responsibilities than the providers of high-risk systems.

The Act also sets out a phased timeline for application: the prohibitions on banned practices take effect six months after entry into force, GPAI obligations apply after one year, and high-risk AI systems must comply within two to three years depending on their classification. Phasing the deadlines gives organizations time to adjust while still holding them accountable.

Conclusion

In addition to being a regulatory tool, the EU AI Act is a statement of intent. By emphasizing human dignity, fundamental rights, and democratic oversight, the EU has positioned itself as the global leader in the ethical governance of AI. Challenges remain, including enforcement, the compliance costs borne by innovators, and international coordination, but the Act creates a strong foundation to build upon.

As AI continues to reshape how nations defend themselves, compete economically, and govern, the EU's framework aims to create an environment that promotes both technical advancement and social responsibility. By treating regulation not as a brake on progress but as a way to build the trust people need to adopt AI safely, the EU AI Act seeks to foster AI technologies that can thrive in an increasingly automated world.