
The AI Act's High-Risk Hurdles: Can Google and OpenAI Navigate Brussels' New Regulatory Maze?

As the EU AI Act begins its phased enforcement, companies like Google and OpenAI face stringent new obligations. This explainer dissects the 'high-risk' classification and its implications for global tech, and explains why Brussels, and you, should have questions about compliance in a rapidly evolving landscape.


Michèl Lambertè
Belgium · May 14, 2026
Technology

The digital landscape, much like the cobbled streets of Brussels, is perpetually under construction, adapting to new demands and unforeseen challenges. Yet, unlike our city's infrastructure, the pace of AI development often outstrips the legislative capacity to govern it. This dynamic tension is precisely what the European Union's Artificial Intelligence Act seeks to address, and as its enforcement phases commence, a critical concept emerges for global technology firms: the 'high-risk' AI system.

What is a 'High-Risk' AI System?

At its core, a 'high-risk' AI system, as defined by the EU AI Act, is not merely an advanced algorithm; it is an AI application deemed to pose significant potential harm to people's health, safety, or fundamental rights. The Act categorizes these systems based on their intended purpose, not just their technical capabilities. This distinction is crucial. An AI system might be technically sophisticated but low-risk if its application is benign, such as a simple recommendation engine for streaming movies. Conversely, a less complex AI could be high-risk if deployed in a critical domain, like medical diagnosis or credit scoring.

The Act explicitly lists several areas where AI systems are presumed high-risk. These include AI used in critical infrastructure, such as traffic management or water supply, which could endanger life and health. It also covers AI in education and vocational training, particularly those systems used for assessing learning outcomes or for access to educational institutions, where bias could perpetuate discrimination. Employment, worker management, and access to self-employment, including recruitment and selection processes, are also designated high-risk. Furthermore, AI systems used in law enforcement, migration, asylum, and border control management, as well as those in the administration of justice and democratic processes, fall under this stringent classification. This comprehensive scope reflects a deep concern for societal impact, a hallmark of European regulatory philosophy.

Why Should You Care?

For companies operating or aspiring to operate within the EU, understanding and complying with the high-risk classification is not merely an administrative burden; it is a prerequisite for market access and continued operation. The stakes are considerable. Non-compliance can lead to substantial fines, potentially reaching up to 35 million euros or 7% of a company's global annual turnover, whichever is higher. For tech giants like Google, Meta, or OpenAI, these figures represent a tangible threat to their bottom line and reputation.

Beyond financial penalties, the Act aims to foster trust in AI. As a Belgian, I observe a pervasive skepticism regarding technology that operates without clear accountability. This legislation is a direct response to that sentiment. For citizens, this means greater protection against algorithmic discrimination, unfair treatment, and potential safety hazards. For businesses, it means a clear, albeit demanding, framework for responsible innovation. It is an attempt to cultivate a predictable legal environment, which, paradoxically, can be a catalyst for innovation rather than a hindrance. As Bruegel, the Brussels-based economic think tank, often highlights, regulatory clarity can reduce uncertainty for investors and developers alike.

How Did It Develop?

The journey to the EU AI Act has been a protracted one, reflecting the complexity of legislating a rapidly evolving technology. Discussions began in earnest around 2018, following the European Commission's publication of its AI strategy. The initial proposal for the AI Act was unveiled in April 2021, marking a significant step towards a comprehensive regulatory framework. The subsequent years involved intense negotiations among the European Parliament, the Council of the European Union, and the Commission, a process known as the 'trilogue'.

These discussions were characterized by vigorous debate, particularly concerning the scope of the Act, the definition of AI, and the treatment of general-purpose AI models, such as those developed by OpenAI or Anthropic. The final text, provisionally agreed upon in December 2023 and formally adopted in early 2024, represents a compromise, but one that firmly establishes the EU as a global leader in AI regulation. It builds upon existing EU legal traditions, particularly those related to data protection, consumer safety, and fundamental rights, extending them into the algorithmic domain. This legislative journey underscores the EU's commitment to a human-centric approach to technology, distinguishing it from regulatory efforts in other major economies.

How Does It Work in Simple Terms?

Imagine you are a chocolatier in Bruges, renowned for your exquisite pralines. The EU has strict food safety regulations, ensuring your ingredients are traceable, your production methods hygienic, and your labeling transparent. The AI Act applies a similar logic to high-risk AI. Before you can sell your AI 'praline' in the EU, you must ensure it meets stringent 'safety' standards.

For high-risk AI systems, this means a multi-layered compliance process. First, developers must establish a robust risk management system, identifying and mitigating potential harms throughout the AI system's lifecycle. Second, data governance is paramount: the training data used must be of high quality, relevant, and free from biases that could lead to discriminatory outcomes. Third, technical documentation must be comprehensive, allowing authorities to assess compliance. Fourth, human oversight is mandated, ensuring that humans can effectively intervene and override automated decisions when necessary. Fifth, the system must achieve a high level of accuracy, robustness, and cybersecurity. Finally, a conformity assessment must be performed, in some cases involving a third-party audit, before the system can be placed on the market or put into service. This is akin to obtaining a CE mark for industrial products, signifying compliance with EU standards.
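The six obligations above amount to a checklist that must be complete before market entry. A minimal sketch, using field names of our own invention rather than the Act's legal terms, might look like this:

```python
from dataclasses import dataclass

@dataclass
class HighRiskComplianceFile:
    """Illustrative checklist mirroring the six obligations described
    in the text; the field names are this sketch's own shorthand."""
    risk_management_system: bool = False
    data_governance: bool = False
    technical_documentation: bool = False
    human_oversight: bool = False
    accuracy_robustness_cybersecurity: bool = False
    conformity_assessment: bool = False

    def missing(self) -> list[str]:
        """Names of obligations not yet satisfied."""
        return [name for name, done in vars(self).items() if not done]

    def ready_for_market(self) -> bool:
        """True only when every obligation is checked off."""
        return not self.missing()

# A dossier with only the first two steps completed is not yet ready.
dossier = HighRiskComplianceFile(risk_management_system=True,
                                 data_governance=True)
print(dossier.missing())
```

In reality, of course, each box hides months of engineering and legal work; the point is only that the obligations are conjunctive, so one unmet requirement blocks market access.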

Real-World Examples

Consider a few concrete scenarios where the high-risk classification will apply:

  1. Medical Diagnosis AI: An AI system developed by Siemens Healthineers that assists radiologists in detecting tumors from medical images would be high-risk. Its malfunction or bias could lead to misdiagnoses, directly impacting patient health. Such a system would require rigorous testing, clear documentation of its performance metrics, and a robust human oversight mechanism for clinicians.

  2. Credit Scoring Algorithms: A bank, perhaps KBC or BNP Paribas Fortis, using an AI system to assess loan applications. If this system disproportionately denies credit to certain demographic groups due to biases in its training data, it infringes upon fundamental rights. The bank would need to demonstrate that its AI is fair, transparent, and explainable, with clear recourse for applicants.

  3. Automated Recruitment Tools: A large multinational like Unilever or TotalEnergies deploying an AI tool to screen job applicants. If the AI inadvertently discriminates based on gender, age, or origin, it falls under the high-risk category. The company must ensure the system's fairness, provide clear explanations for its decisions, and allow for human review of outcomes.

  4. Public Safety Surveillance: AI systems used by law enforcement for predictive policing or facial recognition in public spaces. These applications are inherently high-risk due to their potential to infringe on privacy and fundamental freedoms. They face the strictest requirements, with some uses, like real-time remote biometric identification in public spaces, being outright prohibited except in very limited, clearly defined circumstances.

Common Misconceptions

One prevalent misconception is that the AI Act stifles innovation. Critics often argue that its stringent requirements will deter AI development in Europe. However, proponents, including many EU officials, contend that by establishing clear rules and fostering trust, the Act creates a stable environment for responsible innovation. "The EU's approach deserves more credit than it gets," remarked Thierry Breton, then EU Commissioner for the Internal Market, emphasizing that legal certainty can be a competitive advantage. Furthermore, the Act explicitly exempts AI systems used solely for research, development, or for personal, non-commercial purposes, providing space for experimentation.

Another misunderstanding is that all AI is treated equally. The tiered approach, distinguishing between unacceptable risk, high-risk, limited risk, and minimal risk AI, is central to the Act's proportionality. Not every chatbot or recommendation algorithm will face the same regulatory scrutiny as an AI system controlling a nuclear power plant. The focus is on applications with significant potential for harm.
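The tiered logic described above can be caricatured as a simple lookup from application domain to risk tier. The domain labels below are this sketch's own shorthand, not the Act's legal categories, which turn on detailed Annex definitions and exemptions:

```python
# Hypothetical mapping of application domains to the Act's four risk
# tiers. Real classification requires legal analysis of the system's
# intended purpose against the Act's annexes; this is only a caricature.
RISK_TIERS = {
    "social_scoring_by_public_authorities": "unacceptable",
    "credit_scoring": "high",
    "recruitment_screening": "high",
    "medical_diagnosis": "high",
    "customer_service_chatbot": "limited",   # transparency duties only
    "spam_filter": "minimal",
    "movie_recommendation": "minimal",
}

def risk_tier(domain: str) -> str:
    # Unlisted domains need a case-by-case legal assessment.
    return RISK_TIERS.get(domain, "requires_assessment")

print(risk_tier("credit_scoring"))
print(risk_tier("movie_recommendation"))
```

The asymmetry is the point: the compliance burden scales with the tier, so the movie recommender and the credit scorer face entirely different regimes even if they share the same underlying machine learning techniques.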

Finally, some believe the Act is purely theoretical, difficult to enforce in practice. However, the establishment of national supervisory authorities and the European Artificial Intelligence Board (EAIB) demonstrates a clear commitment to implementation. These bodies will be responsible for market surveillance, conformity assessments, and ensuring compliance, supported by the threat of substantial penalties.

What to Watch for Next

The AI Act entered into force in August 2024, with prohibitions on certain AI practices taking effect six months later, in February 2025. The rules on general-purpose AI models, including foundation models like those from OpenAI (ChatGPT) and Google (Gemini), applied twelve months after entry into force. Obligations for most high-risk AI systems become applicable 24 months after entry into force, extending to 36 months for high-risk AI embedded in products already covered by EU safety legislation, giving companies time to adapt. This staggered approach is pragmatic, allowing industry to prepare.
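The staggered deadlines are all offsets from a single entry-into-force date. A small sketch, assuming 1 August 2024 as that date (a fact about the Act, not stated in the text above) and the commonly cited 6/12/24/36-month offsets:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole months; safe here because all milestones fall on day 1."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

# Assumed entry into force of the AI Act: 1 August 2024.
entry = date(2024, 8, 1)
milestones = {
    "prohibitions": add_months(entry, 6),        # Feb 2025
    "gpai_rules": add_months(entry, 12),         # Aug 2025
    "most_high_risk": add_months(entry, 24),     # Aug 2026
    "product_embedded_high_risk": add_months(entry, 36),  # Aug 2027
}
for name, deadline in milestones.items():
    print(f"{name}: {deadline.isoformat()}")
```

For a compliance team, each of these dates marks the point after which the corresponding obligations are enforceable, so internal roadmaps are typically worked backwards from them.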

We must closely observe how the EAIB interprets and applies the high-risk criteria in practice. The devil, as always, will be in the details of implementation and the guidance provided to developers. The interaction between the AI Act and other existing EU legislation, such as the General Data Protection Regulation (GDPR) and sector-specific laws, will also be critical. Companies will need to navigate a complex web of regulations, ensuring coherence across their compliance efforts.

Furthermore, the global impact of the 'Brussels Effect' will be a key area of observation. Will the EU's regulatory model become a de facto global standard, compelling companies worldwide to align with its requirements to access the lucrative European market? This has been the case with GDPR, and it is a distinct possibility for the AI Act. As we move forward, the question is not just whether companies can comply, but how this comprehensive regulatory framework will shape the future trajectory of AI development, both within Europe and beyond. Brussels has questions, and so should you, about the delicate balance between innovation and regulation, a balance that will define our algorithmic future.

For further reading on the EU's digital policy, explore Reuters' technology section. To understand the broader implications of AI governance, MIT Technology Review offers insightful analysis. The practical challenges for businesses are often highlighted by TechCrunch.

For a deeper dive into the ethical considerations that often underpin such regulations, consider the article on Moscow's Quantum Gambit, which touches on the societal impact of advanced technologies.

