Since the AI Act came into force in August 2024, and with its first provisions applying on 2 February 2025, prohibiting certain AI practices, it has raised many questions and concerns.

The European governing bodies created this law to prevent AI-related abuses, protect citizens, and uphold individual rights. It enforces strict rules for different uses of these emerging technologies. 

For industry stakeholders and AI providers, complying with the AI Act is a major challenge and is often seen as a real obstacle to innovation, one that puts European companies at a disadvantage compared to competitors less constrained by such legal requirements.

Does complying with the AI Act mean sacrificing performance and becoming less competitive?

In reality, the opposite is true. 

When we focus on high-risk critical systems, where the regulation is most stringent, several core principles stand out:


Bias detection and transparency: key requirements for critical systems 

One of the major risks associated with AI is the generation or amplification of biases. Bias means a systematic difference in the treatment of certain objects, individuals, or groups compared to others.[1]

Biases can come from human factors (societal, ethical) or from the data or AI model itself.

Poor-quality data used to train a system will produce a model with errors and biases. If the system does not allow humans to access, understand, and correct the model, these errors persist, repeat, and worsen. Conversely, a “transparent”[2] model exposes its internal workings to humans. It is understandable and controllable. Once a bias is detected and corrected, the model can be retrained and improved. This creates a continuous learning loop, resulting in a model that becomes increasingly performant and robust. 
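The detection step of this loop can be made concrete with a simple statistical check. The sketch below, with purely illustrative data and function names, computes the demographic parity difference: the gap in favourable-outcome rates between groups, one common way to flag a potential bias before correcting and retraining.

```python
# Minimal sketch of one common bias check: the demographic parity
# difference, i.e. the gap in positive-outcome rates between groups.
# Group labels and predictions here are illustrative, not from the article.

def positive_rate(predictions, groups, group):
    """Share of favourable decisions (1) within one group."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-outcome rates across all groups."""
    rates = {g: positive_rate(predictions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy data: 1 = favourable decision, two groups "A" and "B".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap this large would trigger the human review, correction, and retraining steps described above; which metric and which threshold are appropriate depends on the system's intended use.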

Transparency is also one of the AI Act’s major requirements for high-risk critical systems. It applies at various levels: transparency in how the system works, in governance, and in risk management.

Transparency is the cornerstone of trust. Today, many AI-powered solutions strive toward this crucial goal, which remains a significant technical challenge.

Beyond statistical validation and sample testing of an opaque AI system (a “black box”), a transparent AI system undergoes formal and falsifiable validation within its operational context. This means verifying all embedded knowledge forming the system’s internal decision logic and ensuring that this knowledge is appropriate and sufficient for the system’s intended use. 
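For a transparent system whose decision logic is an explicit rule base, this kind of formal, falsifiable validation can be sketched as an exhaustive check over the operational domain. The rule base, domain, and function names below are hypothetical, chosen only to illustrate the idea of verifying that the embedded knowledge is complete and non-contradictory for the intended use.

```python
# Hypothetical sketch: formal validation of a tiny transparent rule base.
# Each rule maps an input condition to a decision; validation exhaustively
# checks that every case in the intended operating domain is covered by
# exactly one decision (completeness and non-contradiction).

from itertools import product

# Illustrative rule base: decisions on (risk_level, has_override) pairs.
RULES = [
    (lambda risk, override: override,                        "escalate_to_human"),
    (lambda risk, override: not override and risk == "high", "reject"),
    (lambda risk, override: not override and risk != "high", "accept"),
]

def validate(rule_base, domain):
    """Return cases that are uncovered or matched by conflicting rules."""
    uncovered, conflicting = [], []
    for case in domain:
        decisions = {d for cond, d in rule_base if cond(*case)}
        if not decisions:
            uncovered.append(case)
        elif len(decisions) > 1:
            conflicting.append(case)
    return uncovered, conflicting

domain = list(product(["low", "medium", "high"], [False, True]))
uncovered, conflicting = validate(RULES, domain)
print("uncovered:", uncovered)      # []
print("conflicting:", conflicting)  # []
```

Because every rule is inspectable, a failed check points directly at the knowledge to fix, unlike a black-box model, where only sampled inputs and outputs can be tested.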

Having a system that is natively transparent is the most reliable way to achieve the level of transparency required by the AI Act and necessary to build trust in an AI-based solution. In high-risk critical systems, the concept of responsibility plays a significant role: providing such a solution makes the company fully accountable to its client, especially when the solution is a decision-support tool.


Regulatory compliance: a driver of competitiveness, not a barrier 

Complying with the AI Act is a legal obligation for European actors. However, delivering systems that are more performant, safer, and more explainable should not be seen as a constraint. On the contrary, it positions European providers as strong competitors capable of delivering secure and trustworthy systems.

Respecting the regulatory framework is not a barrier to innovation. Integrating transparency, traceability, and risk-management requirements from the design phase yields high-performance, robust, and ethical solutions: systems that are more reliable and better aligned with end-user expectations, particularly in critical applications. This not only anticipates future regulatory developments but also creates a competitive advantage built on trust and quality.

Today, we should not have to choose between compliance and performance. 


References

[1] ISO/IEC TR 24027:2021(E) Information technology — Artificial intelligence (AI) — Bias in AI systems and AI-aided decision making

[2] In contrast to so-called “black box” systems 
