As AI continues to streamline our daily lives and business interactions, it also poses daunting challenges, such as data privacy leaks, violations of intellectual property rights, and the creation of fake news. In an effort to grapple with the dark underbelly of AI, industry and government leaders are proposing and implementing new regulations with stricter compliance checks.

2024 proved to be a tremendously busy year for players in the digital regulation ecosystem, and 2025 seems set to usher in the adoption and implementation of new rules and regulations. However, many players are unsure how these will affect their business strategies and decisions.

Yann Dietrich leads the intellectual property, patents, and innovation team at Atos Eviden, a team that also handles legal questions surrounding AI. With more than 25 years of experience as Chief IP Counsel for major international corporations, he is deeply involved in international initiatives on AI.

As the first requirements of the AI Act take effect in Europe in February 2025, we spoke with him to discuss the challenges and trends in AI regulation.

Where do we stand?

How can companies prepare themselves for this?

And what best practices do Eviden and Atos follow in this area?

Here are his insights, along with practical guidance to help ensure your business complies with the AI Act.


The AI Act is coming into force in Europe.

Where do international regulatory initiatives on AI currently stand?

We are witnessing a wave of regulatory initiatives across the globe: the AI Act in Europe, the AI Executive Order in the US, rules on generative AI in China, and the AI Regulation Bill in the UK, to name a few. These regulations are largely built on the same core principles, although recent political developments, such as the influence of figures like Donald Trump and Elon Musk, have signaled a shift in the US approach, and the AI Executive Order was rescinded in the early days of the Trump presidency.

Overall, the emergence of these regulations is a positive development.

While AI has led to immensely beneficial scientific advancements — like reducing the time required to develop new medications — and while most data scientists are attuned to ethical considerations, there remain bad actors and risks of misuse that need to be mitigated.

That said, we must also avoid getting swept up in unfounded fears. I recall an international meeting where most participants advocated for mandatory “kill switches” simply because Netflix had recently aired a Terminator series. Let’s be clear: Skynet is science fiction! Such fears can lead some stakeholders to overregulate AI, forgetting that many existing regulations already cover the potential risks associated with it.

Nonetheless, national and international bodies are launching numerous initiatives. Europe is at the forefront with its AI Act, but other countries are expected to introduce their own legislation in the next 12–24 months.

As always, striking the right balance is crucial. Innovation and regulation should not be pitted against each other. We must avoid both extremes: laissez-faire and overregulation. Another key challenge is ensuring international coherence to prevent a proliferation of competing standards, which would be highly counterproductive, especially for European companies.

How should companies prepare for compliance?

Are there pitfalls to avoid?

The European AI Act, which came into effect on August 1, 2024, provides a clear roadmap: prohibited AI applications must be phased out by February 2025, and high-risk AI applications must achieve compliance by August 2026. This roadmap demands immediate action. Compliance is not just a regulatory requirement but also a societal expectation to ensure AI is used safely and responsibly.

The scientific community itself was surprised by the scale and speed of advancements in generative AI. This led to widespread concern, exemplified by the open letter signed in March 2023 by hundreds of experts, including Apple co-founder Steve Wozniak and historian Yuval Noah Harari, calling for reflection on potential risks and the implementation of safety measures.

For businesses, addressing these expectations around safety and transparency is non-negotiable. In Europe, this means implementing rigorous AI governance to avoid prohibited applications (a rare but critical concern) and ensuring that high-risk AI applications are developed responsibly.

Interestingly, the high-risk category encompasses a wide range of use cases, some of which might seem trivial. Beyond well-known examples like dark patterns in content algorithms, any application involving significant human impact could fall under this category.

Consider tenant screening: AI must rely on appropriate data to select the best candidates while avoiding discriminatory biases, such as those linked to names or addresses. This requires careful scrutiny of data usage and decision-making processes.
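
As a minimal sketch of what that scrutiny can look like in practice (illustrative only, not a description of any particular product), one common test is the “four-fifths” disparate-impact ratio, which compares selection rates across groups that a proxy feature such as postcode might encode. The column names, toy data, and threshold below are assumptions:

```python
# Hypothetical illustration of a "four-fifths" disparate-impact check on
# screening outcomes. Column names, data, and the 0.8 threshold are
# assumptions for the sketch, not requirements taken from the AI Act.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group selection rate to the highest."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Toy screening results: 1 = shortlisted, 0 = rejected. "postcode_area"
# stands in for a feature that may act as a proxy for a protected attribute.
decisions = pd.DataFrame({
    "postcode_area": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "shortlisted":   [1,   1,   0,   1,   0,   0,   0,   0],
})

ratio = disparate_impact_ratio(decisions, "postcode_area", "shortlisted")
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print(f"Warning: possible proxy discrimination (ratio = {ratio:.2f})")
```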

Companies must act now, as deadlines are approaching quickly — February 2025 for prohibited AI and August 2026, barely 18 months away, for high-risk AI. Establishing and validating governance systems in such a short timeframe is challenging, but it is both a regulatory obligation and a means of earning public trust.


What best practices do Eviden and Atos follow in this area?

We are working on this from two perspectives:

On the one hand, we are supporting our clients in identifying and implementing regulatory compliance, safety, and responsible AI measures. We have launched a dedicated advisory service for this purpose.

On the other hand, we are practicing what we preach by applying responsible AI principles to the applications we develop, both for our clients and within our own solutions. This includes sector-specific offerings (finance, industry, energy, healthcare, public services) and operational solutions (e.g., security).

We have established a comprehensive AI governance framework encompassing both data and algorithms. Our approach includes the tools and platforms needed to perform the technical tests required under the AI Act. We aim to achieve compliance by design by integrating the required tests and documentation into the development process, including the automated generation of regulatory documents for authorities like the EU AI Office.
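
As a hypothetical sketch of what such a compliance-by-design gate might look like (the field names are invented for the example, and this is not Eviden’s actual tooling; Annex IV of the AI Act defines the real documentation requirements):

```python
# Hypothetical "compliance by design" release gate: shipping is blocked until
# every documentation field is filled in. Field names are invented for the
# sketch; Annex IV of the AI Act defines the real required content.
from dataclasses import dataclass, fields

@dataclass
class TechnicalDocumentation:
    intended_purpose: str
    training_data_description: str
    risk_assessment: str
    human_oversight_measures: str
    accuracy_metrics: str

def release_gate(doc: TechnicalDocumentation) -> None:
    missing = [f.name for f in fields(doc) if not getattr(doc, f.name).strip()]
    if missing:
        raise RuntimeError(f"Release blocked, documentation incomplete: {missing}")
    print("Documentation complete; release may proceed.")

try:
    release_gate(TechnicalDocumentation(
        intended_purpose="CV screening assistant",
        training_data_description="",  # left empty on purpose: the gate blocks
        risk_assessment="High-risk (employment, Annex III)",
        human_oversight_measures="Recruiter reviews every recommendation",
        accuracy_metrics="",
    ))
except RuntimeError as err:
    print(err)
```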

For us, compliance by design is more than just ticking a box: it’s about embedding governance throughout the design, development, and deployment phases. A key principle of our approach is ensuring humans remain central to decision-making. For high-risk use cases, AI should always support, but never replace, human judgment. For example, a triage application in healthcare must assist medical professionals but leave the final decision to doctors or nurses. This underscores a unique challenge with AI: unlike most products, whose compliance is validated independently, an algorithm’s risk level depends heavily on its specific use case.
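
A minimal sketch of that human-in-the-loop principle, with names and categories invented for illustration: the system records the AI’s suggestion for auditability, but only the clinician’s choice takes effect.

```python
# Illustrative human-in-the-loop pattern for triage: the model only proposes;
# the clinician's choice is the decision that takes effect. All names and
# categories are invented for the sketch, not taken from a real product.
from dataclasses import dataclass

@dataclass
class TriageSuggestion:
    patient_id: str
    suggested_priority: str  # e.g. "urgent" or "standard"
    confidence: float

def final_decision(suggestion: TriageSuggestion, clinician_choice: str) -> str:
    # Record the AI suggestion for auditability, but never let it override
    # the human decision.
    print(f"audit: patient={suggestion.patient_id} "
          f"ai={suggestion.suggested_priority} ({suggestion.confidence:.0%}) "
          f"clinician={clinician_choice}")
    return clinician_choice

suggestion = TriageSuggestion("p-001", "urgent", 0.87)
decision = final_decision(suggestion, clinician_choice="standard")  # human overrides
```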

This use-case dependence is why compliance in AI requires a precise analysis of each use case and its applicable regulations. For instance, we are global leaders in airport video surveillance, detecting abandoned luggage and suspicious behavior, and preventing incidents. Regulatory requirements vary widely between countries: some allow facial recognition and predictive risk analysis, while others do not. Adapting software to meet local regulations is essential.
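
One hypothetical way to implement that adaptation is a default-deny, per-jurisdiction feature policy, so the same analytics stack exposes different capabilities under different local rules. The jurisdictions and flags below are invented examples:

```python
# Hypothetical per-jurisdiction feature policy: capabilities such as facial
# recognition run only where explicitly allowed. Jurisdictions and flags are
# invented examples, not actual legal positions.
FEATURE_POLICY = {
    "country_x": {"facial_recognition": True,  "predictive_risk": True},
    "country_y": {"facial_recognition": False, "predictive_risk": False},
}

def is_enabled(jurisdiction: str, feature: str) -> bool:
    # Default-deny: unknown jurisdictions or features get no capability.
    return FEATURE_POLICY.get(jurisdiction, {}).get(feature, False)

assert is_enabled("country_x", "facial_recognition")
assert not is_enabled("country_y", "facial_recognition")
assert not is_enabled("country_z", "predictive_risk")  # unlisted: denied
```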

In emerging regulatory areas, collaboration with authorities is sometimes necessary. For example, we navigated these complexities during the Paris 2024 Olympics.

Lastly, there are still international conventions and state-specific laws in areas like military and national security, where the AI Act doesn’t apply.

The EU AI Act is both a constraint and a valuable framework. It helps companies ensure their AI applications — critical for productivity — are ethical, safe, responsible, and compliant with the law. This principle lies at the heart of Eviden’s approach.