A glimpse into the regulatory outlook

How can governments promote safe and trustworthy AI without impairing innovation and competitiveness? And should they opt for hard or soft regulation?

Governments are trying to address these questions at this very moment. While they are concerned about some of the risks associated with the development of AI technologies, they are equally concerned about the effect that regulation may have on their ability to innovate, their leadership in these technologies, and the competitiveness of their companies.

The recent AI Safety Summit and the coinciding announcement of the US Executive Order clearly favor a soft approach built on codes of business conduct, data cards and other such tools. In Europe, a similar discussion occurred around the AI Act, especially over some provisions originally added by the European Parliament to regulate foundation models (or LLMs). Recently, France, Italy and Germany[1] agreed on a very different approach for foundation models, relying instead on voluntary codes of business conduct and tools such as data and model cards.
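
To make the "model card" idea concrete, here is a minimal sketch of what such a card might contain, expressed as a Python data structure. This is purely illustrative: the field names are assumptions inspired by common industry practice, and no regulation or summit outcome mandates this exact schema.

    # Illustrative only: a minimal model card as a Python dataclass.
    # The fields below are assumptions based on common practice, not a
    # schema required by any regulation.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ModelCard:
        model_name: str               # hypothetical identifier
        version: str
        intended_use: str             # what the provider built the model for
        out_of_scope_uses: List[str]  # uses the provider explicitly discourages
        training_data_summary: str    # provenance and scope of training data
        known_limitations: List[str]  # documented risks and failure modes

    card = ModelCard(
        model_name="example-llm",     # hypothetical model
        version="1.0",
        intended_use="General-purpose text assistance",
        out_of_scope_uses=["credit scoring", "law enforcement"],
        training_data_summary="Public web text, filtered for quality",
        known_limitations=["may produce inaccurate statements"],
    )
    print(card.intended_use)

The point of such cards is transparency: they travel with the model, so that downstream users can judge whether a given use is appropriate.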

Let us take a look at how these developments are shaping the digital landscape.

  1. Artificial intelligence: From global principles to practical actions

The widespread adoption of AI calls for adapted regulation to uphold ethical standards. In its AI Principles[2], published in 2019, the OECD highlighted the potential risks of AI and provided guidelines to safeguard human rights and democratic values. Since those key principles were set out, new regulations around the world have followed the same direction, promoting trustworthy, safe and fair AI. More recently, the G7 agreed to continue working on such principles[3] through the Hiroshima process.

In 2021, the European Commission proposed the first comprehensive regulation on AI. It aimed to establish a common regulatory framework for AI with a risk-based approach: the objective was to regulate the applications of AI systems, not the technologies themselves. Companies placing a high-risk AI system[4] on the market will have to notify an EU body and demonstrate the system's compliance with the AI Act requirements.
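
As a rough illustration of this risk-based logic, the sketch below models the Act's widely described tiers in Python. The tier names reflect the Act's well-known categories; the classification function itself is a toy assumption, since the actual text enumerates covered uses in its annexes.

    # Illustrative sketch of a use-based, risk-tiered classification.
    # Tier names follow the AI Act's well-known categories; the mapping
    # logic is a simplification for illustration only.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited practice (banned)"
        HIGH = "high-risk (conformity assessment and notification)"
        LIMITED = "limited risk (transparency obligations)"
        MINIMAL = "minimal risk (no specific obligations)"

    def classify_use(use_case: str) -> RiskTier:
        # Hypothetical examples drawn from the high-risk categories in [4].
        high_risk_uses = {"credit scoring", "law enforcement", "road traffic safety"}
        if use_case in high_risk_uses:
            return RiskTier.HIGH
        return RiskTier.MINIMAL

    print(classify_use("credit scoring"))  # RiskTier.HIGH

Note how the obligation attaches to the use case rather than to the underlying technology, which is exactly the design choice the Commission made in 2021.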

Two opposing views collided: on one side, those who believe the regulation will trigger innovation; on the other, those who believe it will only slow innovation down or, worse, kill it completely[5]. The AI Act is expected to be put to a vote in the European Parliament on 26 April 2024. Its obligations will then become applicable gradually, beginning at the end of 2024 with the ban on prohibited AI practices.

On the other side of the Atlantic, in late October 2023, President Biden issued an Executive Order on safe, secure, and trustworthy Artificial Intelligence (AI). It sets the stage for improving US innovation and competitiveness while guaranteeing the security and protection of users and US citizens. The framework should be applicable two years after the issuance of the executive order, i.e., in late 2025.

Now that these regulations are just around the corner, the focus should be on practical ways to comply with them, for instance by exploring the available standards[6] and certifications[7].

  2. From data silos to data lakes

Companies have long treated data as a secret recipe that no one should access, for fear of losing their competitive advantage. In this traditional model, data has often been stored in isolated systems, limiting accessibility and hindering the flow of insights across an organization. But as data has come to be seen as the new oil and an essential resource in the digital world, regulators are pressing companies to free the flow of non-personal data.

The European Union has tried to tackle data silos in its digital strategy by building trust and lowering the barriers to sharing non-personal data.

The Data Governance Act (applicable since September 24, 2023) establishes a framework for the governance of data across the EU. It creates a network of data intermediaries that must be fully independent from data holders and data users, in order to facilitate and encourage data sharing.

The Data Act, proposed in 2022, aims to promote the free flow of data and to ensure that individuals and businesses have control over the data generated by the products and services they use. It will also give companies access to a larger amount of data to develop or improve products. The Data Act was adopted in November 2023 and will become applicable on 12 September 2025.

  3. DSA and DMA: Empowering users and fostering innovation

The Digital Markets Act, which promotes fair competition in digital markets by addressing the gatekeeper role of large online platforms, has been applicable since May 2, 2023. It modernizes the current ex-post competition law[8] by providing an ex-ante set of obligations preventing the creation of monopolistic situations. Among other obligations, gatekeepers will no longer be able to favor their own services. The European Commission has already announced the list of gatekeepers: Alphabet, Amazon, Apple, ByteDance, Meta and Microsoft.

As of February 17, 2024, the Digital Services Act will be applicable to all online intermediaries offering their services on the European market, such as internet service providers, cloud computing services and online platforms. Its goal is to protect users from illegal content and harmful practices on online platforms.

From 2024, the European Commission will be responsible for enforcing the DMA and DSA, investigating and sanctioning gatekeepers and platforms that fail to comply with these regulations.

Conclusion

Overall, 2024 is shaping up to be a busy year for digital regulation. The adoption and implementation of these regulations will be significant milestones in the development of the EU’s digital regulatory framework.

 


References and sources

  • [1] https://www.reuters.com/technology/germany-france-italy-reach-agreement-future-ai-regulation-2023-11-18/
  • [2] AI-Principles Overview – OECD.AI
  • [3] https://ec.europa.eu/newsroom/dae/redirection/document/99644
  • [4] Two categories of AI systems qualify as high-risk: 1) safety components of regulated products (e.g., machinery, medical devices, in vitro diagnostic medical devices); 2) stand-alone AI systems touching on certain fundamental rights (e.g., in relation to credit scoring, law enforcement, or safety components in the management and operation of road traffic).
  • [5] See Yann Dietrich, "Regulation: innovation killer or trigger?" – Atos
  • [6] ISO is working on AI-specific standards and has already published some: Artificial intelligence (AI) standards (iso.org)
  • [7] The European Commission has set up a working group that is currently developing AI certifications.
  • [8] I.e., the competition law that condemns cartels and abuse of a dominant position.