The AI Act enters into force


After a lengthy legislative process, the much-discussed text comes into force 20 days after its publication in the Official Journal of the European Union: EU Regulation 2024/1689, better known as the AI Act.

The AI Act is the first comprehensive law in the world on artificial intelligence. Its legislative process has been highly controversial and has clearly shaped public opinion, dividing commentators into two camps: regulators and innovators.

The legislative process in brief

The first legislative proposal was put forward by the European Commission in spring 2021: AI systems usable in different applications were analysed and classified according to the risk they pose to users. Under the proposal, AI technologies fell into four risk categories, ranging from minimal risk to unacceptable risk. In this article, we elaborate on the different risk levels.

Then, in January, came the presentation of the final version of the regulation, which had triggered reservations and hardened positions in France, Italy and Germany. The three countries wanted to rediscuss the measures on generative AI in favour of a lighter regulatory regime than the text proposed up to that point: they pushed for codes of conduct without an initial sanctioning regime for foundation models (AI systems trained on vast amounts of data to produce results that can also be adapted to specific tasks, e.g. GPT-4), rather than the prescriptive obligations provided for in the regulation. The aim was to protect promising European startups (e.g. the French Mistral AI and the German Aleph Alpha), potential competitors of American companies. The European Parliament, by contrast, was united in calling for strict rules for these models, considering it unacceptable to exclude the most powerful types of AI from the regulation and leave the entire regulatory burden on smaller players.

Several rounds of negotiations over the following month reversed the situation in February, leading Germany to support the text. Italy, the only one of the three countries that did not (yet) have a startup identified as a leader in the AI sector, later decided not to oppose it, perhaps with an eye to the G7 to be held a few months later in Rome, where AI would be one of the main topics. With France also accepting the text, all 27 member states unanimously approved the December political agreement.

Finally, after the plenary vote in March came the final European Parliament vote in April.

The effects

Six months after its entry into force, i.e. from February 2025, the ban on prohibited systems, those classified as Unacceptable risk, takes effect. Examples include technologies that engage in cognitive-behavioural manipulation of people or of specific vulnerable groups (such as voice-activated toys that encourage dangerous behaviour in children), social scoring (classifying people according to their behaviour, socio-economic status or personal characteristics) and, finally, real-time remote biometric identification systems such as facial recognition.

From April 2025, the codes of conduct of the AI Pact come into play, addressed to AI developers worldwide, who voluntarily commit to implementing the obligations of the regulation ahead of its full application, 24 months after entry into force. These codes cover the commitments of adhering parties in the areas of environmental and social sustainability, training and literacy, and the adoption of ethical principles in the production of technology.

From August 2025, the rules will apply to generative AI systems, such as chatbots or image generators (e.g. ChatGPT or Bard), which will have to comply with additional transparency requirements:

  • reveal that the content was generated by AI
  • design the model to prevent it from generating illegal content
  • publish summaries of copyrighted data used for training

From August 2026, the regulation will apply in full to high-risk systems, in sectors such as healthcare, education or critical infrastructure. All high-risk AI systems will be assessed before being placed on the market and throughout their life cycle. AI systems that adversely affect safety or fundamental rights will be considered high-risk and fall into two categories:

1) AI systems used in products covered by EU product safety legislation: toys, aviation, cars, medical devices and lifts.

2) AI systems falling under eight specific areas, which will have to be registered in an EU database:

  • biometric identification and categorisation of natural persons
  • management and operation of critical infrastructure
  • education and vocational training
  • employment, management of workers and access to self-employment
  • access to and use of essential private and public services and benefits
  • law enforcement
  • management of migration, asylum and border control
  • assistance in legal interpretation and application of the law

From August 2027, the regulation will cover all sectors, including limited-risk systems: any AI system will have to meet minimum transparency requirements so that users can make informed decisions.

Those who fail to comply with the regulation risk fines of up to EUR 35 million or 7 per cent of global annual turnover, whichever is higher. For innovative start-ups and SMEs, of course, the amounts are scaled down.

Conclusions

The choice presented in the public debate, regulation versus innovation, has been badly framed. It is a polarising reinterpretation, typical of today's culture when applied to very complex and delicate realities, that has divided the players into two extremes, innovators and regulators: those in favour of this type of regulation and those who instead saw it as an obstacle to innovation, which in some contexts must be freed from bureaucracy in order to create value. It must be admitted that the latter have plenty of arguments: the AI Act is not the only regulation affecting the innovation sector, with a wide range of measures from the DMA to the DSA and the Data Act. In this article we give a brief overview.

Foundation models have certainly created a new ecosystem of innovation, an alternative to some applications of the past, producing economic and social benefits: think of automatic code generation or the design of new materials and drugs. And, like any major technological innovation, sooner or later it will mark a clean break with the past.

But we have already seen the effect of technological innovations that entered our lives overnight and changed them completely: the smartphone and social networks. And what has been the result? Nothing close to the regulation envisaged by the AI Act, yet today we have countless studies and scientific publications on the negative effects of their use, damage that could have been limited, and data that could have been better collected, if only adequate rules had existed from the outset.

The other much-feared risk is economic and geopolitical: the AI players are few (OpenAI, Google, Meta, Nvidia, Amazon, Microsoft, DeepMind), American or British, with disproportionate capital (ChatGPT's training reportedly cost $12 million, with an initial investment of $800 million and a daily electricity cost of $50,000). Standards set here will influence policymakers around the world, introducing rules that could affect all consumers. One need only recall Elon Musk's visit to Italy this year and his meeting with Giorgia Meloni.

On the other hand, states obviously feel increasingly threatened by such disruptive technological innovation, which unbalances not only markets but also social classes, culture and, above all, consciences, if we dwell on the ethical questions raised by the future scenarios of AI for humanity. It is therefore right that states, feeling crushed by the digital power of companies, try to regulate the use of technologies capable of changing the foundations of geopolitics. When it comes to regulation, we are always talking about geopolitics.

For the moment, one could argue that Europe, with a much smaller market than the US and lacking such disruptive technologies of its own, is trying to compete in innovation by playing the card of legislation.

But the EU's move is first and foremost a demonstration of democracy, in defence of itself, its member states and their citizens. This is what emerges from the key points of the AI Act: their protection and their security, which every good body, organisation, institution and political figure should propose, promise and enforce, whichever pole they belong to: innovators or regulators.

