At its meeting on 2 February 2024, the Committee of Permanent Representatives (Coreper) of the EU member states unanimously voted in favour of the AI Act, the world’s first regulation on AI, approving the political agreement reached in December. Last December, a provisional political agreement on the bill had been reached by the co-legislators (Parliament, Council and EU Commission).

The matter proved laborious for various reasons related to the potential for harm: AI is a technology that is not yet entirely reliable and is therefore risky in different contexts (social, economic, political and cultural). The agreement had drawn criticism and skepticism in some EU countries precisely because certain regulatory points were considered too strict. In January, the presentation of the final version of the legislation sparked reservations and resistance in France, Italy and Germany, which wanted to reopen the measures on generative AI in favour of a lighter regulatory regime than the text proposed up to that point: they pushed for codes of conduct, without an initial sanctioning regime, for foundation models (AI systems trained on vast amounts of data that can provide results adapted to specific tasks, e.g. GPT-4) rather than the prescriptive obligations laid down in the Artificial Intelligence Regulation. The aim was to protect promising European startups (e.g. France’s Mistral AI and Germany’s Aleph Alpha) that could become competitors to American companies. The European Parliament, by contrast, was united in calling for strict rules for these models, deeming it unacceptable to exclude the most powerful types of AI from the regulation and leave the entire regulatory burden on smaller players.

Several rounds of negotiations followed throughout the month, and at the beginning of this week the situation turned around: Germany came out in support of the text, followed by Italy, the only one of the three countries that does not (yet) have a startup already identified as a leader in the AI sector, which decided not to oppose it, perhaps also in view of the upcoming G7 in Rome, where AI will be one of the main topics. Finally, on 2 February, France also decided to accept the text. All 27 member states have now unanimously endorsed the December political agreement, recognising the balance struck by negotiators between innovation and security. The full text will still have to be voted on by the Council and Parliament; final approval is expected on 24 April 2024.
Regulatory sandboxes to foster technological development
Among the measures to support SMEs and innovative startups, the current text provides for and promotes, for the transitional period preceding general application, the AI Pact, aimed at AI developers worldwide, who will voluntarily commit to implementing the obligations of the legislation before it becomes applicable. In addition, the so-called regulatory sandboxes set up by national authorities will be launched: test environments, exempted from the ordinary rules, in which innovative artificial intelligence can be developed and trained before being placed on the market (a bit like the model anticipated by Ian Hogarth, current chair of the UK government’s AI Foundation Model Taskforce, described in this article, ed.). Compliance matters because failure to comply with the rules set out in the text can lead to fines ranging from €7.5 million or 1.5% of turnover up to €35 million or 7% of global turnover, depending on the type of violation and the size of the company committing it.

Alessio Butti, Undersecretary to the Prime Minister’s Office with responsibility for Technological Innovation, said: “The process that led to the approval of the AI Act was complex and required close negotiation between Member States. Italy has always stressed the need for a structured approach that provides for clear rules and penalties for violations, not simple codes of conduct. Thanks to our diplomacy and capacity for dialogue with the other Member States, in particular with France and Germany, we managed to overcome our differences, maintaining a line consistent with the position expressed from the beginning. During the trilogue negotiations, which involved the Commission, the Council and Parliament, we worked intensively to build consensus around a position that safeguarded the interests of security, public order and the prerogatives of law enforcement agencies, as reaffirmed on 15 December 2023 on the eve of the trilogue. This position has found full support within the government, confirming Italy’s commitment to responsible and safe AI.” (Photo by Antoine Schibler on Unsplash)