4 January 2024

The European Parliament and the Council have reached a political agreement on the Artificial Intelligence (AI) Act proposed by the European Commission in April 2021.

Purpose of the AI Act

The AI Act introduces specific rules for general-purpose AI models, ensuring transparency along the value chain.

For the most powerful models, which could pose systemic risks, there will be additional binding obligations concerning risk management, monitoring of serious incidents, model evaluation and adversarial testing.

The new rules for Artificial Intelligence 

The new rules, which will apply directly and in the same way in all Member States, follow a risk-based approach and are designed to be future-proof.

  • Minimal risk: Most AI systems fall into the minimal risk category. Minimal-risk applications, such as AI-enabled recommender systems or spam filters, are exempt from obligations, because these systems present little or no risk to the rights or safety of European citizens. However, companies can voluntarily commit to additional codes of conduct for these systems. 
  • High risk: AI systems identified as high risk will need to meet more stringent requirements, including risk mitigation systems, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Examples of high-risk AI systems include certain critical infrastructures, for instance in the fields of water, gas and electricity; medical devices; systems for accessing public institutions; recruitment of personnel; law enforcement; border control; and the administration of justice and democratic processes. 
  • Unacceptable risk: All AI systems considered a clear threat to people’s fundamental rights will be banned. This includes systems that manipulate human behaviour, voice-assisted toys that encourage dangerous behaviour, “social scoring” by governments or companies, and certain applications of predictive policing. 
  • Specific transparency risk: When using AI systems such as chatbots, users must be made aware that they are interacting with a machine. AI-generated content will have to be labelled as such, and users will have to be informed when biometric categorisation or emotion recognition systems are being used. 

In addition, providers will have to design systems in such a way that synthetic content can be detected as artificially generated or manipulated. 

Fines for non-compliance with the AI Act

Companies that do not comply with these rules will face fines: 

  • 35 million euros or 7% of annual worldwide turnover (whichever is higher) for violations of the banned AI applications. 
  • 15 million euros or 3% of annual worldwide turnover for violations of other obligations. 
  • 7.5 million euros or 1.5% of annual worldwide turnover for supplying incorrect information. 
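As an illustration, the "whichever is higher" rule in the top tier can be sketched as a simple calculation. The helper function below is purely illustrative and not part of the Act's text; the figures are those quoted above for prohibited AI practices:

```python
def max_fine_eur(annual_worldwide_turnover_eur: float,
                 flat_cap_eur: float = 35_000_000,
                 pct_cap: float = 0.07) -> float:
    """Illustrative upper bound of the fine: the higher of a flat amount
    or a percentage of annual worldwide turnover."""
    return max(flat_cap_eur, pct_cap * annual_worldwide_turnover_eur)

# A company with 1 billion euros of turnover: 7% (70 million) exceeds the 35 million flat cap.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# A company with 100 million euros of turnover: 7% (7 million) is below the flat cap.
print(max_fine_eur(100_000_000))  # 35000000.0
```

The same structure applies to the other tiers, with the flat amount and percentage adjusted accordingly.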

Meanwhile, more proportionate caps are foreseen for administrative fines imposed on SMEs and start-ups for breaches of the EU's AI rules. 

Governance of the AI Act

National market surveillance authorities will oversee the implementation of the new rules at national level. In addition, a new European AI Office will be created within the European Commission to ensure coordination at European level, making it the first body worldwide to enforce binding rules on AI. 

Next steps for the AI Act 

The political agreement is now subject to formal approval by the European Parliament and the Council; the AI Act will enter into force 20 days after its publication in the Official Journal.

With FI Group as your trusted partner, you can navigate your way through the changing landscape of R&D funding and make the most of the incentives available. 

Teresa Marina 
