Artificial Intelligence Act: latest update

Mar 21, 2024 | Data Privacy


The approved AI Act ensures safety and compliance with fundamental rights, while boosting innovation

On Wednesday 13 March 2024, the European Parliament approved the Artificial Intelligence Act (AI Act), the first comprehensive AI regulation in the world. (See our previous blog post on the AI Act here). The AI Act will come into force 20 days after publication in the Official Journal and will become applicable in several phases: the rules concerning prohibited AI systems will be applicable after 6 months, obligations on providers of general purpose AI models after 12 months, and the rules concerning high-risk AI systems after 24 to 36 months. In this blog post, we provide an overview of the AI Act and its implications. 

Definition of AI system

Under the AI Act, an AI system is defined as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”. This definition follows that of the Organisation for Economic Co-operation and Development (OECD) and is intended to evolve along with it, so that the AI Act keeps pace with the evolution of technology. 

Scope of Application 

The AI Act applies to public and private actors, inside and outside the European Union (EU), when the AI system is placed on the EU market or its use affects people located in the EU. This means that a company without an establishment within the EU could fall under the scope of the AI Act when the output of its AI systems affects people located in the EU. 

High-risk AI systems

The AI Act uses a risk-based approach when regulating AI systems. It is important to understand what AI systems or models your organisation is using, as the obligations that apply depend on the level of risk an AI system poses. These risk levels are: unacceptable risk, high risk, limited risk and minimal risk. AI systems that pose an unacceptable risk are prohibited. Most of the obligations under the AI Act apply to high-risk AI systems. As such, this blog post will mainly cover the obligations for high-risk AI systems.

An AI system is considered high-risk when it: 

  1. is intended to be used as a (safety component of a) product, covered by EU legislation specified under Annex II of the AI Act, such as personal protective equipment, lifts and civil aviation security; or 
  2. is listed under Annex III of the AI Act, such as remote biometric identification systems, AI systems used in education and employment, credit scoring and law enforcement.

Other AI systems can also be added to the list of high-risk AI systems to reflect technological developments. Many details will be further defined and specified by the European Commission in delegated acts. 

On the other hand, AI systems are not considered to be high-risk when they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons. 

Obligations for high-risk AI systems 

The obligations for high-risk AI systems under the AI Act are extensive. These obligations include: 

- having a risk management and quality management system in place; 
- implementing data governance measures; 
- keeping technical documentation;
- keeping records of the performance of AI systems;
- informing users about the nature and purpose of the AI systems;
- enabling human oversight and intervention; and 
- ensuring accuracy, robustness, and cybersecurity. 

Depending on your role within the value chain, there are different responsibilities and obligations. Not only the provider, but also the importer, distributor and deployer have obligations they need to comply with. Providers must perform a conformity assessment before placing AI systems on the market and register their systems in a public EU database. Importers must verify that the required documentation is in place, and distributors must check the CE (Conformité Européenne) conformity marking. Deployers of high-risk AI systems must conduct a fundamental rights impact assessment prior to deployment. 

Citizens will have a right to lodge complaints about AI systems with a market surveillance authority and receive explanations about decisions based on high-risk AI systems that impact their rights.

General Purpose AI

General-purpose AI (GPAI) models are AI models that can perform a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications (such as voice recognition and audio and video generation). GPAI models are regulated separately under the AI Act. Most importantly, GPAI providers must adhere to transparency requirements, such as keeping technical documentation, complying with EU copyright law and publishing detailed summaries of the content used for training their models.

Enforcement & Fines 

Market surveillance authorities of the Member States will be in charge of enforcing the new AI rules, investigating complaints and imposing fines. The fines for non-compliance can be very high: for violating the rules concerning prohibited AI systems, fines can reach up to 35 million euros or 7% of global annual turnover, whichever is higher. For violating the rules concerning high-risk AI systems, fines can reach up to 15 million euros or 3% of global annual turnover. 
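To illustrate how these caps scale with company size, here is a minimal sketch of the maximum-fine calculation. It assumes (as is common in EU enforcement regimes) that the higher of the fixed amount and the turnover-based amount applies; the function name and categories are illustrative, not part of the AI Act itself.

```python
def fine_cap(annual_turnover_eur: float, violation: str) -> float:
    """Illustrative upper bound on an AI Act fine.

    Assumes the higher of the fixed amount and the percentage of
    global annual turnover applies.
    """
    caps = {
        "prohibited_ai": (35_000_000, 0.07),  # prohibited AI systems
        "high_risk": (15_000_000, 0.03),      # high-risk AI obligations
    }
    fixed_amount, turnover_pct = caps[violation]
    return max(fixed_amount, turnover_pct * annual_turnover_eur)


# For a company with EUR 1 billion global annual turnover,
# the turnover-based amount exceeds the fixed amount:
print(fine_cap(1_000_000_000, "prohibited_ai"))  # 70000000.0 (7% of 1bn)
print(fine_cap(1_000_000_000, "high_risk"))      # 30000000.0 (3% of 1bn)
```

For smaller companies, the fixed amount of 35 or 15 million euros is the binding cap; for large companies, the turnover percentage dominates.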

PrivacyPerfect AI (Pre-)Assessment 

As mentioned previously, different responsibilities and obligations exist for providers, deployers, importers and distributors. It is therefore important to understand who is actually responsible for complying with the AI Act. However, assessing your role in the value chain is not as simple as you might think. In view of the extensive obligations under the AI Act, it is advisable to put AI governance in place, whether or not your AI systems are considered high-risk.

Good news! PrivacyPerfect is adding a new feature to the assessment manager module in the Privacy Management Solution: the AI (pre-)assessment. With this assessment you can determine whether your AI systems fall under the scope of the AI Act, what level of risk the AI system has, what role you play within the value chain and also what obligations you need to comply with. 

Stay tuned for this new PrivacyPerfect feature, and sign up for our newsletter to make sure you won’t miss new updates on the AI Act. 

Until next time!

Team PrivacyPerfect