In our newest blog post, we highlight how the EU's new regulation, the Artificial Intelligence Act (AI Act), aims to regulate AI development, deployment, and use across sectors, with four risk levels for AI systems and provisions for generative AI such as ChatGPT. The act reflects the EU's commitment to ethical AI and to maximizing its positive impact.
Artificial Intelligence Act
Artificial Intelligence (AI) has become a ubiquitous technology in today's world. It is rapidly improving our daily lives and transforming various industries. But alongside these efficiency gains, AI raises concerns about its potential negative impact on society, such as bias and privacy violations. These concerns are especially pressing when an AI system poses risks to people and their privacy. To address them, the European Union (EU) has proposed a new regulation: the Artificial Intelligence Act (AI Act).
The AI Act, proposed by the European Commission in April 2021, is a comprehensive regulatory framework that aims to promote the development and deployment of trustworthy AI while ensuring the protection of fundamental rights and values. The act is a first-of-its-kind legislation that regulates AI systems' development, deployment, and use across various sectors.
The AI Act will apply to providers, users, authorized representatives, importers, and distributors of AI systems. Which obligations apply to a given party depends on the level of risk. The act takes a risk-based approach and defines four risk levels for AI systems: unacceptable risk, high risk, limited risk, and minimal or no risk. AI systems posing an unacceptable risk, such as those used for social scoring or real-time biometric identification in public spaces, are banned under the act. High-risk AI systems, such as those used in critical infrastructure, healthcare, and law enforcement, are subject to strict requirements, including an adequate risk assessment, mandatory human oversight, transparency, and documentation. Limited-risk and minimal-risk AI systems face fewer regulatory requirements.
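To make the tiered structure above concrete, here is a minimal Python sketch of the four risk levels and the obligations attached to each. The tier names and obligation summaries follow the description above; the example use cases and the mapping itself are hypothetical illustrations for clarity, not a legal classification tool.

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # no additional obligations


# Hypothetical examples mirroring the text above; real classification
# of a system under the act requires legal analysis.
EXAMPLE_TIERS = {
    "social scoring system": RiskLevel.UNACCEPTABLE,
    "medical triage system": RiskLevel.HIGH,
    "customer service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}


def obligations(level: RiskLevel) -> str:
    """One-line summary of the obligations attached to a risk tier."""
    return {
        RiskLevel.UNACCEPTABLE: "prohibited",
        RiskLevel.HIGH: "risk assessment, human oversight, transparency, documentation",
        RiskLevel.LIMITED: "transparency (users must know they interact with AI)",
        RiskLevel.MINIMAL: "no additional requirements",
    }[level]


for system, level in EXAMPLE_TIERS.items():
    print(f"{system}: {level.value} -> {obligations(level)}")
```

The point of the sketch is simply that obligations attach to the tier, not to the individual system: once a system's risk level is determined, the applicable requirements follow from it.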
The AI Act also includes provisions on data quality, bias, transparency, accountability, and human oversight. AI systems must be transparent, and users must be informed when they are interacting with one. Systems must also be designed to minimize potential biases and ensure data quality. Additionally, AI developers must keep records of their systems' activities to ensure accountability.
The EU has been working on the AI Act for several years, but the unexpected arrival of generative AI has recently slowed its progress. Generative AI is a form of artificial intelligence that can create many types of content, including text, audio, code, and images. This sets it apart from traditional AI systems, which are built to perform one specific task. A well-known example of generative AI is ChatGPT.
Because generative AI emerged so quickly and unexpectedly, it was not taken into account when the rules of the AI Act were drafted over the past two years.
The regulation as proposed in 2021 was designed to ban specific types of AI applications that pose an unacceptable risk and to designate certain uses of AI as 'high-risk', binding developers to stricter requirements. The problem the AI Act faces with generative AI like ChatGPT, which is built on large language models, is that it has no single intended use. People use it for all kinds of purposes, so it cannot simply be classified as high-risk. Rules covering this form of AI had not yet been added to the AI Act, which is why this 'new' form of AI is prompting the EU institutions to revise their plans.
The parliamentary committee responsible for the AI Act has agreed on an amended version of the proposal that does address generative AI, like ChatGPT. The draft regulation is currently under consideration by the European Parliament. Recent reports suggest that lawmakers may place generative AI in the transparency category by default, while individual systems could still be classified as high-risk depending on their purpose of use. Generative AI will not be 'high-risk' by default, since general-purpose AI systems such as ChatGPT are rarely used for risky activities; they are mostly used for drafting documents and helping with writing code.
In conclusion, the AI Act is a landmark legislation that sets a standard for AI development and deployment in the EU and beyond. While it may face challenges and require further refinements, especially in the field of generative AI, the AI Act demonstrates the EU's commitment to promoting ethical AI and ensuring that AI benefits society as a whole.
The European Commission, the Council of the EU, and the European Parliament will now hash out the details of a final AI Act in three-way negotiations. The law is expected to be finalized early next year, followed by an implementation period of roughly two more years.
As AI technology continues to evolve rapidly, keeping up with the latest developments in this field is more important than ever. With the AI Act under review by the European Commission, the Council of the EU, and the Parliament, and media outlets closely monitoring its progress, it is a hot topic of discussion. We therefore encourage all our readers to keep a close eye on this topic and stay up to date with the latest news in the world of AI.
Until the next time!