Artificial Intelligence Act facts for kids
| European Union regulation | |
|---|---|
| Title | Regulation ... laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) |
| Made by | European Union |
| History | |
| Implementation date | 21 May 2024 |
| Preparative texts | |
| Commission proposal | 2021/206 |
The Artificial Intelligence Act (often called the AI Act) is a European Union (EU) law about artificial intelligence (AI).
It creates a common set of rules for how AI can be developed and used in the EU. The idea for this law came from the European Commission in April 2021. It was then passed by the European Parliament in March 2024 and approved by the Council of the European Union in May 2024.
This Act also sets up a special group called the European Artificial Intelligence Board. Its job is to help countries work together and make sure everyone follows the new rules. Just like other EU laws, the AI Act can affect companies outside the EU if they have users inside the EU.
The Act covers many different types of AI used in various areas, like health, education, and even games. However, it does not cover AI used only for military, national security, research, or personal hobbies. This law focuses on the companies that create AI systems and businesses that use AI.
When popular AI systems like ChatGPT started to appear, the rules were updated. These "general-purpose AI" systems can do many things, so special rules were added for them, especially for very powerful ones that could have a big impact.
What the AI Act Does
The AI Act sorts AI applications into different groups based on how much risk they might cause. There are four main risk levels: unacceptable, high, limited, and minimal. There's also a special group for general-purpose AI.
Risk Levels for AI
- Unacceptable Risk: These AI systems are completely banned because they are too dangerous.
  - This includes AI that tries to trick people into doing things they wouldn't normally do.
  - It also bans real-time facial recognition in public places, which identifies people instantly.
  - AI that "socially scores" people (ranking them based on personal traits or behavior) is also banned.
- High-Risk: These are AI systems that could cause serious harm to people's health, safety, or basic rights.
  - Examples include AI used in healthcare, education, hiring, managing important services (like power grids), law enforcement, or justice.
  - These systems must follow strict rules for safety, transparency (being open about how they work), and quality. They also need human oversight.
  - They must be checked carefully before they are used and throughout their lifetime to make sure they are safe and fair.
- General-Purpose AI (GPAI): This group was added in 2023 and includes powerful AI models like foundation models (the base for systems like ChatGPT).
  - These systems must be transparent, meaning users should know they are interacting with AI.
  - Very powerful GPAI systems that could cause big problems (like those trained with huge amounts of computing power) must go through extra checks.
- Limited Risk: These AI systems have some rules about transparency.
  - They must tell users that they are interacting with an AI system. This helps users make informed choices.
  - An example is AI that creates or changes images, sounds, or videos, like deepfakes.
  - However, many free and open-source AI models in this group are not regulated, with a few exceptions.
- Minimal Risk: Most AI applications fall into this group.
  - Examples include AI used in video games or for filtering spam emails.
  - These systems are not regulated by the AI Act. Countries are not allowed to create their own strict rules for them.
  - However, companies are encouraged to follow a voluntary code of conduct for these systems.
What the AI Act Doesn't Cover
The AI Act does not apply to all AI systems. For example:
- AI systems used only for military or national security purposes.
- AI used for pure scientific research and development.
There are also some specific exceptions, like certain uses of real-time facial recognition by law enforcement in very serious situations, such as preventing a terrorist attack.
Who Manages the AI Act?
To make sure the AI Act works well, several new groups have been created:
- AI Office: This group is part of the European Commission. It helps coordinate how the AI Act is put into action across all EU countries. It also checks that general-purpose AI providers follow the rules.
- European Artificial Intelligence Board: This board has one representative from each EU country. It advises the Commission and countries on how to apply the AI Act consistently and effectively. They share knowledge and give recommendations.
- Advisory Forum: This group gives advice and technical help to the Board and the Commission. It includes people from different areas like industry, small businesses, civil society groups, and universities, making sure many different views are heard.
- Scientific Panel of Independent Experts: This panel gives expert technical advice to the AI Office and national authorities. They also help make sure the AI Act's rules are up-to-date with the latest scientific discoveries.
Besides these EU-level groups, each EU country will also have its own "national authorities." These authorities are responsible for making sure the AI Act is followed in their country and for checking AI systems in the market. They verify that AI systems meet the rules and can even appoint independent groups to do checks.
How the Rules are Enforced
The AI Act sets out the main rules that all AI systems must follow to be sold or used in the EU. These are called "essential requirements." European groups that set standards then create more detailed technical rules based on these requirements.
Countries also need to set up "notified bodies." These are independent groups that check whether AI systems meet the standards set by the AI Act. This check is called a "conformity assessment."
- Sometimes, the company that made the AI system can check it itself (self-assessment).
- Other times, an independent notified body will do the check (third-party assessment).
Audits can also be carried out to make sure these conformity assessments are done correctly.
Some people have raised concerns that many high-risk AI systems don't always require an independent third-party check. They believe that independent checks are important to fully ensure the safety of these systems.
How the Law Was Made
The journey of the AI Act started in February 2020 when the European Commission published a "White Paper" about AI, discussing how Europe should approach it.
- In October 2020, EU leaders discussed the ideas.
- On April 21, 2021, the AI Act was officially proposed by the Commission.
- In December 2022, the Council of the European Union agreed on a general approach, allowing talks to begin with the European Parliament.
- After long discussions, the EU Council and Parliament reached an agreement on December 9, 2023.
The law was passed by the European Parliament with a large majority on March 13, 2024, and approved by the EU Council on May 21, 2024. It will become official 20 days after it is published in the Official Journal.
Even after it becomes official, there will be a delay before all parts of the law apply. This delay depends on the type of AI system:
- Bans on "unacceptable risk" AI systems will apply after 6 months.
- Rules for general-purpose AI systems will apply after 12 months.
- Most other rules will apply after 24 months.
- Some rules for "high-risk" AI systems will apply after 36 months.
See Also
In Spanish: Ley de Inteligencia Artificial para niños
- Algorithmic bias
- Ethics of artificial intelligence
- Regulation of algorithms
- Regulation of artificial intelligence in the European Union