
Artificial Intelligence Act

Quick facts
  • Type: European Union regulation
  • Title: Regulation ... laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)
  • Made by: European Union
  • Implementation date: 21 May 2024
  • Commission proposal: 2021/206

The Artificial Intelligence Act (AI Act) is a European Union regulation concerning artificial intelligence (AI).

It establishes a common regulatory and legal framework for AI in the European Union (EU). Proposed by the European Commission on 21 April 2021, the Act was passed by the European Parliament on 13 March 2024 and approved by the Council of the European Union on 21 May 2024. It creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU.

It covers all types of AI in a broad range of sectors; exceptions include AI systems used solely for military, national security, research and non-professional purposes. As a piece of product regulation, it does not confer rights on individuals, but regulates the providers of AI systems and entities using AI in a professional context. The draft Act was revised following the rise in popularity of generative AI systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework. More restrictive regulations are planned for powerful generative AI systems with systemic impact.

The Act classifies AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, minimal – plus an additional category for general-purpose AI. Applications with unacceptable risks are banned. High-risk applications must comply with security, transparency and quality obligations and undergo conformity assessments. Limited-risk applications only have transparency obligations and those representing minimal risks are not regulated. For general-purpose AI, transparency requirements are imposed, with additional evaluations when there are high risks.

La Quadrature du Net (LQDN) stated that the adopted version of the AI Act would be ineffective, arguing that the role of self-regulation and exemptions in the act rendered it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI".

Provisions

Risk categories

There are different risk categories depending on the type of application, plus a category specifically dedicated to general-purpose AI (a code sketch of this classification scheme follows the list):

  • Unacceptable risk: AI applications that fall under this category are banned. This includes AI applications that manipulate human behaviour, those that use real-time remote biometric identification (including facial recognition) in public spaces, and those used for social scoring (ranking people based on their personal characteristics, socio-economic status or behaviour).
  • High-risk: AI applications that pose significant threats to health, safety, or the fundamental rights of persons, notably AI systems used in health, education, recruitment, critical infrastructure management, law enforcement or justice. They are subject to quality, transparency, human oversight and safety obligations, and in some cases a Fundamental Rights Impact Assessment is required. They must be evaluated before they are placed on the market, as well as during their life cycle. The list of high-risk applications can be expanded without amending the AI Act itself.
  • General-purpose AI (GPAI): this category was added in 2023, and includes in particular foundation models like ChatGPT. They are subject to transparency requirements. High-impact general-purpose AI systems which could pose systemic risks (notably those trained using a computation capability of more than 10^25 floating-point operations, FLOP) must also undergo a thorough evaluation process.
  • Limited risk: these systems are subject to transparency obligations aimed at informing users that they are interacting with an artificial intelligence system and allowing them to exercise their choices. This category includes, for example, AI applications that make it possible to generate or manipulate images, sound or videos (like deepfakes). In this category, free and open-source models whose parameters are publicly available are not regulated, with some exceptions.
  • Minimal risk: this includes, for example, AI systems used for video games or spam filters. Most AI applications are expected to be in this category. They are not regulated, and Member States are prevented from further regulating them via maximum harmonisation. Existing national laws related to the design or use of such systems are disapplied. However, a voluntary code of conduct is suggested.
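
The tiered scheme above can be pictured as a small set of labels plus a compute threshold. The following Python sketch is purely illustrative and not part of the Act; the RiskTier enum and the gpai_poses_systemic_risk helper are assumptions modelling the four levels and the 10^25 FLOP presumption described above.

  from enum import Enum

  class RiskTier(Enum):
      UNACCEPTABLE = "unacceptable"  # banned outright
      HIGH = "high"                  # quality, transparency and safety obligations
      LIMITED = "limited"            # transparency obligations only
      MINIMAL = "minimal"            # not regulated; voluntary code of conduct

  # Presumed systemic-risk threshold for general-purpose models:
  # training compute above 10**25 floating-point operations (FLOP).
  SYSTEMIC_RISK_FLOP = 10 ** 25

  def gpai_poses_systemic_risk(training_flop: float) -> bool:
      # Hypothetical helper: a GPAI model trained with more compute than
      # the threshold is presumed to pose systemic risk.
      return training_flop > SYSTEMIC_RISK_FLOP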

Exemptions

Articles 2.3 and 2.6 exempt AI systems used for military or national security purposes or pure scientific research and development from the AI Act.

Article 5.2 bans algorithmic video surveillance only if it is conducted in real time. Exceptions permit real-time algorithmic video surveillance for certain policing aims, including "a real and present or real and foreseeable threat of terrorist attack".

Recital 31 of the act allows social scoring systems similar to the Chinese Social Credit System, provided that they are "lawful evaluation practices ... carried out for a specific purpose". La Quadrature du Net interprets this exemption to allow for sector-specific social scoring systems, such as the suspicion score used by the French family payments agency Caisse d'allocations familiales.

Institutional governance

The AI Act, per the European Parliament Legislative Resolution of 13 March 2024, includes the establishment of various new institutions in Article 64 and the following articles. These institutions are tasked with implementing and enforcing the AI Act. The approach is characterized by a multidimensional combination of centralized and decentralized, as well as public and private enforcement aspects, due to the interaction of various institutions and actors at both EU and national levels.

The following new institutions will be established:

  1. AI Office: attached to the European Commission, this authority will coordinate the implementation of the AI Act in all Member States and oversee the compliance of GPAI providers.
  2. European Artificial Intelligence Board: composed of one representative from each Member State, the Board will advise and assist the Commission and Member States to facilitate the consistent and effective application of the AI Act. Its tasks include gathering and sharing technical and regulatory expertise, providing recommendations, written opinions, and other advice.
  3. Advisory Forum: established to advise and provide technical expertise to the Board and the Commission, this forum will represent a balanced selection of stakeholders, including industry, start-ups, small and medium-sized enterprises, civil society, and academia, ensuring that a broad spectrum of opinions is represented during the implementation and application process.
  4. Scientific Panel of Independent Experts: this panel will provide technical advice and input to the AI Office and national authorities, enforce rules for GPAI models (notably by launching qualified alerts of possible risks to the AI Office), and ensure that the rules and implementations of the AI Act correspond to the latest scientific findings.

While the establishment of new institutions is planned at the EU level, Member States will have to designate "national competent authorities". These authorities will be responsible for ensuring the application and implementation of the AI Act, and for conducting "market surveillance". They will verify that AI systems comply with the regulations, notably by checking the proper performance of conformity assessments and by appointing third parties to carry out external conformity assessments.

Enforcement

The Act regulates entry to the EU internal market using the New Legislative Framework. It contains the key provisions, called "essential requirements", that all AI systems seeking access to the EU internal market must comply with. Under the New Legislative Framework, these essential requirements are passed on to European Standardisation Organisations, which draw up technical standards that further specify them.

The Act requires that member states set up their own notifying bodies. Conformity assessments should take place to check whether AI systems indeed conform to the standards set out in the AI Act. This assessment is done either by self-assessment, meaning that the provider of the AI system checks for conformity themselves, or through third-party conformity assessment, meaning that the notifying body carries out the assessment. Notifying bodies retain the ability to carry out audits to check whether conformity assessments are performed properly.

There has been criticism that many high-risk AI systems do not require third-party conformity assessment. These critiques are based on the view that high-risk AI systems should be assessed by an independent third party to fully ensure their safety. Legal scholars have also raised concerns about whether deepfakes used to spread political misinformation or create non-consensual intimate imagery should be considered high-risk AI systems, potentially leading to stricter regulation.

Legislative procedure

In February 2020, the European Commission published the "White Paper on Artificial Intelligence – A European approach to excellence and trust". In October 2020, debates between EU leaders took place in the European Council. On 21 April 2021, the AI Act was officially proposed by the Commission. On 6 December 2022, the Council of the European Union adopted its general approach, allowing negotiations to begin with the European Parliament. On 9 December 2023, after three days of "marathon" talks, the EU Council and Parliament concluded an agreement.

The law was passed by the European Parliament with an overwhelming majority on 13 March 2024, and was approved by the EU Council on 21 May 2024. It comes into force 20 days after publication in the Official Journal, expected at the end of the legislative term in May. After coming into force, there is a delay before it becomes applicable, which depends on the type of application: 6 months for bans on "unacceptable risk" AI systems, 9 months for codes of practice, 12 months for general-purpose AI systems, 36 months for some obligations related to "high-risk" AI systems, and 24 months for everything else.
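
These staggered deadlines amount to simple date arithmetic from the entry-into-force date. The Python sketch below is an illustration only: the 1 August 2024 entry-into-force date and the add_months helper are assumptions, not taken from the text.

  from datetime import date

  def add_months(d: date, months: int) -> date:
      # Add whole calendar months (day clamped to 28 to stay valid).
      total = d.month - 1 + months
      return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

  # Assumed entry-into-force date, for illustration only.
  entry_into_force = date(2024, 8, 1)

  # Application delays described above, in months.
  delays = {
      "bans on unacceptable-risk systems": 6,
      "codes of practice": 9,
      "general-purpose AI rules": 12,
      "most other obligations": 24,
      "some high-risk obligations": 36,
  }

  for obligation, months in delays.items():
      print(f"{obligation}: applicable from {add_months(entry_into_force, months)}")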

See also


  • Algorithmic bias
  • Ethics of artificial intelligence
  • Regulation of algorithms
  • Regulation of artificial intelligence in the European Union