The European Union will pass the world's first AI legislation that prohibits facial recognition in public places
The European Union (EU) is leading the race to regulate artificial intelligence (AI). Earlier today, the European Council and the European Parliament concluded three days of negotiations and reached a provisional agreement on what will become the world’s first comprehensive set of rules for AI.
Spain’s state secretary for digitalization and artificial intelligence, Carme Artigas, called the agreement a “historic achievement” in a press release. Artigas said the rules strike an “extremely delicate balance” between encouraging safe and secure AI innovation and adoption across the EU and protecting the “fundamental rights” of citizens.
The draft legislation, the Artificial Intelligence Act, was first proposed by the European Commission in April 2021. The Parliament and EU member states will vote to approve the draft next year, but the rules won’t come into force until 2025.
Risk-Based AI Regulatory Approach
The AI Act takes a risk-based approach: the higher the risk an AI system poses, the stricter the rules that apply to it. To this end, the regulation will classify AI systems in order to identify those that pose a “high risk”.
AI systems considered non-threatening and low-risk will be subject only to a “very light transparency obligation”. For example, such systems will be required to disclose that their content is AI-generated so that users can make informed decisions.
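The Act specifies the disclosure obligation, not how providers must implement it. As a purely illustrative sketch (the function name and label format below are hypothetical, not taken from the regulation), a provider of a generative system might tag its output like this:

```python
# Hypothetical illustration of the AI-generated-content disclosure obligation.
# The Act mandates disclosure; the label format and function name are invented.

def label_ai_output(text: str) -> str:
    """Prepend a disclosure notice so users know the content is AI-generated."""
    return "[AI-generated content] " + text

print(label_ai_output("Here is a summary of today's news..."))
# [AI-generated content] Here is a summary of today's news...
```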
For AI systems classified as high-risk, the legislation adds a number of obligations and requirements, including:
Human oversight: The bill calls for a human-centered approach, emphasizing clear and effective human oversight mechanisms for high-risk AI systems. This means engaging humans to actively monitor and supervise the operation of AI systems. Their roles include ensuring that systems are performing as intended, identifying and addressing potential hazards or unintended consequences, and ultimately taking responsibility for their decisions and actions.
Transparency and explainability: Demystifying the inner workings of high-risk AI systems is critical to building trust and ensuring accountability. Developers must provide clear and accessible information about how their systems make decisions. This includes details about the underlying algorithm, training data, and potential biases that could affect the system’s output.
Data governance: The Artificial Intelligence Act emphasizes responsible data practices and aims to prevent discrimination, bias, and privacy violations. Developers must ensure that the data used to train and operate high-risk AI systems is accurate, complete, and representative. The principle of data minimization is also central: collect only the information necessary for the system to function, reducing the risk of misuse or breach (a minimal sketch of this idea follows this list). In addition, individuals must have a clear right to access, correct, and delete data used in AI systems, giving them control over their information and ensuring it is used ethically.
Risk management: Proactively identifying and mitigating risks will be a key requirement for high-risk AI. Developers must implement a robust risk management framework that systematically assesses the system for potential harm, vulnerabilities, and unintended consequences.
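To make the data-minimization principle from the list above concrete, here is a minimal sketch. It assumes a hypothetical system that needs only coarse demographic fields; the field names and the ALLOWED_FIELDS set are invented for illustration and do not come from the regulation.

```python
# Hypothetical illustration of data minimization: keep only the fields a
# system actually needs. ALLOWED_FIELDS and the record layout are invented.

ALLOWED_FIELDS = {"age_band", "region"}  # the minimum this hypothetical system needs

def minimize(record: dict) -> dict:
    """Drop every field not strictly required for the system to function."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

raw = {"name": "Alice", "email": "alice@example.com", "age_band": "30-39", "region": "EU"}
print(minimize(raw))  # {'age_band': '30-39', 'region': 'EU'}
```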
Certain Uses of Artificial Intelligence Are Prohibited
The regulation will outright ban certain AI systems whose risks are deemed “unacceptable”. For example, the use of facial recognition AI in publicly accessible spaces will be prohibited, with narrow exceptions for law enforcement.
The regulation also prohibits AI systems that manipulate human behavior, social scoring systems, and systems that exploit vulnerable groups. In addition, the legislation will ban emotion recognition systems in places such as schools and workplaces, as well as the untargeted scraping of facial images from surveillance footage and the internet.
Penalties and Provisions to Encourage Innovation
The AI Act will also impose penalties on companies that violate it. For example, deploying a prohibited AI application will carry a fine of up to 7% of the company’s global revenue, while breaching the Act’s obligations and requirements will be fined up to 3% of global revenue.
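To put these percentages in perspective, here is a back-of-the-envelope sketch. The 7% and 3% rates come from the figures above; the revenue amount is invented for illustration.

```python
# Illustrative fine arithmetic using the 7% / 3% rates cited above.
# The €10 billion revenue figure is hypothetical.

def fine(global_revenue_eur: float, rate: float) -> float:
    """Fine expressed as a share of global revenue."""
    return global_revenue_eur * rate

revenue = 10_000_000_000  # hypothetical €10 billion in global annual revenue
print(f"Prohibited-use violation: €{fine(revenue, 0.07):,.0f}")  # €700,000,000
print(f"Obligations breach:       €{fine(revenue, 0.03):,.0f}")  # €300,000,000
```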
To promote innovation, the regulation will allow innovative AI systems to be tested under real-world conditions with appropriate safeguards.
Although the European Union is leading this race, the United States, the United Kingdom, and Japan are also moving to introduce AI legislation of their own. The EU’s AI Act could serve as a global benchmark for countries seeking to regulate AI.