The European Union AI Act: Empowering Innovation, Ensuring Ethics Through A Risk-Based Approach

What is AI? What is this new law?

Artificial Intelligence (AI) is an emerging technology designed to perform tasks that typically require human intelligence, including learning, problem-solving, reasoning, and perception. AI emulates human cognitive functions and, in some cases, can surpass human capabilities.

2023 was a breakout year for AI with high adoption rates across industry sectors and society itself. As businesses embrace AI to increase productivity, concerns around risk and governance are growing. The 2024 Shared Assessments Standardized Information Gathering Questionnaire (SIG) addresses AI Risk with an entire domain dedicated to evaluating the risk this emerging technology presents to vendors. Similarly, regulators worldwide are beginning to consider, and in some cases, implement regulations to protect against AI risk.

On Friday, December 8th, 2023, European Union policymakers reached a provisional agreement on a new law to regulate artificial intelligence (AI). This marks one of the first major attempts, worldwide, to limit the use of a rapidly evolving technology that can have societal, economic, and ethical implications across various industries.

The law, called the Artificial Intelligence Act, is intended to foster the development and adoption of safe and trustworthy AI throughout Europe. Its purpose is twofold: it sets a new global standard for countries seeking to harness the benefits of artificial intelligence while also protecting against potential misuse, risk, and harm to society.

Why does it matter? Who does it impact?

European policymakers and negotiators focused primarily on AI’s riskiest uses by companies and governments, including law enforcement, biometric recognition software, manipulated images, and the operation of crucial services such as energy and water.

The main elements of the EU AI Act include:

  • Classification of AI Systems and Practices: Regulation of artificial intelligence usage is based on a risk-based approach, meaning the higher and more prevalent the risk, the stronger the rules.
  • Transparency: Tech companies and AI system makers will be required to notify people when they are interacting with a chatbot or biometric/emotion recognition systems. In addition, this act requires that AI-generated content (e.g. deepfakes or other forms of media) be labelled as such. If these new rules are violated, companies could face fines of up to seven percent of global sales.
  • Human Oversight/Accountability: Human oversight will be required in creating and deploying AI systems. Developers and users of AI will be required to implement human oversight and accountability measures, which could include creating an AI Compliance Officer role, establishing escalation and incident processes, and more.

What are the implications/impacts?

Although this new law is a regulatory breakthrough, questions remain around its implementation, effectiveness, and relevance. Several aspects of the act will not take effect until around 2026, leaving ample time for AI systems to outpace the rules. The first version of the act was written in 2021, but policymakers soon found themselves rewriting the law as new technological advances emerged. The AI Act is nevertheless a significant first step toward regulating the development process and an attempt to establish a global standard for artificial intelligence and its usage. It addresses global challenges around AI and aims to boost innovation while respecting the rights of European citizens.

What are our SMEs saying?

Gary Roboff, who contributes to much of Shared Assessments’ regulatory work, shares a positive view: “There has been a lot of quick reaction to the AI Act over the weekend. While one observer has called the act ‘deregulation in disguise’, the fact is that this agreement is a landmark attempt to lasso an AI development process that’s proceeding at breakneck speed. Regulation in this space will be iterative, but putting a stake in the ground early was an important achievement.”

Charlie Miller notes the EU AI Act is a great start and points to the fine print – the fines for violations: “The EU AI rules represent a great start, and there is a continuing need for enhancements and global regulatory harmonization. Additional requirements should be considered, e.g., establishing an internal organization responsible for AI governance. Also, additional fines for repeated violations go beyond the 7% of gross sales!”

Elizabeth Dunsmoor, our in-house TPRM Principal and CTPRP/CTPRA Certification instructor, views the EU AI Act as setting the precedent for more AI regulations in the future: “The EU AI Act is a long-awaited attempt toward establishing a global standard for AI regulation and will most likely encourage other jurisdictions to create similar rules. Like Privacy and ESG regulations before it, we can look to the new Act for details around how regulators may handle the risks associated with Artificial Intelligence (exploitative practices, transparency, human oversight, and penalties) in the future.”

Chris Johnson, who will lead our brand-new Healthcare Committee in 2024, lends his thoughts on the relevance of the EU AI Act: “While the AI Act is considered a significant step towards a comprehensive regulatory framework for AI, it was originally envisioned in 2021 and many of its provisions may not come into force until 2026. The question is not whether the AI Act is sufficient to meet today’s challenges, but will it remain relevant given the speed at which AI continues to evolve?” 

Johnson also reflects on the scope of the EU AI Act: “Organizations operating outside the EU should become familiar with the AI Act, not only because it may serve as the blueprint for regulations in a country in which they operate, but also because the AI Act applies wherever the output produced by AI systems is used in the EU, regardless of where the providers and users of those systems are located.”

Conclusion

The Artificial Intelligence Act, proposed by the European Union, is a significant step towards fostering the development and uptake of safe and lawful AI practices that respect fundamental rights. Its risk-based approach lays a foundation for legal certainty around AI. This is a landmark decision addressing an evolving technology and its direct impact on societies and economies.

To participate in addressing AI Risk, consider joining Shared Assessments as a member. We host an AI and Emerging Technology Committee and a Regulatory Committee for our members. These groups are committed to addressing AI and Regulatory challenges facing Risk Management.

If you are concerned about AI Risk in your supply chain, vendors, and services, take a risk-based approach! Use the SIG to focus your assessments on evaluating AI Risk. Request a demo of our SIG’s AI scoping capabilities here.