Timon Jaeggi

Introduction to the European AI Act – How can my organization already prepare?

Over the past year, AI-as-a-service tools such as ChatGPT have become an integral part of daily business operations in many industries, making AI effectively enterprise-ready. However, a recent study by Cisco Systems found that employees are entering large amounts of sensitive data into these novel tools, raising significant security and privacy concerns [1]. Furthermore, common AI risks related to data bias and ethical decision-making increase the need for safeguards and organizational AI governance.


The AI Act

To address these challenges, the European Union (EU) has introduced the world's first comprehensive AI regulation, the AI Act [2]. The regulation is currently expected to come into force in late 2024 or 2025 and will apply to AI systems that impact individuals in the EU. An AI system is a machine-based system that receives inputs from its environment to generate outputs such as predictions, content, or decisions [3]. AI systems thus actively interact with their environment with a certain degree of autonomy. To regulate this autonomy, the AI Act uses a risk-based approach that classifies AI systems into four main categories (1 - unacceptable, 2 - high, 3 - limited, and 4 - minimal or no risk), supplemented by a fifth category (5 - general-purpose AI). It follows the principle that "the higher the risk, the stricter the rules" [4].


Figure 1: Overview of the EU’s risk-based approach to AI regulation
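
To make this classification tangible in practice, the tiers can be encoded in an organization's internal AI inventory. The following Python sketch is purely illustrative; the enum names and one-line obligation summaries are my own shorthand, not the Act's legal wording:

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative encoding of the AI Act's risk tiers for an internal inventory."""
    UNACCEPTABLE = 1      # prohibited practices (e.g., social scoring)
    HIGH = 2              # mandatory compliance obligations
    LIMITED = 3           # transparency requirements
    MINIMAL = 4           # no mandatory obligations
    GENERAL_PURPOSE = 5   # GPAI-specific transparency / systemic-risk rules


# Shorthand mapping from tier to headline obligation (not legal text)
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited practice",
    RiskTier.HIGH: "risk management, data governance, conformity assessment",
    RiskTier.LIMITED: "transparency: disclose AI use, label synthetic content",
    RiskTier.MINIMAL: "no mandatory obligations; voluntary codes of conduct",
    RiskTier.GENERAL_PURPOSE: "transparency; stricter rules if systemic risk",
}

print(OBLIGATIONS[RiskTier.HIGH])
```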


1 - Unacceptable Risk

First, unacceptable AI systems run counter to EU values and threaten fundamental rights. Prohibited applications include, for example, emotion recognition in the workplace, social scoring, and AI that manipulates human behavior to circumvent free will [2]. Therefore, the AI Act would deem social scoring practices conducted in China unacceptable [5] and prohibit their implementation in the EU. However, narrow exceptions, such as biometric identification systems in public spaces for law enforcement, are allowed under strict conditions.


2 - High-Risk

Second, the high-risk category includes AI systems with significant potential for harm to health, safety, and fundamental rights. Typical examples are AI systems used in critical infrastructure, medical devices, access to education, and recruitment. Providers of these systems must therefore meet mandatory compliance obligations, such as establishing a risk management system that covers the entire AI system lifecycle (development, deployment, and monitoring). Additional data governance procedures (detailed documentation of data collection, data preparation, and potential data biases [6], as sketched below) and obligations (transparency, robustness, accuracy, cybersecurity, conformity assessments, emergency procedures, and fundamental rights impact assessments) help organizations manage high-risk AI systems [2].
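
As a rough illustration of what such data governance documentation could look like, consider the following Python sketch. The field names and example values are assumptions for illustration, not a format prescribed by the Act:

```python
from dataclasses import dataclass, field


@dataclass
class DatasetRecord:
    """Hypothetical documentation record for a high-risk system's training data."""
    name: str
    source: str                   # where and how the data was collected
    preparation_steps: list[str]  # cleaning, labeling, enrichment, ...
    known_biases: list[str] = field(default_factory=list)  # documented bias risks


# Example: documentation for a hypothetical recruitment-screening dataset
training_data = DatasetRecord(
    name="cv-screening-2024",
    source="historical application data, collected with candidate consent",
    preparation_steps=["anonymization", "deduplication", "label review"],
    known_biases=["underrepresentation of career changers"],
)
print(training_data)
```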


3 - Limited-Risk

Third, the limited-risk category includes chatbots, certain emotion recognition and biometric categorization systems, and AI systems that generate deepfakes. These systems are subject to specific transparency requirements, such as informing users that they are interacting with an AI system and labeling synthetic content. Consequently, AI-generated content should be labeled with a digital "watermark".
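
A minimal sketch of such labeling is shown below. Real deployments would more likely embed provenance metadata (e.g., following an industry standard such as C2PA) than prepend a plain-text notice, so treat this purely as an illustration of the transparency obligation:

```python
def label_synthetic_content(text: str, model_name: str) -> str:
    """Attach a visible disclosure to AI-generated text.

    Purely illustrative: the disclosure format is an assumption, not a
    format mandated by the AI Act.
    """
    disclosure = f"[AI-generated content | model: {model_name}]"
    return f"{disclosure}\n{text}"


# Usage example with a hypothetical model name
print(label_synthetic_content("Our Q3 outlook remains positive.", "example-gpt"))
```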


4 - Minimal/No-Risk

Minimal/no-risk AI systems do not fall into the above risk categories. Examples include AI-based recommendation systems or spam filters. Consequently, the EU allows the unrestricted use of these AI systems but encourages the implementation of voluntary codes of conduct. According to the European Commission, most AI systems will fall into this category [7].


5 - General Purpose AI

Due to the increasing importance of generative AI in 2023, the EU included a fifth category for general-purpose AI (GPAI) systems, such as ChatGPT. This step is necessary because GPAI systems are integrated along entire value chains. An example is the ChatGPT API, which can be used to power individual chatbots, document analysis, or process optimization tasks. GPAI relies on foundation models that are "trained on broad data at scale, are designed for generality of output, and can be adapted to a wide range of distinctive tasks" [2]. To deal with GPAI, the EU adopts a two-tiered approach. All GPAI systems must comply with transparency requirements, such as technical documentation, compliance with EU copyright law, and publishing detailed summaries of the content used for training. GPAI models with systemic risk (e.g., models trained with very high computing power) must additionally comply with strict obligations (model evaluations, systemic risk mitigation, adversarial testing, incident reporting to the Commission, cybersecurity assurance, and energy efficiency reporting).
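
To illustrate how a GPAI model enters a downstream value chain, here is a minimal sketch of a support chatbot built on the OpenAI API (v1-style Python SDK). The model name, prompts, and use case are placeholder assumptions:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# A downstream use of a GPAI model: a simple customer-support chatbot
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; substitute the model your organization uses
    messages=[
        {"role": "system", "content": "You are a helpful customer support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
)
print(response.choices[0].message.content)
```

Under the AI Act's logic, the GPAI provider carries the model-level obligations, while the organization embedding the model in such a downstream application must still assess the risk tier of its own use case.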


How can my organization already prepare?

A closer look at the AI Act reveals that AI governance will not be merely a technical task. It will require a holistic approach that also incorporates social and human perspectives [8]. Therefore, this article presents five actions that organizations can take today to prepare for the age of AI governance and the upcoming AI Act.


Figure 2: AI Act preparation


Action 1 - Establish AI as a strategic topic

Elevate AI to a strategic priority by securing executive buy-in. Executive support is critical to aligning organizational goals with regulatory requirements and the overall business strategy. By positioning AI as a key priority, executives drive the proactive compliance efforts that are essential to the long-term success of AI governance under the AI Act. Additionally, understand your organization's strategic role along AI value chains, for example as a provider or a deployer of AI systems.


Action 2 - Adapt organizational structures

Data privacy and security are not new challenges for organizations. Therefore, it is critical to establish seamless collaboration between existing functions (e.g., data protection/GDPR, compliance) and the new organizational entities required by the AI Act. Organizations will thus need to strategically realign roles and responsibilities to respond effectively to the requirements of the AI Act. This includes creating specialized roles that focus on high-risk and limited-risk AI systems.


Action 3 - Establish an AI governance framework

These specialized roles should focus on establishing an AI governance and risk management framework within the organization. To this end, organizations should implement a dedicated risk management system and data processing procedures. This includes formulating internal policies and procedures that align with the requirements of the AI Act across the entire AI system lifecycle (development, deployment, and monitoring). The risk classification of AI systems, together with their technical and non-technical documentation, serves as the basis for all further AI governance; see the register sketch below.
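
A simple starting point is an internal register that tracks each AI system's risk classification, lifecycle stage, owner, and documentation. The following Python sketch is a hypothetical structure; all field names, example values, and the intranet URL are assumptions:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class AISystemEntry:
    """Hypothetical entry in an internal AI system register."""
    system_name: str
    risk_tier: str           # e.g., "high", "limited", "minimal"
    lifecycle_stage: str     # "development", "deployment", or "monitoring"
    owner: str               # accountable role or team
    documentation_link: str  # technical and non-technical documentation
    last_review: date


register = [
    AISystemEntry(
        system_name="recruitment-screening",
        risk_tier="high",
        lifecycle_stage="deployment",
        owner="HR Analytics / AI Compliance",
        documentation_link="https://intranet.example.com/ai/recruitment-screening",
        last_review=date(2024, 1, 15),
    ),
]

# Example governance query: which high-risk systems are currently deployed?
for entry in register:
    if entry.risk_tier == "high" and entry.lifecycle_stage == "deployment":
        print(entry.system_name, entry.owner)
```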


Action 4 - Discover the human role

Organizations should actively explore the relationship between humans and AI systems. Analyzing the interplay between humans and machines clarifies human strengths and responsibilities. For example, machines are very good at analyzing large volumes of data, while humans are good at interpreting the results in context. Always keep a human in the loop to ensure accountability across the entire AI system lifecycle.
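
One common pattern for keeping a human in the loop is to route low-confidence AI outputs to a human reviewer. The sketch below illustrates the idea; the confidence threshold and the review mechanism are assumptions that organizational policy would need to define:

```python
def decide_with_human_oversight(prediction: str, confidence: float,
                                threshold: float = 0.9) -> str:
    """Route low-confidence AI outputs to a human reviewer.

    A minimal human-in-the-loop sketch: the threshold value and the
    escalation mechanism are placeholder assumptions.
    """
    if confidence >= threshold:
        return prediction                    # machine decides, human audits later
    return request_human_review(prediction)  # human makes the final call


def request_human_review(prediction: str) -> str:
    # Placeholder: in practice, this would create a task in a review queue
    print(f"Escalating for human review: {prediction!r}")
    return "pending-human-review"


# Usage example: a low-confidence output is escalated instead of auto-applied
print(decide_with_human_oversight("approve loan application", confidence=0.72))
```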


Action 5 - Train the workforce

To effectively manage AI systems and ensure compliance, organizations should invest in employee training programs. This includes educating employees on the fundamentals and concepts of AI, on existing data privacy regulations (e.g., the GDPR), and on the new knowledge and obligations specific to the AI Act (e.g., the risk management system and data processing procedures required for high-risk systems). Organizations should emphasize the importance of the AI governance framework and show employees how to apply it in their daily work.


 

Stay informed and take action! If you're eager to learn more about the AI Act and want to actively engage with its implications, reach out to AI Spaces. Contact us for additional information on the AI Act and guidance on implementing actions 1 to 5. Let's work together to navigate the evolving landscape of AI regulation and ensure a secure and transparent future. Visit our website for more details.


 

References

[2] Gibson Dunn, "The EU Agrees on a Path Forward for the AI Act," Gibson Dunn. Accessed: Jan. 05, 2024. [Online]. Available: https://www.gibsondunn.com/eu-agrees-on-a-path-forward-for-the-ai-act/

[3] S. J. Russell, K. Perset, and M. Grobelnik, "Updates to the OECD's definition of an AI system explained," OECD.AI. Accessed: Jan. 03, 2024. [Online]. Available: https://oecd.ai/en/wonk/ai-system-definition-update

[4] Dentons, "The New EU AI Act – the 10 key things you need to know now," Dentons. Accessed: Jan. 03, 2024. [Online]. Available: https://www.dentons.com/en/insights/articles/2023/december/14/the-new-eu-ai-act-the-10-key-things-you-need-to-know-now

[5] Bertelsmann Stiftung, "China's Social Credit System," 2023. [Online]. Available: https://www.bertelsmann-stiftung.de/fileadmin/files/aam/Asia-Book_A_03_China_Social_Credit_System.pdf

[6] EU AI Act, "Art. 10 Data and Data Governance - EU AI Act." Accessed: Jan. 09, 2024. [Online]. Available: https://www.euaiact.com/article/10

[7] PricewaterhouseCoopers, "The Artificial Intelligence Act demystified," PwC. Accessed: Jan. 09, 2024. [Online]. Available: https://www.pwc.ch/en/insights/regulation/ai-act-demystified.html

[8] L. Sartori and A. Theodorou, "A sociotechnical perspective for the future of AI: narratives, inequalities, and human control," Ethics Inf Technol, vol. 24, no. 1, p. 4, Jan. 2022, doi: 10.1007/s10676-022-09624-3.
