EU AI Act: what does it mean for your business?

The EU Artificial Intelligence Act (EU AI Act), designed to regulate and govern evolving AI technologies, will play a crucial role in addressing the opportunities and risks for AI within the EU and beyond.

Over just a few years, AI-powered technologies such as ChatGPT, Bard, Siri, and Alexa have made artificial intelligence (AI) deeply ingrained in our society. This rapidly evolving technology brings significant opportunities across various sectors, including healthcare, education, business, industry, and entertainment.

However, growing concerns revolve around the potential risks that unregulated AI poses to fundamental rights and freedoms. These concerns have become especially pertinent in light of the upcoming elections in prominent Member States and for the European Parliament.

Now that the EU institutions reached a political deal on the EU AI Act on 8 December 2023, it is worth exploring the content and implications of this legislation.


Table of contents

1. What is the EU AI Act?

2. AI on the global scene

3. The EU AI Act and the EU GDPR

4. The objectives of the EU AI Act

5. The classification of AI systems

6. The EU AI Act’s governance structure

7. The positions of the Council and the Parliament

8. The latest updates on the EU AI Act

9. Implications for your business


What is the EU AI Act?

At the forefront of digital regulation, the EU’s paramount concern is to ensure that AI systems within its borders are employed safely and responsibly, warding off potential harm across various domains.

That’s precisely why, in April 2021, the European Commission introduced a pioneering piece of legislation – the AI Act – marking the first-ever regulatory framework for AI in Europe. This step is just one element of a broader AI package that additionally includes a Coordinated Plan on AI and a Communication on fostering a European approach to AI.

The EU AI Act is the first attempt to create a legislative framework for AI aimed at addressing the risks and problems linked to this technology by, among other things:

  • Enhancing governance and enforcement of existing laws on fundamental rights and safety requirements to ensure the ethical and responsible development, deployment, and use of AI.
  • Introducing clear requirements and obligations for high-risk AI systems and for their providers and users.
  • Facilitating a Single Market for AI to prevent market fragmentation and lessen administrative and financial burdens for businesses.

The end goal is to make the European Union a global, trustworthy environment in which AI can thrive.


AI on the global scene

This regulatory process on AI fits into broader global developments. On 9 October 2023, G7 officials, working under the Hiroshima Artificial Intelligence Process, drafted 11 principles on the creation and use of advanced AI. These principles aim to promote the safety and trustworthiness of AI, such as generative AI (GAI) like ChatGPT. GAI refers to AI systems and models that can autonomously create novel content, such as text, images, videos, speech, music and data.

Moreover, they created a voluntary international Code of Conduct for organisations on the most advanced uses of AI.

Moving to the United States (US), there have been disputes over intellectual property (copyright) rights and AI, which will possibly influence AI legislative discussions in the EU. The US issued an executive order on the safe use of AI that also targets these issues.

Also, the United Nations, the United Kingdom, China and the other G20 countries are all actively engaged in shaping policies and agreements around AI.


The EU AI Act and the EU GDPR: what is the difference?

The EU AI Act is a separate piece of legislation from the EU General Data Protection Regulation (GDPR). The GDPR, which became effective on 25 May 2018, standardised data privacy rules across the EU, in particular regarding personal data. In contrast, the EU AI Act establishes a new legislative framework specifically for AI technologies, focusing on their safety and ethical use. While the GDPR includes obligations for data controllers and processors, the EU AI Act is centred on providers and users of AI systems.

However, when AI systems process personal data, they must still adhere to the prevailing GDPR principles. This means that businesses employing AI in the EU must comply with GDPR requirements when handling personal data. Did we pique your curiosity? Then keep reading: in this blog post, Publyon EU will provide you with key insights on:

  • The objectives of the AI legislation
  • The relevant implications for businesses
  • The latest updates in the EU legislative process


What are the objectives of the EU AI Act?

The EU AI Act aims to provide a technology-neutral definition of AI systems and establish harmonised horizontal regulation applicable to AI systems. This regulation is structured around a “risk-based” approach, classifying AI systems into four distinct categories.

These categories are defined as “unacceptable risk”, “high-risk”, “limited risk”, and “minimal risk”. Each classified AI system is subject to specific requirements and obligations.

Risk categories: the classification of AI systems according to the EU AI Act

Unacceptable risk AI systems

AI systems that are labelled as an unacceptable risk, for example because they violate fundamental rights, will be prohibited under the EU AI Act. They comprise systems which use:

  • AI-based social scoring
  • ‘Real-time’ remote biometric identification systems in public spaces for law enforcement, with certain exemptions
  • Subliminal techniques to disrupt people’s behaviour to cause harm
  • Biometric categorisation based on sensitive characteristics, with exemptions
  • Individual predictive policing
  • Untargeted scraping of the internet or CCTV footage for facial images
  • Emotion recognition in educational institutions and the workplace, with exemptions

These rules are designed to address practices like the social credit system used in certain parts of the world.


High-risk AI systems

High-risk AI systems are those that pose a high risk to the health, safety, and fundamental rights of citizens. Classification depends not only on a system’s function, but also on the specific purpose and modalities for which the system is used. These systems will be subject to strict obligations, including mandatory requirements, human oversight arranged by the provider, and conformity assessments. High-risk AI falls into two categories:

  • Those that fall under the EU’s product safety legislation (e.g. toys, cars, and medical devices)
  • Those that are used in biometric identification, categorisation of people and emotion recognition (where permitted for medical or safety reasons), critical infrastructure, educational and vocational training, safety components of products, employment, essential private and public services and benefits, evaluation and classification of emergency calls, law enforcement, border control, and the administration of justice and democratic processes.


Limited risk AI systems

Some AI systems, such as chatbots, pose clear risks of manipulation and must therefore comply with specific transparency requirements that allow users to make informed decisions when interacting with AI. Moreover, the EU AI Act addresses the systemic risks that come with general-purpose AI models, such as large generative AI models (e.g. ChatGPT and Gemini). These models can be used for a variety of tasks and are becoming increasingly foundational in EU AI systems. As these powerful systems pose systemic risks, they need to be regulated by safeguards such as human oversight, transparency, and accountability.


Minimal risk AI systems

Minimal-risk AI, which comprises most AI systems currently used within the EU, may be developed and used freely under existing legislation, without additional obligations. These systems encompass applications like AI-enabled video games and spam filters.

The regulation encourages a Code of Conduct (CoC) regarding the ethical use of AI for all providers of non-high-risk AI. The objective is to encourage providers of non-high-risk AI systems to voluntarily adopt the standards that are mandatory for high-risk AI systems. Providers have the freedom to establish and enforce their own CoCs, which can, for example, comprise voluntary commitments related to environmental sustainability, accessibility for persons with disabilities, and involving stakeholders in the AI design and development process.
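
To illustrate the risk-based logic in practice, here is a minimal sketch of how a business might run a first, rough triage of its AI portfolio along the four tiers described above. It is purely illustrative Python, not legal advice: the tier names follow the Act, but the obligations are simplified summaries of this section and the example systems are hypothetical.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable risk"  # prohibited practices, e.g. social scoring
        HIGH = "high-risk"                  # e.g. employment or law enforcement use cases
        LIMITED = "limited risk"            # transparency duties, e.g. chatbots
        MINIMAL = "minimal risk"            # e.g. spam filters, AI in video games

    # Simplified summaries of the obligations discussed above (illustrative only).
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["prohibited: may not be placed on the EU market"],
        RiskTier.HIGH: [
            "conformity assessment before market entry",
            "mandatory requirements and human oversight",
            "fundamental rights impact assessment (for deployers)",
        ],
        RiskTier.LIMITED: ["transparency: disclose that users are interacting with AI"],
        RiskTier.MINIMAL: ["no new obligations; voluntary code of conduct encouraged"],
    }

    def triage(system_name: str, tier: RiskTier) -> None:
        """Print a first-pass compliance checklist for one AI system."""
        print(f"{system_name} -> {tier.value}")
        for duty in OBLIGATIONS[tier]:
            print(f"  - {duty}")

    # A hypothetical portfolio
    triage("CV-screening tool", RiskTier.HIGH)
    triage("customer service chatbot", RiskTier.LIMITED)
    triage("email spam filter", RiskTier.MINIMAL)

A real classification exercise would of course follow the Act’s annexes and the forthcoming Commission guidance rather than a hard-coded mapping like this one.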


The EU AI Act’s governance structure

The AI Act also establishes a complex governance structure, comprising the EU AI Office, the European Artificial Intelligence Board (AI Board), an Advisory Forum and a Scientific Panel of independent experts.

The AI Office has been set up within the Commission and started its formal operations on 21 February 2024. The Office will be funded via the Digital Europe Programme. Its tasks include:

  • Overseeing the implementation of the Act, in particular monitoring and enforcing the rules regarding general-purpose AI (GPAI) models, and developing codes of practice, guidelines and evaluation tools;
  • Facilitating the development of trustworthy AI in the EU;
  • Promoting international cooperation in AI, for example through international AI agreements;
  • Supporting and coordinating secondary legislation and supporting tools to the AI Act;
  • Acting as a Secretariat to the AI Board and providing administrative support;
  • Creating fora for cooperation and consultation of providers of (GP)AI models and systems, as well as for the open-source community, regarding best practices, codes of conduct and codes of practice;
  • Overseeing the AI Pact, allowing businesses to engage with the Commission and other stakeholders by sharing best practices and joining activities to foster early implementation of the Act.

Additionally, the AI Board will be created to enhance oversight at the Union level. The AI Board will consist of representatives from the Member States. Its main responsibilities will include ensuring the EU AI Act’s effective and harmonised implementation through advisory tasks (issuing opinions, recommendations, and advice) and guiding on matters related to the implementation and enforcement of the Act. In addition, the AI Board will provide guidance and expertise to the Commission, Member States and national competent authorities on questions related to AI.

The Advisory Forum, comprising stakeholders such as SMEs and startups, civil society, academia, industry and European standardisation bodies, will be established to advise and give technical expertise to the AI Board and the Commission. With this, stakeholders can provide input into the implementation of the AI Act.

The Scientific Panel, consisting of independent experts, will be in charge of supporting the implementation and enforcement of the Act, in particular the monitoring activities of the AI Office concerning GPAI. Member States will also be able to request support from the panel in their enforcement activities.

Lastly, on 24 January 2024, the European Commission launched an “AI Innovation Package” with measures to support European startups and SMEs in developing AI that respects EU values and rules. Initiatives include giving AI startups access to supercomputers to train their models, improving access to data, and increasing investment in generative AI by around €4 billion.


The positions of the Council and the Parliament

Council of the EU

The Council adopted its position on the EU AI Act in December 2022. It:

  • Narrowed down the definition of AI systems to “systems developed through machine learning approaches and logic- and knowledge-based approaches”;
  • Extended the prohibition on using AI for social scoring to private actors;
  • Imposed requirements on general-purpose AI systems;
  • Simplified the compliance framework;
  • Strengthened the role of the European AI Board.


European Parliament

On 14 June 2023, the Parliament adopted its position on the EU AI Act. Most notably, the Members of the European Parliament extended the list of prohibited AI systems, added requirements for high-risk AI systems and amended the definition of AI systems to align with that of the OECD.


Prohibited AI systems

  • Fully banned the use of biometric identification systems in the EU, for both real-time and ex-post use, with an exception for ex-post use in cases of severe crime, subject to pre-judicial authorisation;
  • Included biometric categorisation systems using sensitive characteristics;
  • Also included predictive policing systems, emotion recognition systems, and AI systems using indiscriminate scraping of biometric data from CCTV footage to create facial recognition databases.


High-risk AI systems

  • Added the requirement that a system must pose a significant risk of harm to citizens’ health, safety, fundamental rights, or the environment to be considered high-risk;
  • Included on the list systems used to influence voters and the outcomes of elections.

Additionally, the European Parliament wants to facilitate innovation and support SMEs, which might struggle with the barrage of new rules. Therefore, the Parliament:

  • Added exceptions for research activities and for AI components provided under open-source licences, and promoted the use of regulatory sandboxes;
  • Proposed setting up an AI complaints system for citizens and establishing an AI Office for the oversight and enforcement of foundation models and, more broadly, general-purpose AI (GPAI);
  • Strengthened national authorities’ competencies.

Lastly, the Parliament also introduced the concept of foundation models in the text and set out layered rules on transparency requirements, safeguards, and copyright issues.

Foundation models are large machine-learning models that are pre-trained on vast data sets to learn language patterns, facts and reasoning abilities, and that generate output based on specific inputs. GPAI refers to AI systems which can be adapted to a wide range of applications. GPAI, GAI and foundation models are often used interchangeably, as the terms overlap and the terminology is still evolving.


The latest updates on the EU AI Act

Three-way negotiations

Once both the Council and the Parliament had adopted their positions, the interinstitutional negotiations (trilogues) started. The earlier trilogues mostly discussed innovation, sandboxes, real-world testing, fundamental rights impact assessments, and the classification of high-risk AI systems.

Later, the legislators moved towards clarifying the requirements and obligations for providers of high-risk AI. A new ‘filter approach’ was introduced for the classification of high-risk AI, which would allow AI developers to avoid high-risk classification when their systems perform “purely accessory” tasks, and would let them decide whether their AI systems fall under the classification.

The filter approach also gained a criterion on decision-making: AI systems should not be intended to substitute for, or impact, a previous human evaluation without “proper human review”. However, AI systems that use profiling would always be considered high-risk.


A tiered approach

Additionally, the institutions then started shaping a ‘tiered approach’ for GPAI, foundation models and “high-impact foundation models”. This approach would introduce progressively tighter regulation: the most powerful models, such as the one behind ChatGPT, have a higher potential for systemic risks and would therefore be subject to even stronger market obligations.

An AI office would provide centralised oversight. However, the discussions regarding foundation models and GAI remained a politically charged topic, with a temporary stalemate as countries such as France, Italy and Germany pushed back, fearing that overregulation could stifle companies and EU competitiveness.

The Council and Parliament also remained at odds regarding the prohibitions of AI systems and exemptions for law enforcement and national security. In this context, the European Parliament circulated a compromise text scrapping the complete ban on real-time remote biometric identification (RBI) in exchange for concessions on other parts of the file such as a longer list of banned AI applications.


Political deal reached: the 3-day negotiations

The last trilogue took place on 6 December 2023. After 36 hours of negotiations, the EU policymakers finally came to a political agreement on 8 December 2023.

The most contentious topic of the final negotiations was the list of banned AI applications. The political agreement extends the list of prohibited applications to include, among others, those that employ social scoring, biometric categorisation based on sensitive characteristics, predictive policing, and emotion recognition in the workplace and education.

The Parliament wanted a full ban on facial recognition, which was opposed by the Council. A compromise was thus found by agreeing on safeguards: law enforcement can only use facial recognition under very strict oversight. Real-time remote biometric identification can only be used for the targeted search of victims of abduction, trafficking or sexual exploitation and of missing persons; the prevention of a threat to the life or physical safety of people or of a terrorist attack; or the search for persons suspected of 16 other pre-determined crimes.

Next to that, a whole set of requirements was agreed for high-risk AI, specifically in areas such as education, critical infrastructure and law enforcement, although some exemptions for law enforcement and national security were included. Deployers of high-risk AI must also conduct a fundamental rights impact assessment before putting an AI system into use.

Additionally, the AI Act introduces rules for GPAI and foundation models to ensure transparency. Stricter measures are in place for “high-impact” foundation models with advanced capabilities that can pose future systemic risks. Codes of practice were also introduced, which providers of GPAI can use to demonstrate compliance with the obligations of the EU AI Act, similar to the codes of conduct under the GDPR.

Regarding definitions, the EU AI Act aligns its definition of AI systems with that of the OECD. Moreover, the legislation should not interfere with Member States’ national security competencies. The rules also do not apply to AI systems used exclusively for military or defence purposes, or for research and innovation.

Moreover, a new governance framework was created, including the European AI Office, which will operate within the European Commission to supervise GPAI, and a scientific panel of independent experts that will advise the AI Office on GPAI models. An advisory forum for stakeholders will deliver technical expertise to the earlier-mentioned AI Board. Companies will have to pay fines for non-compliance, although SMEs and startups could be spared the highest fines and face more proportionate administrative penalties.

Lastly, the EU AI Act provides space for regulatory sandboxes and real-world testing, so that innovative technologies can be tested for compliance with the EU AI Act while taking the compliance burden on SMEs into account. Real-world testing can be conducted for a maximum of six months, with a possible extension of six more months if necessary.

The final version of the AI Act, proposed by the Belgian Presidency, was unanimously adopted by the Council of the EU on 2 February 2024, despite initial reservations expressed by France, Italy and Germany; only France attached “strict conditions” to its support of the text. In the European Parliament, the committees on the Internal Market (IMCO) and on Civil Liberties (LIBE) approved the text as well, on 13 February 2024.


What happens next?

The final vote will take place in the EP during the plenary session of 13 March. A corrigendum will be voted on during the second plenary session of 22-25 April, after which the file will receive a final endorsement at the ministerial level before it can be published in the Official Journal and enter into force twenty days later, expected at the end of May.

The EU AI Act will become applicable 24 months after it enters into force, with some exceptions: the AI bans take effect after just six months, and the rules regarding GPAI governance take effect after twelve months. Obligations for high-risk AI systems falling under Annex II (the list of Union harmonisation legislation) will start applying after 36 months.
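
To make this staggered timeline concrete, the following sketch computes the application dates from an assumed entry-into-force date. The 1 June 2024 start is a placeholder, since the actual date depends on publication in the Official Journal.

    from datetime import date

    def add_months(d: date, months: int) -> date:
        """Shift a date forward by whole calendar months (day clamped for simplicity)."""
        month_index = d.month - 1 + months
        return date(d.year + month_index // 12, month_index % 12 + 1, min(d.day, 28))

    # Assumed entry-into-force date (placeholder; see above).
    ENTRY_INTO_FORCE = date(2024, 6, 1)

    # The staggered application periods described above, in months.
    MILESTONES = {
        "Bans on prohibited AI practices": 6,
        "GPAI governance rules": 12,
        "General applicability": 24,
        "High-risk obligations under Annex II": 36,
    }

    for milestone, months in MILESTONES.items():
        print(f"{milestone}: from {add_months(ENTRY_INTO_FORCE, months)}")

With a 1 June 2024 start, for example, the bans would apply from 1 December 2024 and the Act as a whole from 1 June 2026.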

Given that the AI Act has approximately 20 acts of secondary legislation, EU countries and industry stakeholders can have significant influence over its implementation down the road. Moreover, the three other supervision and enforcement bodies (the AI Board, the Advisory Forum, and Scientific Panel) will be set up after the Act’s entry into force.


The EU AI Act: what are the implications for your business?

The EU AI Act will apply to all parties involved in the development, deployment, and use of AI. It is therefore applicable across various sectors, and it will also apply to parties outside the EU if their products are intended to be used in the EU.


Risks and challenges

Member States will enforce the regulation and ensure businesses are compliant. Penalties for non-compliance concerning prohibited AI systems can reach €35,000,000 or 7% of a company’s total annual turnover, whichever is higher. Non-compliance with other requirements, including those for GPAI, carries fines of up to €15,000,000 or 3% of the total annual turnover.

Lastly, supplying incorrect, incomplete or misleading information will lead to penalties of up to €7,500,000 or 1.5% of turnover. However, the regulation sets lower thresholds for small-scale providers, start-ups and small and medium-sized enterprises (SMEs) to protect their economic viability.
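
As a back-of-the-envelope illustration, the sketch below computes the maximum exposure for a hypothetical company. It assumes the higher of the fixed cap and the turnover percentage applies, as stated above for the prohibited-systems tier, and it ignores the lower SME thresholds.

    def max_penalty(annual_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
        """Upper bound of a fine: the higher of the fixed cap and pct of turnover."""
        return max(fixed_cap_eur, pct * annual_turnover_eur)

    turnover = 2_000_000_000  # hypothetical company with EUR 2 billion annual turnover

    # The three penalty tiers described above
    print(max_penalty(turnover, 35_000_000, 0.07))   # prohibited AI systems -> 140,000,000.0
    print(max_penalty(turnover, 15_000_000, 0.03))   # other requirements, incl. GPAI -> 60,000,000.0
    print(max_penalty(turnover, 7_500_000, 0.015))   # misleading information -> 30,000,000.0

For a company of this size the turnover-based percentage is the binding cap in every tier; for smaller companies, the fixed amounts would dominate instead.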

Businesses or public administrations that develop or use AI systems falling under the high-risk category will have to comply with specific requirements and obligations. The Commission’s impact assessment indicates that compliance with the new rules would cost companies approximately €6,000 to €7,000 for an average high-risk AI system worth €170,000 by 2025. Moreover, additional costs may arise for human oversight and verification.

However, businesses that develop or use AI systems that are not high-risk would merely have minimal information obligations.


Opportunities for businesses

The EU AI Act will establish a framework that ensures trust in AI for customers and citizens. Additionally, it aims to promote investment and innovation in AI, for example by providing regulatory sandboxes. These sandboxes establish a controlled environment to test innovative technologies for a limited time under oversight. Special measures are also in place to reduce the regulatory burden and to support SMEs and start-ups.

Moreover, initiatives such as Networks of AI Excellence Centres, the Public-Private Partnership on Artificial Intelligence, Data and Robotics, Digital Innovation Hubs and Testing and Experimentation Facilities can support companies in the EU in developing AI. Next to that, providers of (GP)AI models and systems and the open-source community can participate in fora created by the AI Office to share best practices and provide input for codes of conduct and practice. Lastly, the AI Pact offers businesses the opportunity to prepare for (early) compliance with the upcoming Act.


Concrete actions to prepare for the AI Act

Businesses are advised to:

  • Consider whether they use AI systems or models and assess the risks associated with them;
  • Consider the different obligations and requirements set for AI systems of different risk levels;
  • Consider drafting a Code of Conduct and Codes of Practice for GPAI;
  • Consider how to improve AI proficiency by brushing up on AI-related knowledge and competences;
  • Actively or reactively engage with the European Commission to adhere to the AI Pact;
  • Actively participate in the relevant fora on AI, depending on your business and expertise;
  • Actively engage with your HQ’s national government and permanent representation in Brussels to monitor and engage through the AI Board.


Learn more about our EU AI-related services

Publyon offers tailor-made solutions to navigate the evolving policy environment at the EU level and anticipate the impact of the EU AI-related legislation on your organisation.

Would you like to know more about how your organisation can make the most out of the AI legislation in the EU? Make sure you do not miss the latest developments by subscribing to our EU Digital Policy Updates.

If you’re intrigued by the EU’s latest efforts to regulate Artificial Intelligence, you can use the contact form below to reach out to us.

Do you want to know more?

Do you need help getting a better understanding of what the EU AI Act will mean for your organisation?

Fill out the form below and our team of experts will get in touch with you.
