Background
It is nearly impossible to find a tech article or topic in which Artificial Intelligence (AI) does not steal the spotlight. And for good reason, considering that AI stands as one of the most transformative technologies of recent decades, exerting a profound impact on nearly every aspect of our lives.
Recognizing the need to regulate Artificial Intelligence (AI) while fostering innovation, the European Union (EU) embarked on a pioneering legislative journey on April 21, 2021, with the publication of the Commission’s regulatory proposal, dubbed the EU AI Act. Following several rounds of negotiations and the exchange of numerous compromise drafts between the EU legislative bodies, on May 11, 2023, the EU Parliament’s Internal Market and Civil Liberties committees published the latest amended draft, which was adopted by an overwhelming majority on June 14, 2023, in a plenary vote in the EU Parliament[1].
Through this legislative intervention, the EU seeks to position itself as a global leader in AI governance, setting high standards that could influence global practices and encourage responsible AI development worldwide.
Key principles
The EU AI Act is structured upon a set of fundamental principles that govern its entire application:
•Risk-based approach. Recognizing the paramount importance of safeguarding EU citizens and their rights, lawmakers have created a risk-based approach that stipulates different restrictions depending on the level of risk. This approach recognizes that certain AI applications pose greater potential risks to individuals and society, warranting more stringent regulations and oversight. Through the creation of this “risk scale”, some AI uses have been outright prohibited, while others, considered High-Risk AI systems (HRAIS), are subject to onerous compliance requirements.
•Technological neutrality. The AI Act is designed to be flexible and adaptable to various AI technologies and systems and to remain relevant and effective as AI technology evolves, without being overly prescriptive or hindering innovation.
•Horizontal application. This principle ensures that the rules and requirements set forth in the Act are applicable across different sectors and industries, promoting harmonization and consistent governance of AI technology throughout the European Union.
•Transparency and explainability. The Act emphasizes the importance of transparency in AI systems. It seeks to ensure that individuals have a clear understanding of how AI systems make decisions that affect them and are aware that they communicate or interact with an AI system. This principle encourages the documentation and disclosure of AI algorithms, enabling individuals to assess and challenge automated decisions made by AI systems.
•Accountability. The EU AI Act promotes accountability and oversight mechanisms for AI systems. It calls for clear responsibilities and obligations for AI developers and operators ensuring they can be held accountable for the actions and consequences of AI systems.
•Extraterritorial reach. The EU AI Act may apply to AI systems and activities that occur outside the EU but have an impact on individuals or entities within the EU. This enables the EU to assert regulatory authority over AI systems that interact with EU residents or have consequences within EU member states, even if the AI systems or organizations operating them are located outside the EU.
Ethics
In the Parliament’s proposal, explicit reference is made to six general principles applicable to all AI systems, which serve as the guiding pillars ensuring a harmonized and ethically grounded approach towards the deployment and regulation of artificial intelligence within the European Union[1], namely: i. human agency and oversight, ii. technical robustness and safety, iii. privacy and data governance, iv. transparency, v. diversity, non-discrimination and fairness, and vi. social and environmental well-being.
Depending on the nature of the AI system, operators comply with these general principles by meeting the requirements set out in the respective sections of the EU AI Act.
Contentious provisions
Unsurprisingly, throughout the negotiating process, significant scrutiny and critique were generated by various stakeholders. As a result, some “controversial” sections were amended multiple times before the draft was adopted by the Parliament’s Plenary. Below we shed light on the key areas of contention of the EU AI Act:
•Definition of AI. The definition of AI within the Act was a subject of intense debate. Some critics argued that the definition was too broad and could potentially encompass non-AI technologies, leading to confusion and unintended consequences. The current definition adopted in the latest draft has revised the wording to align with the stricter OECD definition.
More specifically, it now reads: “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.” Moreover, lawmakers have anticipated future changes to the definition by international organisations, such as the OECD, by stating in the preamble that the notions in the Act must remain closely aligned with such changes in the future.
•Prohibited AI practices. The Act proposed a list of prohibited AI practices, including subliminal manipulation, exploitation of vulnerable groups, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces and social scoring (as defined in the Parliament’s position). However, there was extensive debate surrounding the precise criteria and thresholds employed to ascertain which practices should be prohibited.
The Parliament made significant revisions to Article 5 of the EU AI Act, on prohibited AI practices. On one hand, new prohibitions were introduced, such as the use of AI systems for predicting the occurrence of criminal offenses based on profiling of natural persons, or for inferring the emotions of individuals, as well as the creation of facial recognition databases. On the other hand, the Parliament dismissed exemptions related to the deployment of “real-time” remote biometric identification systems in publicly accessible areas.
•High-Risk AI systems. The EU AI Act introduced regulatory measures for High-Risk AI systems (HRAIS), such as those used in biometric identification and categorisation of natural persons, in critical infrastructures, healthcare, law enforcement and administration of justice and democratic processes.
Controversy ensued regarding the scope of the definition of HRAIS, whether it encompassed all pertinent domains, and its potential ramifications for innovation and competitiveness. In the latest proposal, an AI system referred to in Annex III of the EU AI Act shall be considered a HRAIS if it is deemed to pose a significant risk of harm to the health, safety, or fundamental rights of natural persons (or, as the case may be, to the environment).
•Obligations across the supply chain. Another point of contention has been the impact of this new legal framework on various parties in the supply chain. In its latest version, lawmakers have decided to increase the obligations of all parties: providers, importers, distributors and deployers. Most notably, deployers, previously called users, now face the requirement to conduct a “fundamental rights impact assessment” prior to putting a high-risk AI system into use, and must provide specific information to affected persons subject to a decision by a HRAIS.
•Non-binding model contractual terms. To facilitate negotiations and promote clarity and fairness in contractual relationships between providers and third parties that supply tools and services that are used or integrated in HRAIS, the Parliament proposed the creation by the Commission of non-binding model contractual terms consistent with each party’s level of control.
•SMEs and startups. The Parliament has shown special consideration for SMEs and startups, putting forth a proposal that aims to render unfair any contractual terms, unilaterally imposed by an enterprise, regarding the supply of tools, services, components, or processes used or integrated in a HRAIS. The legislators provided guidelines as to the notions of unfairness and unilateral terms. It is worth noting that, according to the proposal, the relevant article shall apply not only to new contracts concluded after the date of entry into force of the EU AI Act but also to existing contracts, giving businesses three years after the EU AI Act enters into force to review and adjust the relevant terms.
•The rights of affected persons. Following heavy criticism, more focus has been placed on affected persons with the latest amended version granting them rights and processes for complaints and redress. According to the Parliament’s proposal, apart from the new requirement to be informed if they are adversely affected by a HRAIS, affected persons are granted the right to an explanation of decision-making from deployers of HRAIS. Furthermore, they can file complaints with national supervisory bodies and seek recourse to judicial remedy if complaints to said bodies go unresolved, while there is also the potential for representative actions[1].
•Generative foundation models. For the first time, the Parliament’s negotiation position explicitly references generative AI models, highlighting the necessity to monitor and assess the legislative and governance framework surrounding such models, as they raise important concerns regarding the generation of content that may violate Union law or copyright regulations, and could potentially be misused. The Parliament has adopted the stance that generative foundation models[2] must guarantee transparency by acknowledging that content is generated by an AI system rather than by humans. AI models will also need to disclose that text has been AI-generated and to publish a detailed summary of the training data used that is covered by copyright law.
•Heftier fines. In an effort to ensure compliance with the Act, lawmakers have increased the severity of financial penalties, which now reach EUR 40 million or 7% of global annual turnover for the most serious breaches (previously EUR 30 million or 6%, respectively).
Next steps
Now that the Act has been adopted by the EU Parliament, it has entered the last stage of the EU legislative process. The Council and the Parliament will thus negotiate, together with the Commission, the final text of the legislation in the so-called trilogue. Following the eventual adoption and publication of the final regulation, if the two-year transition period provided for in the current version of the Act remains (instead of the three years set out in the previous draft), the Act could come into force as early as 2026. Nevertheless, organizations should start contemplating the potential impact of the EU AI Act on their business now.
1 The European Parliament adopted its negotiating position on the AI Act, with 499 votes in favour, 28 against, and 93 abstentions.
2 Newly introduced Article 4a of the EU AI Act
3 It is worth noting that, although liability is not explicitly addressed in the AI Act, a newly proposed AI Liability Directive aims to provide clarity on the issue of civil liability for damages caused by AI systems in the absence of a contract.
4 Foundation model is defined as an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.
© Logaras Law (2023). All contents on this website, including logos, trademarks, texts, newsletters and articles (hereinafter the “Contents”), are protected under intellectual property law. Except where otherwise stated, use, downloading, reproduction and distribution in whatever form and by whatever medium (including Internet) for whole or part of the Contents available on this website and newsletter is not authorized.