
14 March 2024

This week officially marks a shift for the recognition of artificial intelligence across the EU, as the European Parliament formally adopts the first ever legal framework on Artificial Intelligence (“AI”), the Artificial Intelligence Act (“AI Act”). David McMunn, Head of Technology & Digital Law, explains the Act and its impacts below.

The AI Act is the first-ever comprehensive legal framework on AI worldwide. The Act aims to build trust in the rapidly advancing field of artificial intelligence by ensuring respect for fundamental rights, safety and ethical principles. The adoption of AI is evolving before our eyes; it is important that regulation continues to progress in parallel with this development.

What this means

The EU has now formally agreed and implemented a regulatory framework for the development and use of AI. Globally, the EU has generally been thought of as cautious regarding the adoption of AI. Although AI is seen as a critical technology likely to have a significant impact on all aspects of life in the EU Member States, it continues to generate fear of the unknown.

Nevertheless, EU Member States will see a notable influence on their economies, societies and cultures. In this context, the adopted view is that the regulations should be adaptable and principle-based, engender trust in AI, and come into force incrementally over an extended period. As such, there is no single "official date" on which the Act applies (as was the case with GDPR), but rather a series of dates which introduce specific types of regulations and controls over time.

The EU is aware of the rapid pace at which change is occurring within this sector and, as such, a code-of-practice model enshrining transparency, fundamental rights and practicality is considered best suited for the adoption of the AI Act. The Act is expected to be published in the Official Journal of the EU between May and July 2024. Twenty days thereafter, the law will enter into force, with its provisions becoming applicable in accordance with Article 85 on a staggered basis between late 2024 and summer 2027.

Who is affected?

The AI Act will impact both providers (or developers) and deployers (or users) of AI systems.

An AI system is "a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments."

Providers can be defined as any party “that develops an AI system or a general purpose AI model or that has an AI system or a general purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.”

A deployer is any party "using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity." In practice, any commercial entity using an AI system in its activities will be deemed a deployer and must be aware of its obligations under the AI Act.

Employers will also be considered deployers of AI and will in due course be subject to obligations to the extent they deploy AI in the workplace. These obligations will draw from and be subject to existing relevant laws, for example data protection law in respect of consent requirements and obligations surrounding the processing, storage and deletion of data, and copyright law in respect of any copyright-protected material used in the development of AI systems.

Further, there will be nationally based testing regimes for AI systems (regulatory "sandboxes"), in addition to codes of practice to support compliance with the regulations applicable to General Purpose AI systems. High-risk AI systems will be subject to further controls, including fundamental rights impact assessments. The deployment of AI in the workplace may bring about opportunities for employees to be involved in its implementation. Employers should be considering how to resource the implementation of new technologies and how that might impact workforce structuring.

Whilst a large number of industries and sectors are predicted to use AI, some are seen as particularly likely to be early adopters. Financial services, healthcare (particularly the pharmaceutical and medical devices/diagnostics sectors) and retail are commonly thought to be the most enthusiastic about this technology.

What happens next?

Within six months of entry into force of the AI Act:

There will be a prohibition of unacceptable risk AI systems, such as those which deploy “subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or effect of materially distorting …behaviour” (Article 5).

Within nine months of entry into force of the AI Act:

Codes of Practice for General Purpose AI (GPAI) are to be finalised, with an emphasis on industry working with regulators on this matter. Documentation will need to be clear and transparent, setting out details of risk management and training. All documentation, whether aimed at staff and their interaction with AI systems or at the general public or other users, will need to be comprehensible and transparent and will need to include specific elements concerning the technical make-up of the AI systems.

Within twelve months of entry into force of the AI Act:

GPAI rules will apply, and Member States are to have established competent authorities which will work with the EU AI Office. The AI Office will undertake an annual EU-wide review of the functioning of the AI Act generally and, in particular, will review potential prohibitions (similar to the process to be adopted within six months).

Member States are also to have proposed an administrative fines process.

Within eighteen months of entry into force of the AI Act:

The EU intends to provide guidance notes detailing the classification of High-Risk AI systems as set out in Article 6.

Within 24 months of entry into force of the AI Act:

Member States are to have in place at least one national AI regulatory sandbox.

Within 36 months of entry into force of the AI Act:

The EU will ensure that obligations are in place for High-Risk AI systems relating to biometrics, critical infrastructure, education, access to essential public services, law enforcement, immigration and the administration of justice. The EU will also review and potentially amend the list of High-Risk AI systems.

KPMG Law and the AI Act

As the world of AI evolves around us, our priority at KPMG Law and KPMG Consulting is to provide clients with best-in-class services through an integrated, multidisciplinary team.

Queries? Contact our team

David McMunn

Director & Head of Technology & Digital Law
KPMG Law

Aoife Newton

Head of Employment and Immigration Law
KPMG Law

Sean Redmond

Director
KPMG in Ireland

Discover more in Technology & Digital Regulation