
Who Must Comply With The AI Act, And What Are The Obligations?


The European Artificial Intelligence Act (AI Act) officially entered into force on 1 August 2024, marking a pivotal moment in the history of AI regulation.

The Act does not apply in full until 2 August 2026, but some provisions take effect earlier: prohibitions on unacceptable-risk AI systems, for example, apply from 2 February 2025.

With these changes rolling out over the coming months, it's important to understand what is happening when, who must comply with the AI Act, and the key requirements organisations may need to address to ensure compliance. In this article, a team of experts specialising in AI compliance and DPO as a service have brought together the key points on compliance with the new Act.

Who needs to comply with the EU AI Act?

Similar to the General Data Protection Regulation (GDPR), the AI Act has extra-territorial reach. It can apply to any organisation marketing, deploying or using an AI system in the EU, even if the system is developed or operated outside the EU, giving the law global implications.

This approach aims to ensure consistent standards across the EU, while also ensuring the fundamental rights of EU residents are respected, regardless of international boundaries.

What are the key obligations?

Obligations for organisations are determined by two main factors: the risk level of the AI system and the organisation's role in the supply chain.

Concerning risk levels, the AI Act categorises AI systems based on their potential risk, dividing them into three main levels. As a brief overview, these levels are:

  • Prohibited systems, which pose unacceptable risks to health, safety, or human rights. These are banned entirely.

  • High-risk systems, which can significantly impact people’s safety, wellbeing, or rights, are permitted, but must adhere to strict regulatory requirements. These systems often relate to sectors like medical devices, employment, and law enforcement.

  • Low-risk systems, which pose minimal danger, carry fewer compliance obligations and offer businesses more flexibility.

You can read more about risk categorisation in Article 6 and Annexes I and III of the AI Act.
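
To make these tiers concrete, here is a minimal sketch in Python that models the three levels as an enum with an illustrative use-case lookup. The example use cases and the risk_level helper are assumptions for the sake of illustration, not a legal classification tool.

```python
# Illustrative sketch only: the AI Act's three risk tiers, as summarised in
# this article, modelled as an enum. Not a substitute for legal analysis.
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"  # unacceptable risk: banned entirely
    HIGH = "high"              # permitted, subject to strict requirements
    LOW = "low"                # minimal danger, fewer obligations

# Hypothetical mapping of use cases to risk levels, chosen for illustration.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskLevel.PROHIBITED,
    "cv screening for recruitment": RiskLevel.HIGH,
    "spam filtering": RiskLevel.LOW,
}

def risk_level(use_case: str) -> RiskLevel:
    """Look up a use case in the illustrative table (defaults to LOW)."""
    return EXAMPLE_USE_CASES.get(use_case.lower(), RiskLevel.LOW)

print(risk_level("CV screening for recruitment"))  # RiskLevel.HIGH
```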

Defining your organisation’s role

Under the AI Act, organisations fall into one of six distinct roles, each with its own set of obligations.

Provider

An individual or organisation that designs and develops an AI system and makes it available for use in the EU. Providers are responsible for ensuring the system meets the necessary requirements of the AI Act.

Deployer

Individuals or organisations that use an AI system developed by a Provider. The responsibilities of Deployers under the AI Act are minimal if the AI system is used without any changes.

However, if a Deployer modifies the system significantly or uses it under their own name or trademark, they then take on the Provider’s responsibilities, meaning they must ensure the AI system meets the relevant regulations and standards, just as the original Provider would.

Distributor

An individual or organisation in the supply chain (other than a Provider or Importer) who makes an AI system available on the EU market.

Importer

Any natural or legal person based in the EU who places on the EU market an AI system bearing the name or trademark of a person established outside the EU.

Product Manufacturer

Individuals or organisations that place an AI system on the EU market, or put one into service, as an integral part of another product under their own name or trademark.

Authorised Representative

An individual or organisation based in the EU who has been formally appointed by a Provider located outside the EU. The Representative takes on the responsibility of managing and fulfilling the regulatory obligations and documentation required by the AI Act on behalf of the Provider.

Providers and Deployers

The obligations of the AI Act mostly apply to those in the Provider role. This is good news for the majority of organisations, who would be considered Deployers. For example, a company using ChatGPT to support internal processes typically falls under the Deployer category, meaning their primary responsibility is to ensure they use the AI system in compliance with existing guidelines and data protection obligations, rather than dealing with the greater obligations imposed on Providers.

That said, Deployers still have some responsibilities, and organisations will need to assess whether any customisation or modification of an AI system might shift their role into the territory of a Provider.
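
That role-shift assessment can be pictured as a simple decision rule. The sketch below is an illustrative assumption: the Deployment fields and the effective_role helper are hypothetical names, and "substantial modification" is ultimately a legal judgement rather than a boolean flag.

```python
# Minimal sketch of the role-shift rule described above: a Deployer that
# substantially modifies an AI system, or markets it under its own name or
# trademark, takes on the Provider's obligations. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class Deployment:
    substantially_modified: bool  # e.g. retrained or repurposed the system
    own_name_or_trademark: bool   # offered under the deployer's own brand

def effective_role(d: Deployment) -> str:
    """Return the role whose obligations apply to this deployment."""
    if d.substantially_modified or d.own_name_or_trademark:
        return "Provider"  # inherits the original Provider's obligations
    return "Deployer"      # lighter obligations apply

print(effective_role(Deployment(False, True)))  # Provider
```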

There are some requirements that must be met by both Providers and Deployers of AI systems:

AI literacy: All staff and agents using AI systems must have an appropriate level of AI literacy. The required level depends on their roles and the associated risks, similar to the requirement for mandatory data protection training under the GDPR.

Transparency: Any AI system interacting with individuals (a 'natural person' in the Act's terms) must meet transparency obligations, such as clearly marking content that is generated or manipulated by AI (a minimal marking sketch follows this list).

Registration: High-risk AI systems must be registered in the EU database. The process is similar to data protection registration with a supervisory authority.
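
As a rough illustration of the transparency point above, the sketch below prefixes AI-generated content with a disclosure label. The mark_ai_generated function and the label wording are assumptions; the Act sets the obligation, not this particular format.

```python
# Illustrative only: one possible way to mark AI-generated content so a
# natural person can recognise its origin. Label wording is an assumption.
def mark_ai_generated(content: str) -> str:
    """Prefix content with a human-readable AI disclosure label."""
    return f"[AI-generated content] {content}"

print(mark_ai_generated("Your order has shipped and should arrive Friday."))
```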

Obligations For Providers

Providers are responsible for designing, developing, and bringing an AI system to market. They control the creation and operation of these systems, so they have a key role in ensuring their system meets the required standards for safety, effectiveness, and ethical considerations.

Transparency and accountability are core principles of the AI Act. Providers must ensure their AI systems are easy to understand, and must clearly communicate the system’s functionalities, limitations, and potential risks. This helps users know exactly what to expect and how to use the AI system safely and effectively.

These are essential compliance obligations for Providers (with some also impacting Deployers); a simple tracking sketch follows the list:

Establish a risk management system: Implement a process to regularly review the AI system, identifying, evaluating and mitigating any risks

Implement effective data governance: Develop clear procedures for managing training data, including ensuring diversity and establishing protocols for data handling and protection

Prepare technical documentation: Create detailed and accessible documentation about the AI system’s design, functionality, and performance to facilitate user understanding before it reaches the market

Maintain event logs: Set up automatic logging systems to track the AI system’s operations and any issues that may arise

Create usage documentation for Deployers: Provide Deployers with clear guides on how to use the AI system (Deployers must also maintain documentation relevant to their use of the system, should it differ)

Establish human oversight: Design the AI system to allow for human intervention and monitoring (Deployers must also ensure that the AI systems they use are designed to allow human intervention)

Ensure accuracy and robustness: Confirm that the AI system is reliable and resilient in its operations, and suitable for its intended purpose

Implement cybersecurity measures: Integrate strong cybersecurity practices to protect the AI system from potential threats

Maintain a quality management system: Establish a quality management system to oversee the ongoing development of the AI system

Address issues and conformity: Quickly address any issues with the AI system and withdraw any systems that do not meet compliance standards (Deployers must also ensure AI systems comply with established standards and address any problems promptly)

Complete documentation and assessments: Ensure all required documentation and conformity assessments are accurately completed and retained for at least 10 years

Appoint a Representative: If needed, appoint an individual or entity to support compliance obligations and act as the point of contact between the Provider and regulatory authorities, particularly relevant when the Provider is based outside the regulatory jurisdiction

Cooperate with supervisory authorities: Be ready to liaise with regulatory bodies, providing requested information and assisting with inspections or audits to demonstrate compliance

Impose responsibilities on importers and distributors: Ensure all parties in the AI supply chain are aware of and adhere to their compliance standards, including completion of conformity assessments
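
One pragmatic way to keep on top of these obligations is a simple checklist. The sketch below mirrors the list above; the obligation strings and the outstanding helper are illustrative assumptions, not a conformity-assessment tool.

```python
# Illustrative checklist sketch mirroring the Provider obligations listed in
# this article. The names and the tracking logic are assumptions, not a
# conformity-assessment procedure under the AI Act.
PROVIDER_OBLIGATIONS = [
    "risk management system",
    "data governance",
    "technical documentation",
    "event logging",
    "usage documentation for deployers",
    "human oversight",
    "accuracy and robustness",
    "cybersecurity measures",
    "quality management system",
    "issue handling and conformity",
    "documentation retained for 10 years",
    "authorised representative (if applicable)",
    "supply-chain responsibilities imposed",
]

def outstanding(completed: set[str]) -> list[str]:
    """Return the obligations not yet marked complete."""
    return [o for o in PROVIDER_OBLIGATIONS if o not in completed]

done = {"risk management system", "event logging"}
print(outstanding(done))  # prints the obligations still outstanding
```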

Summary

The AI Act sets the first global standards for the responsible development and deployment of artificial intelligence systems.

As with many new regulations, the Act has sparked concerns and debate among various stakeholders, including industry associations, tech companies, and legal professionals. These concerns echo the initial criticisms that surrounded the introduction of the GDPR: namely, the potential difficulty of interpreting and implementing the Act's provisions.

Despite its complexity, the AI Act, much like the GDPR, has a structured approach that makes implementation more manageable. There are clear definitions for the six roles in the AI supply chain, and each role comes with specific compliance obligations. With the AI Act coming into full effect in August 2026, it is essential for organisations to familiarise themselves with the different roles, the various compliance obligations, and how and when they will apply.


