The AI Act – What Businesses Need to Know

Published on 17 Apr 2024 | 2 minute read

On 13 March 2024, the European Parliament approved the world’s first comprehensive regulation of the development and use of artificial intelligence – the Artificial Intelligence Act (the “AI Act”). The AI Act serves, on the one hand, to ensure safety and compliance with fundamental rights and, on the other, to sustain and encourage innovation in the use of AI in the EU.

The AI Act establishes a harmonised legal framework for the development and safe use of reliable AI systems within the EU, setting restrictions on how and when AI may be used. For example, the new rules ban certain AI applications that threaten citizens’ rights, and clear obligations are set out for other high-risk AI systems because of their significant potential to harm health, safety and fundamental rights. Even though the AI Act is an EU regulation, it is a realistic possibility that it will have a ripple effect globally, similar to the EU’s General Data Protection Regulation (“GDPR”), by raising awareness of the responsible and ethical use of AI technology, regardless of industry. The AI Act also interacts with intellectual property regulations, product liability rules and AI systems’ handling of personal data.

The AI Act applies to providers of AI systems regardless of whether they are established or located within the EU or in a third country, as well as to importers, distributors, product manufacturers and authorised representatives. By offering a framework for the responsible development of AI technology, it gives businesses of all types a basis for building trustworthy AI systems and tools. It will therefore be of great importance for companies and organisations to ensure that they comply with the regulatory obligations before starting development or implementation.

Additionally, the AI Act provides a risk-classification system with four categories: unacceptable, high, limited and minimal risk. It is therefore crucial for companies and organisations to be aware of these tiers, as different rules apply depending on the risk category. For example, anything at the “unacceptable” level is effectively banned; this level covers AI systems that pose a clear threat to the safety, livelihoods and rights of people, such as certain biometric surveillance tools. At the other end of the scale, “minimal risk” systems include AI-enabled video games and spam filters. For a company to identify the risk levels of the AI systems it has implemented, a first step is to take an inventory of the AI systems in use, followed by an assessment of each system to evaluate its risk level.
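As a purely illustrative sketch (not a form prescribed by the AI Act), the inventory-and-assessment step above can be thought of as maintaining a register of AI systems, each mapped to one of the four risk tiers. The tier names follow the Act’s categories; the AISystem record and the example systems are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # The four risk categories under the AI Act
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # largely unregulated

@dataclass
class AISystem:
    """One entry in a company's AI inventory (hypothetical record)."""
    name: str
    purpose: str
    tier: RiskTier

# Step 1: inventory the AI systems in use (illustrative examples only).
inventory = [
    AISystem("spam-filter", "filters inbound email", RiskTier.MINIMAL),
    AISystem("cv-screener", "ranks job applicants", RiskTier.HIGH),
]

# Step 2: assess each system and flag those triggering obligations or bans.
for system in inventory:
    if system.tier is RiskTier.UNACCEPTABLE:
        print(f"{system.name}: prohibited - must be discontinued")
    elif system.tier is RiskTier.HIGH:
        print(f"{system.name}: high risk - compliance obligations apply")
    else:
        print(f"{system.name}: {system.tier.value} risk - lighter obligations")
```

The actual legal assessment of where a system falls is, of course, a substantive analysis; the sketch only shows how the results of that analysis might be tracked internally.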

One of the cornerstones of the AI Act is transparency: companies and organisations using AI in their day-to-day business, for example to predict trends, are required to disclose the training data used to develop their AI models and systems. In practice, this means disclosing to regulators the data used to train the AI in use. In addition, fabricated or manipulated content such as video, images or audio – so-called deepfakes – must be clearly labelled as such.

Similar to when the GDPR first applied in 2018, we are now entering a new regulatory era that is likely to have a similarly high level of impact on businesses regardless of industry. To comply with the AI Act, companies and organisations must thoroughly understand the AI tools they have implemented, as well as how those systems are used, developed and distributed.

Please get in touch with me if you would like to discuss how Rouse can help your business prepare for the AI Act.

Associate, Legal Counsel
+46 076 0107192
