N. Pirilides & Associates LLC


The time to Act is now: EU adopts Artificial Intelligence Act

Following the rapid growth of technology in recent years, and more specifically of Artificial Intelligence (AI) systems, there are now multiple accessible AI platforms that users can turn to on a daily basis, from the comfort of their home or any public place. However, such rapid growth and technological advancement come with certain risks, raising global concerns. Those concerns led to the passing of the EU AI Act, a legislative framework for the control and oversight of AI platforms.

The AI Act extends risk management beyond health and safety to consider impacts on fundamental rights, such as the protection of personal data, respect for private life, respect for ethical principles and non-discrimination.

The AI Act is the first-ever comprehensive legal framework on AI worldwide, and its main goal is to mitigate the various dangers that accompany an advancement set to change modern technologies and the daily lives of many. To regulate AI systems, the Act adopts a risk-based approach, classifying them into four risk categories depending on their use cases: “Unacceptable Risk”, “High Risk”, “Limited Risk” and “Low or Minimal Risk”.

  • Unacceptable Risk: AI systems deemed to pose a threat to people fall into the unacceptable risk category. Such systems include, among others, social scoring systems that classify people based on their social behaviour or personality, biometric categorisation systems that infer sensitive attributes such as race or religion without permission or legal grounds to do so, and real-time remote biometric identification in public spaces. It is important to note that some “Unacceptable Risk” systems can be used under specific circumstances: remote biometric identification, for instance, may be used to prevent major crimes or terrorist attacks, or to assist in searches for missing persons or fugitives who pose a serious threat to the public. To ensure that no rights are affected, the police must complete a fundamental rights assessment and register the system in the EU database; in a matter of high urgency the police may proceed, but must file the assessment afterwards without undue delay. Companies owning such Unacceptable Risk systems will have to phase them out within six months of the adoption of the law in order to avoid breaches.
  • High Risk: An AI system is characterised as high risk when it is either installed in products covered under the EU’s product safety legislation, such as toys, cars, aviation and medical devices, or used in specific areas such as law enforcement, the administration of justice, education, private and public services, border control and more. Companies owning such AI systems must at all times ensure high-quality data, record-keeping and technical documentation, transparency, human oversight, consistency, accuracy, cybersecurity, and software and hardware robustness.
  • Limited Risk: Systems regarded as limited risk are mostly everyday systems that people use and interact with. They include, among others, AI systems that interact with people, such as chatbots (computer programs that simulate human conversation with an end user), emotion recognition systems, and AI systems that assist in generating or transforming images, audio and video. Limited risk systems must be transparent and make sure that users know they are interacting with a machine. It must be noted that even though image and video transformation is regarded as limited risk, the use of such products on social media can spread large-scale misinformation and cause public panic, as happened with the fake picture of the Eiffel Tower on fire in early January 2024, which attracted more than 4 million interactions.
  • Low or Minimal Risk: Any AI system that does not fall under any of the above-mentioned tiers is considered low or minimal risk and remains unregulated.

Following the rapid development and advancement of AI systems, the EU AI Act comes at the right time to moderate and “control” each class of system separately, ensuring that human rights, social security and integrity remain unaffected while the benefits of applying such systems remain achievable. Through the Act, people will gain the protection they have asked for, easing concerns about implications that could negatively affect their rights. However, the Act must be kept up to date so that further advances in AI do not outpace the controls it imposes.