EU to have the world’s first comprehensive law on Artificial Intelligence by end of 2023

On 14 June 2023, the European Parliament adopted its negotiating position on the AI Act, the first regulation on Artificial Intelligence.

Artificial Intelligence (AI) is the ability of a machine to reproduce human-like capabilities such as reasoning, learning, planning, and creativity. AI systems receive data from their environment, process it, and respond accordingly. They are also able to develop, adapt, and eventually work autonomously by learning from their previous actions (European Parliament 2020).

Although AI systems have been in use for decades, AI technologies have advanced significantly in recent years. Consequently, the European Union started working on a regulatory framework to ensure the safe yet innovative development of AI systems. By the end of 2023, there should be a final agreement on the new law.

Road to EU regulatory framework for Artificial Intelligence

In October 2020, the European Parliament laid the basis for the EU regulatory framework on AI, presenting recommendations on the first set of EU rules for AI. In April 2021, the European Commission published the Proposal for a Regulation laying down harmonised rules on artificial intelligence. The Commission presented a whole package on AI, which together with the Proposal, included also its Communication on fostering a European approach to AI and a review of the Coordinated Plan on Artificial Intelligence with EU Member States.

In May 2023, the Internal Market Committee and the Civil Liberties Committee adopted a draft on this first set of rules. The European Parliament then adopted its position in June 2023. The next step is talks between the European Parliament and EU Member States in the Council to define the final text of the law.

AI Act risk-based approach: what is allowed and what is banned

The Proposal classifies AI-related risks into four different levels: unacceptable risk, high risk, limited risk, and minimal risk.

  • Unacceptable risk includes threats such as cognitive behavioural manipulation of people or of specific vulnerable groups, social scoring (e.g., classifying people based on behaviour, socio-economic status, or personal characteristics), and real-time remote biometric identification systems (e.g., facial recognition).
    • Such AI products will not be allowed on the EU market.
  • High risk relates to AI systems that negatively affect safety or fundamental rights. This group is divided into two categories. The first includes AI systems used in products that fall under the EU’s product safety legislation (toys, aviation, cars, medical devices, and lifts). The second category includes AI systems used in the following areas: law enforcement, migration, education, employment, biometric identification, critical infrastructure management, essential private and public services, and legal interpretation and application of the law.
    • Third-party conformity assessment bodies will need to assess such AI systems before they can enter the EU market.
  • Limited risk is associated with AI systems that interact with humans, such as chatbots, emotion recognition systems, and biometric categorisation systems.
    • Users have to be informed and aware that they are interacting with an AI system. Therefore, such systems will have to comply with transparency requirements.
  • Minimal risk: such AI technologies can be developed without any additional legal obligations. However, providers should respect a code of conduct, whose creation is foreseen in the AI Act.

Generative AI, such as ChatGPT, will have to comply with transparency requirements: such systems will have to disclose that content was generated by an AI system, be designed to avoid creating illegal content, and publish summaries of the copyrighted data used for training.

AI technologies and medical devices

The use of AI technologies in medical devices is rapidly expanding. Medical devices can include software that processes input and output data to process, analyse, or create information for a medical purpose.

Under the current version of the AI Act, medical devices with embedded AI, or AI systems that are themselves medical devices, are “high-risk” devices and therefore require the intervention of a third-party conformity assessment body, the notified body. Whether for an IVD medical device or a medical device, non-EU providers must appoint an authorised representative where an importer cannot be identified as such.

For medical devices software (MDSW), Obelis will:

  • Act as your authorised representative
  • Verify that the documentation is compliant with the applicable legislation
  • Search for a notified body for you
  • Help you compile the documentation
  • Keep you up to date about any changes that impact your devices

As soon as the new AI Regulation comes into effect, manufacturers of medical devices using AI technologies will have to comply with the new legislation as well.

Contact us today to learn more about our services!

Simona Varrella

Publications Department

14/06/2023

References:

European Parliament (2023) EU AI Act: first regulation on artificial intelligence. Retrieved on 14/06/2023.

European Parliament (2023) AI Act: a step closer to the first rules on Artificial Intelligence. Retrieved on 14/06/2023.

European Parliament (2023) AI rules: what the European Parliament wants. Retrieved on 08/06/2023.

European Parliament (2022) The future of AI: the Parliament’s roadmap for the EU. Retrieved on 08/06/2023.

 




