European Parliament Adopts the Artificial Intelligence Act

European Parliament: In a landmark moment, the European Parliament adopted the Artificial Intelligence Act on 13 March 2024, pioneering a legislative framework for the regulation of artificial intelligence (AI) that aims to ensure safety and compliance with fundamental rights while boosting innovation.[1]

The law, which aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, was passed with 523 votes in favour, 46 against and 49 abstentions.

The law responds to citizens’ proposals from the Conference on the Future of Europe (COFE), including proposals on enhancing the EU’s competitiveness in strategic sectors, building a safe and trustworthy society (including countering disinformation and ensuring humans remain ultimately in control), and promoting digital innovation while ensuring human oversight and the trustworthy and responsible use of AI.

The Act is now subject to a final lawyer-linguist check and is expected to be finally adopted once it is formally endorsed by the Council.[2]

Salient Features of the Legislation

  • The adopted text of the 459-page Act states that, “This Regulation should be applied in accordance with the values of the Union enshrined in the Charter, facilitating the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI”.

  • The law does not affect the obligations of providers and deployers of AI systems in their role as data controllers or processors stemming from Union or national law on the protection of personal data in so far as the design, the development or the use of AI systems involves the processing of personal data. It is also appropriate to clarify that data subjects continue to enjoy all the rights and guarantees awarded to them by such Union law, including the rights related to solely automated individual decision-making, including profiling.

‘Biometric identification’-

  • Should be defined as the automated recognition of physical, physiological and behavioural human features, such as the face, eye movement, body shape, voice, prosody, gait, posture, heart rate, blood pressure, odour and keystroke characteristics, for the purpose of establishing an individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in a reference database, irrespective of whether the individual has given consent.

  • Biometric Identification excludes AI systems intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having security access to premises.

‘Biometric categorisation’-

  • Should be defined as assigning natural persons to specific categories on the basis of their biometric data. Such specific categories can relate to aspects such as sex, age, hair colour, eye colour, tattoos, behavioural or personality traits, language, religion, membership of a national minority, sexual or political orientation.

  • Biometric categorisation does not include biometric categorisation systems that are a purely ancillary feature intrinsically linked to another commercial service, meaning that the feature cannot, for objective technical reasons, be used without the principal service, and the integration of that feature or functionality is not a means to circumvent the applicability of the rules of this Regulation. For example, filters categorising facial or body features used on online marketplaces could constitute such an ancillary feature, as they can be used only in relation to the principal service, which consists in selling a product by allowing the consumer to preview the display of the product on him or herself and helping the consumer to make a purchase decision. Filters used on online social network services which categorise facial or body features to allow users to add or modify pictures or videos could also be considered an ancillary feature, as such a filter cannot be used without the principal service of the social network, which consists in the sharing of content online.

Emotion Recognition System-

Should be defined as an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. The notion refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. It does not include physical states, such as pain or fatigue.

Instances of High-Risk AI Systems-

  • Remote biometric identification systems should be classified as high-risk in view of the risks that they pose. Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. The risk of such biased results and discriminatory effects is particularly relevant with regard to age, ethnicity, race, sex or disabilities. Such classification excludes AI systems intended to be used for biometric verification, including authentication, the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the sole purpose of having access to a service, unlocking a device or having secure access to premises.

  • AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services.

  • AI systems used in education or vocational training, among other areas, should be classified as high-risk AI systems because, if improperly designed and used, such systems may be particularly intrusive and may violate the right to education and training as well as the right not to be discriminated against, and may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation.

  • AI systems specifically intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence units carrying out administrative tasks analysing information pursuant to Union anti-money laundering law should not be classified as high-risk AI systems used by law enforcement authorities for the purpose of prevention, detection, investigation and prosecution of criminal offences.

General-Purpose AI Systems-

General-purpose AI systems may be used as high-risk AI systems by themselves or be components of other high-risk AI systems. Therefore, due to their particular nature and in order to ensure a fair sharing of responsibilities along the AI value chain, the providers of such systems should, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems and unless provided otherwise under this Regulation, closely cooperate with the providers of the relevant high-risk AI systems to enable their compliance with the relevant obligations under this Regulation and with the competent authorities established under this Regulation.

AI Regulatory Sandboxes-

Member States should ensure that their national competent authorities establish at least one AI regulatory sandbox at national level to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.


1. Artificial Intelligence Act: MEPs adopt landmark law | News | European Parliament (europa.eu)

2. Supra note 1.
