Artificial intelligence: green light from the EU Parliament for the regulation

Among the rules, the one against facial recognition in public places and predictive policing

The European Parliament has approved in Strasbourg the AI Act, the new regulation on artificial intelligence, which includes, among other things, clear rules against facial recognition in public places and predictive policing. There were 499 votes in favour, 28 against and 93 abstentions. Negotiations with the Council and talks with the individual governments of the European Union will start today to draft the definitive text.

The rules, the European Parliament explains, follow a risk-based approach and establish obligations for providers and those deploying AI systems, graduated according to the level of risk the AI can generate. AI systems posing an unacceptable level of risk to people’s safety would be banned, such as those used for social scoring (classifying people according to their social behaviour or personal characteristics). MEPs expanded the list to include bans on intrusive and discriminatory uses of artificial intelligence, such as ‘live’ and ‘post’ remote biometric identification systems in publicly accessible spaces, and biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation).

MEPs also want to ban predictive policing systems (based on profiling, location or past criminal behaviour); emotion recognition systems in law enforcement, border management, workplaces and educational institutions; and the untargeted scraping of facial images from the Internet or CCTV (closed-circuit) footage to create facial recognition databases, which violates human rights and the right to privacy.

The classification of high-risk applications will now include AI systems that cause “significant harm” to people’s health, safety, fundamental rights or the environment. AI systems used to influence voters and the outcome of elections, as well as the recommender systems used by social media platforms with more than 45 million users, have been added to the high-risk list.

Providers of foundation models (a new and rapidly evolving development in the field of AI) would have to assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy and the rule of law) and register their models in the EU database before placing them on the market. Generative artificial intelligence systems based on such models, such as ChatGPT, would have to comply with transparency requirements (disclosing that the content was generated by artificial intelligence, which also helps distinguish so-called deep-fake images from real ones) and ensure safeguards against the generation of illegal content.

Detailed summaries of the copyrighted data used for training would also have to be made public. To boost AI innovation and support small and medium-sized enterprises, MEPs added exemptions for research activities and AI components supplied under open-source licences. The new law promotes so-called regulatory sandboxes: real-life environments set up by public authorities to test artificial intelligence before it is deployed.

Finally, MEPs want to strengthen citizens’ right to file complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that significantly affect their fundamental rights. MEPs also reformed the role of the EU’s AI Office, which would be tasked with monitoring the implementation of the Artificial Intelligence regulation. Interinstitutional negotiations with the Council will now begin to give the law its definitive form.