
EU rules on AI lack punch to sufficiently protect consumers

Published on 09.12.2023

The EU institutions have agreed on the world’s first legislation to specifically regulate artificial intelligence, but the result is underwhelming given the breadth of risks from which consumers will remain insufficiently protected.

On the positive side, the EU’s AI Act bans social scoring, a practice that is demeaning and discriminatory towards consumers. The AI Act also grants consumers certain rights, such as the ability to lodge a complaint with a public authority against an AI system, or to seek collective redress if an AI system causes mass harm.

However, the legislation has several gaps and flaws from a consumer perspective. AI systems that can identify and analyse consumers’ feelings (emotion recognition systems) will still be allowed, which is very worrying given how invasive and inaccurate they are. The AI Act also focuses heavily on high-risk AI systems, leaving too many AI systems, such as AI-embedded toys or virtual assistants, essentially unregulated. In addition, the underlying models behind systems like ChatGPT, which can be integrated into a broad range of services, will not be sufficiently regulated. For example, there is no obligation to have such models audited by an independent third party, nor will they be subject to transparency requirements strong enough to ensure public scrutiny.

Ursula Pachl, Deputy Director General of the European Consumer Organisation (BEUC), said:

“The AI Act contains both good and bad points, but overall the measures to protect consumers are underwhelming. Consumers rightly worry about the power and reach of artificial intelligence and how it can lead to manipulation and discrimination, but the AI Act doesn’t sufficiently address these concerns. Too many issues have been left under-regulated, with an over-reliance on companies’ goodwill to self-regulate. For example, virtual assistants or AI-powered toys will not be regulated enough as they are not considered high-risk systems. Also, systems like ChatGPT or Bard will not get the guardrails necessary for consumers to trust them.

“There are, nevertheless, some important provisions which will allow consumers to take action if they have been treated unfairly or have been harmed. For example, they will be able to go to court as a group if an AI system has caused mass harm. It is also positive that the hugely unfair and arbitrary practice of social scoring will not be allowed, while insurance using AI is considered high-risk, meaning it will be subject to extra requirements.

“Overall, the AI Act should have done more to protect consumers. It is now crucial that authorities make sure this legislation is properly enforced to protect consumers as much as possible. Other legislation such as product safety, the GDPR and consumer law must be used to provide an additional safety net for consumers. Consumers should never be left unprotected or powerless when AI is used to make decisions about them.”

Contact: Sébastien Pant, Deputy Head of Communications, BEUC