
AI Manipulation In Advertising: An Unnoticed Risk In The EU AI Act?

The EU is currently taking the lead in regulating AI: the AI Act is the most comprehensive AI law in the world. Lawmakers in Brussels have been working on the legislation since 2021. The AI Act categorizes AI applications into risk classes: the more dangerous the AI, the stricter the rules. But is this classification accurate in all cases? We want to take a closer look at the current state of the AI Act and, in particular, at the role of the advertising industry. Has it been misclassified?


The EU AI Act Risk Classifications

At the beginning of December 2023, the member states of the European Union and the members of the European Parliament reached a political agreement on the AI Act. They agreed to distinguish between minimal-, limited-, high- and unacceptable-risk applications. The details are still being worked out, but this much is already certain:

AI systems for targeted mass manipulation or the suppression of human rights, such as social scoring systems, are considered an unacceptable risk and are completely prohibited.

AI applications for applicant assessment, credit scoring, autonomous driving or medical diagnostics are considered high-risk. There are strict requirements for providers, such as risk and quality management systems, data management, human supervision, maintenance of required technical documentation and much more.

Chatbots, manipulated image, audio or video content (deepfakes) and personalized advertising are considered a limited risk, as long as they comply with disclosure requirements. These minimal transparency requirements allow users to make informed decisions: users must be made aware that they are interacting with AI.

Minimal risk covers any other AI systems that do not fit into the above classes. There are no restrictions or mandatory obligations for these systems, but voluntary codes of conduct that follow general principles such as non-discrimination and fairness are encouraged.
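To make the tiered logic concrete, the four classes above can be sketched as a simple lookup. This is purely illustrative: the tier names and example applications come from the summary in this article, while the real Act assigns tiers through detailed legal criteria, not a table.

```python
# Illustrative sketch only: a toy mapping of example AI applications to the
# four risk tiers of the EU AI Act, as summarized in this article.
RISK_TIERS = {
    "unacceptable": {"social scoring", "targeted mass manipulation"},
    "high": {"applicant assessment", "credit scoring",
             "autonomous driving", "medical diagnostics"},
    "limited": {"chatbot", "deepfake", "personalized advertising"},
}

def classify(application: str) -> str:
    """Return the risk tier for a known example; 'minimal' is the catch-all."""
    for tier, examples in RISK_TIERS.items():
        if application in examples:
            return tier
    return "minimal"

print(classify("credit scoring"))            # high
print(classify("personalized advertising"))  # limited
print(classify("spam filter"))               # minimal
```

Note how "minimal risk" falls out naturally as the default: anything not explicitly listed in a stricter tier lands there, mirroring the Act's catch-all structure.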


Manipulation through advertising – A limited risk?

The "high risk" class covers AI applications that pose a serious threat to people's fundamental rights and well-being. One thing is clear: to achieve the best possible and, above all, safe results in these areas, certain rules must be followed. But shouldn't the advertising industry then also be considered high-risk?

AI-powered advertising technologies use algorithms to analyze and target consumer behavior. By analyzing user data in detail, advertisers can create personalized content designed to manipulate emotions and behavior. AI can also be used to embed misleading or false information in ads, confusing consumers and undermining their ability to make informed decisions. And AI can generate personalized content that reinforces addictive behavior, particularly on platforms designed to maximize user attention and encourage repeat use.

To address these dangers, the AI Act may need to establish rules and standards for the ethical use of AI in advertising, including transparency rules, privacy provisions, and measures to prevent manipulation. Is the mere disclosure that AI is being used really enough to enable informed decisions? A fine line.


Ultimately, it is to be hoped that those providers who genuinely use AI to create value will prevail. Even though the AI Act has not yet been finally passed, companies can already start assessing risks and establishing appropriate governance. Everyone who builds AI should address fundamental ethical questions.


To learn more about what the changes caused by the AI Act mean for startups like IQONIC.AI, read this article from The Pioneer.
