By IQONIC.AI

Unmasking Deepfakes: AI, Ethics, and the Battle Against Misinformation

Deepfakes are deceptively realistic manipulated images, audio, or video recordings generated with the help of artificial intelligence. They are often published for fun, entertainment, or activism, imitating faces or voices in a convincingly real way. With readily available programs, apps, and other software, almost anyone can create highly realistic videos: faces and voices can be swapped in real time, and people can be placed in entirely different contexts, saying things they never said or doing things they never did. While this can be entertaining, it also carries serious dangers such as manipulation and disinformation.

Instagram flags digitally altered images as false information, displaying an "Altered photo/video" warning on posts to curb the spread of deepfakes.

Abuse Potential of Deepfakes for Society

Deepfakes can pose a major threat to society and politics, especially when they are used to manipulate public opinion and deliberately influence political processes. Disinformation campaigns are likely to be accompanied by deepfakes more and more frequently in the future, amplifying their effect. The quality of deepfakes is constantly improving as computing power grows and artificial intelligence advances in leaps and bounds, making it increasingly difficult to distinguish authentic material from artificially created or altered material. Rapid spread via social media further increases the danger.

How can the negative consequences of deepfakes be minimized while still reaping the benefits for society?

There are only a few international and national regulations covering deepfakes. In Germany, the Network Enforcement Act (NetzDG), which also addresses the handling of AI manipulation and disinformation, has served as a guideline to date. The platforms themselves usually apply stricter rules than the legal provisions of the NetzDG require, deleting or flagging the relevant content based on their own house rules and codes of ethics. This is particularly important when it comes to user-generated content.

In Germany, the Federal Ministry's "Research for Technological Sovereignty and Innovation" department is also responsible for funding research projects to identify and combat disinformation and deepfakes. For example, scientists are developing programs that themselves use artificial intelligence to check the authenticity of video and audio content and thus identify fakes.

Another form of protection is teaching digital and media literacy. After all, news literacy and digital information literacy are crucial for recognizing disinformation, and deepfakes in particular.

What can I do to recognize deepfakes?

  • Ensure good image quality: the larger the image, the easier it is to spot inconsistencies. It is therefore best to watch sensitive videos not on a phone but on a larger monitor. Good color settings also reveal inconsistencies, for example in skin tone.

  • Pay attention to the person's facial expressions: are there natural reactions such as blinking, frowning, or frown lines? AIs still struggle to render these convincingly.

  • Watch a video slowly or more than once to recognize any distortions!

  • Check the source for reliability!
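The blink check mentioned above can be sketched as a simple heuristic. Assuming you already have per-frame eye-aspect-ratio (EAR) values from some facial-landmark detector (in practice a library such as MediaPipe or dlib would supply them; the values below are illustrative dummy data), an implausibly low blink rate can be flagged like this. All function names and thresholds here are hypothetical choices, not part of any standard tool:

```python
# Hypothetical heuristic: humans typically blink well over a handful of
# times per minute; early deepfakes often showed far fewer blinks.
# EAR (eye aspect ratio) drops sharply while the eye is closed.

def count_blinks(ear_values, threshold=0.2, min_frames=2):
    """Count blinks: runs of >= min_frames consecutive frames with EAR below threshold."""
    blinks = 0
    run = 0
    for ear in ear_values:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # run that ends at the last frame
        blinks += 1
    return blinks

def blink_rate_suspicious(ear_values, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose inferred blink rate is implausibly low."""
    minutes = len(ear_values) / fps / 60
    if minutes == 0:
        return False
    rate = count_blinks(ear_values) / minutes
    return rate < min_blinks_per_minute

# One minute of 30 fps footage with no blinks at all -> suspicious.
no_blinks = [0.3] * 1800
# Ten blinks of 3 frames each in the same minute -> plausible.
normal = ([0.3] * 177 + [0.1] * 3) * 10
```

A real detector would combine many such cues (lighting, lip sync, compression artifacts); a single heuristic like this is easy to fool and serves only to illustrate the idea.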

Although AI offers many advanced possibilities, its output must always be questioned and treated with caution so that its risks can be recognized and managed accordingly!

