It was a bold move by Giorgia Meloni. This week, the Italian Prime Minister posted an AI-generated image of herself in underwear, a picture that had already been circulating on social media. It was intended as a warning: if this can happen to a head of government, it can happen to anyone.

However, the European Union has already been working to tackle deepfakes.

So what exactly is changing?

On Thursday, the EU reached a deal to ban so-called “nudification” apps — the tools used to create non-consensual sexually explicit images and videos through artificial intelligence. The move matters because, while public figures may have lawyers and platforms at their disposal, most victims do not have such resources at hand.

The measure is part of a wider revision of the AI Act, the EU’s flagship legislation on artificial intelligence. The broader overhaul aims to streamline the original rules and make them less burdensome for companies developing AI technologies.

EU legislation usually advances at a glacial pace. But on this issue, Brussels has sped up the process. The ban on “nudification” apps is expected to become fully enforceable across the bloc by December.

What if you become a victim before then?

For now, existing privacy laws still offer some protection, particularly the EU’s GDPR framework. Because your image is considered personal data, you can invoke the “right to erasure” — a legal mechanism that forces platforms to remove fake content.

At Euronews, we know this all too well. Our journalists and reporting have repeatedly been targeted by AI-generated fakes, with voices manipulated and images stolen by coordinated disinformation campaigns or even outlets like Russia Today.

We report these incidents whenever they occur. And if you find a deepfake of yourself, the advice is simple: do not wait to act. Report it too.

Watch the Euronews video in the player above for the full story.
