25 arrested in global hit against AI-generated child porn | Europol

25 people had their lives upended as a result of an ongoing Europol crackdown against users of an AI porn-sharing community. 17 governments across Europe colluded to deanonymize and arrest users of a digital platform who were accused of accessing fully-AI-generated porn with characters that allegedly appeared to be under 18.

According to Europol, most of the arrests were carried out on 26 February 2025, and over 200 users have been deanonymized. It is an ongoing operation and more arrests are expected in the coming weeks.

In its press release, Europol admitted that there were no actual victims but attempted to justify the raids against 25 private individuals based on the alleged “objectification of children.” There is no evidence that the people arrested have harmed anyone through their internet usage.

While the specific technical means by which Europol deanonymized users across multiple countries were not disclosed, the platform did include a paid subscription service that may have been traceable. Notably, the arrests occurred months after the administrator of the porn community was arrested. In the past, Western law enforcement agencies have arrested owners of internet platforms deemed illegal and continued to operate the website to ensnare users.

The internet porn raids conducted by European countries occurred alongside their attempts to mandate backdoors in private communication services that report users to the police for sharing banned pornographic content, which, in countries such as the UK and Australia, includes AI-generated and anime porn featuring characters that appear to be under 18. The Electronic Frontier Foundation and other civil liberties advocates have strongly condemned previous proposals to force mass scanning of private communications.

These two again!

allegedly appeared to be under 18

allegedly

That’s why AI shouldn’t have existed at all, because at the end of the day most people just do awful stuff with it. At this point I’ve never heard positive news about AI doing genuinely good things. But that’s how things are nowadays. :man_shrugging:

“That’s why encrypted messengers shouldn’t have existed at all, because at the end of the day most people are just doing awful stuff with them.
We have to protect our children, please ban all of this.”

6 Likes

AI is a wonderful tool that accompanies me in my daily work and saves me a considerable amount of time. Let’s avoid making generalizations and saying that because some lunatics use this tool for criminal content, it should be banned - because that’s the same kind of sh*tty argument governments use to kill WhatsApp, Signal, SimpleX and other encrypted apps.

Besides, we’re getting off the subject of confidentiality, which is the main real problem with AI, especially conversational AI. Personally, I use OpenAI’s o1 models (most recently o3), and their efficiency is impressive, but I have to accept that they set my confidentiality aside until I find a model with the same performance that my laptop can run locally (which I don’t think will happen for a while, as my resources are limited).

I’m not sure where OP is getting the “allegedly” from. It’s quite clear from the Europol statement and articles in the press that the images are of children.

https://www.reuters.com/world/europe/two-dozen-arrested-distributing-ai-generated-child-abuse-images-2025-02-28/

Hmm. Syllogism? Isn’t this a very small minority? Ban it all for abuse by them?

No different to how cars can be used dangerously by a few lunatics. It wouldn’t be prudent to stop building roads or making and selling cars just for that, would it?

The silver lining here is, the transgressors were pursued and jailed. The checks (by law enforcement) are working as intended.

Of course, the problem in the digital world (as in the real one, too) is that identity can be anonymous, hence the angling by governments all over for “backdoors”, in a fight to keep those checks in place. In the real world, state-given identity is almost mandatory in all instances (while purchasing/driving a car, for example).

But as with “people” (minority or not), those checks will similarly be abused by the “State”. And on & on such reasoning goes …

If one is advocating for a ban instead of a backdoor, I can understand that. But I’d rather they advocate for regulations instead of a blanket ban. Backdoors, I feel, are inevitable, as justification for it follows kind of frictionlessly from laws/regulation (you can’t enforce what you can’t govern). But backdoors, as designed, break most cryptographic guarantees… And on & on it goes …

4 Likes

AI-generated images do not contain photos of children; they are not photos.

4 Likes

It’s the same thing; it doesn’t matter whether those children are real or not. It’s something disgusting, and these people should be “treated” for their disease, because with AI’s accuracy the human brain can’t tell the difference between a real image and a fake one.

That’s the same shit with l*licon images: the problem is the sexual attraction to children, and there’s nothing sane about that (even if it’s easier to see the difference there, the problem remains the same).

1 Like

Crimes without victims should not be punished. The demand for the blood of people who do not commit real violent crimes is disgusting.

3 Likes

From a technical standpoint, I wonder if it is possible this operation was caused by the AI generators themselves scanning and reporting content.

In any case this is a poster child for using client-side models, keeping anonymity etc.

Is this an AI image of a “Child”?

Is the robo-girl 17 or 19? It’s impossible to say; it depends on whether the government decides to arrest someone. The legal definition of “child” in this case extends to anyone under 18; they didn’t specify prepubescent or pubescent, just legally a “child”. 17-year-olds and 20-year-olds aren’t even visibly distinguishable, which is why forensic examiners have to either check ID or use a forensic odontologist to examine someone’s teeth structure — something that can’t be done from an image, let alone an AI-generated character. This isn’t a theoretical issue: such governments have arrested innocent people for “child pornography” when they were in reality looking at legal adults whom the authorities suspected of being teenagers.

2 Likes

Who talked about blood?

Psychological and medical help is needed for these people, but it also means isolating them from society during this process, as they can represent a danger to themselves and their children.

I think the photos concerned must have been much more explicit than your example for them to attract the attention of the authorities. I understand what you mean, but I don’t think this kind of borderline case is worth debating, because it’s usually pretty incriminating.

We’re not talking about 17-19 year olds, but under 15-16.

1 Like

Sadly, since those are considered CSAM, we can’t really see and decide for ourselves.

Psychological and medical help is needed

Attraction does not necessarily involve action, and it does not necessarily have to be compulsive. Attraction to children in itself is not even considered pedophilia. To establish a diagnosis, certain criteria must be met, not just attraction.

See

For Pedophilic Disorder to be diagnosed, the individual must have acted on these thoughts, fantasies or urges, or be markedly distressed by them

https://icd.who.int/browse/2025-01/mms/en#517058174

1 Like

Not only talked:

On 14 July 2013, Ebrahimi was murdered by his neighbour, Lee James.[3] James had falsely accused Ebrahimi of being a paedophile

In general, I have several problems with artificial intelligence, since its expansion has amplified problems we already had on the network, such as scams and spam. While there may be valid reasons to consider countermeasures, that does not mean we shouldn’t also be skeptical about the problems those countermeasures could themselves cause. In this case, the measures used to combat the generation of fictitious sexual content appearing to be underage necessarily rely on a person’s visual perception: since the characters do not exist, there is no way to objectively determine their age beyond whether they subjectively appear underage to us. In some cases it may be clear whether a depiction is underage, but in others the line is not clearly defined, which increases the risk of such measures being extended to non-harmful content — for example, images of adult women who do not appear to be adults — and of increased censorship on the network. Coincidentally, several countries mentioned in the article already have a history of this problem:

«The proposed Australian Government clampdown on smut just got a whole lot broader, as news emerged of a ban on small breasts and female ejaculation in adult material.»

1 Like

This post reads as very strange, if not misleading.

For clarity, if a quote comes from Europol I will note so by adding “(EP)”, and if it comes from the original post I will add “(OP)”.


Why make no mention in the title that the AI-generated porn was CSAM, depicting the suspects as “users of an AI porn-sharing community” (OP), and not going further than to state only once that the material in question “allegedly appeared to be under 18” (OP)? This was core to the entire operation - they weren’t arrested because it was AI generated but because they “were part of a criminal group whose members were engaged in the distribution of images of minors fully generated by artificial intelligence” (EP), with the “administrator of the porn community” (OP) being a suspect who is accused of running “an online platform where he distributed the AI-generated material he produced” (EP) for others willing to pay to “watch children being abused,” (EP) Europol alleged.

Why depict it like it is some sort of anime-stylized-17-or-18-years dilemma? Europol makes no mention of age (apart from minors, obviously). It is just as likely that the material was, for example, photorealistic and depicted much younger subjects (if anything more likely, given it was a paid platform).

Why take a clear stance on who the victim is here and act as if Europol is abusing its power? The post states the suspects of the criminal investigation “had their lives upended” (OP) and that Europol “attempted to justify” (OP) the operation, treating AI-generated CSAM as harmless.

Why compare it to breaking E2EE for everyone by mass scanning of private communications, something that affects hundreds of millions of users? This was a targeted operation limited to 273 suspects/users, resulting in 25 arrests.


The entire post reads less like objective reporting on Europol’s press release and more like an opinion piece trying to convince you. There’s plenty to say about the fight between law enforcement and privacy, but this is not the same.

1 Like

AI needs to be trained on real data — so this particular AI was very likely fed thousands and thousands of real CSAM photos.

And that’s the point where AI-generated CSAM definitely is problematic. Generative models are known for producing results that are sometimes very close to an original.

Plus to train the AI you have to be in possession of the source material, which already is a crime in many countries.

7 Likes

Crazy thought, but it seems to me the AI companies should be held partly responsible for possessing such material and incorporating it into their models.

2 Likes