Finally, someone is taking action against the CSAM machine operating seemingly without penalty.
I am not a fan of Grok, but there has been zero evidence of it creating CSAM. For why, see https://www.iwf.org.uk/about-us/
CSAM does not have a universal definition. In Sweden for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response. If you take a picture of a 14-year-old girl (age of consent is 15) and use Grok to give her a bikini, or make her topless, then you are most definitely producing and possessing CSAM.
No abuse of a real minor is needed.
As good as Australia's little boobie laws.
> CSAM does not have a universal definition.
Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning.
> In Sweden for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response.
No corroboration found on web. Quite the contrary, in fact:
"Sweden does not have a legislative definition of child sexual abuse material (CSAM)"
https://rm.coe.int/factsheet-sweden-the-protection-of-childr...
> If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her bikini, or make her topless, then you are most definately producing and possessing CSAM.
> No abuse of a real minor is needed.
Even the Google "AI" knows better than that. CSAM "is considered a record of a crime, emphasizing that its existence represents the abuse of a child."
Putting a bikini on a photo of a child may be distasteful abuse of a photo, but it is not abuse of a child - in any current law.
" Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning. "
Are you from Sweden? Why do you think the definition was clear across the world and not changed "before AI"? Or is it some USDefaultism where Americans assume their definition was universal?
> Are you from Sweden?
No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk.
> Why do you think the definition was clear across the world and not changed "before AI"?
I didn't say it was clear. I said there was no disagreement.
And I said that because I saw only agreement. CSAM == child sexual abuse material == a record of child sexual abuse.
"No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk."
So you can't speak Swedish, yet you think you grasped the Swedish law definition?
" I didn't say it was clear. I said there was no disagreement. "
Sorry, there are lots of different judicial definitions of CSAM in different countries, each with different edge cases and ways of handling them. I very much doubt there is no disagreement.
But my guess about your post is that an American has to learn, once again, that there is a world outside the US with different rules and different languages.
> So you cant speak Swedish, yet you think you grasped the Swedish law definition?
I guess you didn't read the doc. It is in English.
I too doubt there's material disagreement between judicial definitions. The dubious definitions I'm referring to are the non-judicial fabrications behind accusations such as the root of this subthread.
" I too doubt there's material disagreement between judicial definitions. "
Sources? Sorry, your gut feeling does not matter. Especially if you are not a lawyer.
I have no gut feeling here. I've seen no disagreeing judicial definitions of CSAM.
Feel free to share any you've seen.
> Even the Google "AI" knows better than that. CSAM "is [...]"
Please don't use the "knowledge" of LLMs as evidence or support for anything. Generative models generate things that have some likelihood of being consistent with their input material, they don't "know" things.
Just last night, I did a Google search related to the cell tower recently constructed next to our local fire house. Above the search results, Gemini stated that the new tower is physically located on the Facebook page of the fire department.
Does this support the idea that "some physical cell towers are located on Facebook pages"? It does not. At best, it supports that the likelihood that the generated text is completely consistent with the model's input is less than 100% and/or that input to the model was factually incorrect.
Thanks. For a moment I slipped and fell for the "AI" con trick :)
> - in any current law.
It has been since at least 2012 here in Sweden. That case went to our highest court and they decided a manga drawing was CSAM (maybe you are hung up on this term though, it is obviously not the same in Swedish).
The holder was not convicted, but that is beside the point about the material.
> It has been since at least 2012 here in Sweden. That case went to our highest court
This one?
"Swedish Supreme Court Exonerates Manga Translator Of Porn Charges"
https://bleedingcool.com/comics/swedish-supreme-court-exoner...
It has zero bearing on the "Putting a bikini on a photo of a child ... is not abuse of a child" you're challenging.
> and they decided a manga drawing was CSAM
No they did not. They decided "may be considered pornographic". A far lesser offence than CSAM.
In Swedish:
https://www.regeringen.se/contentassets/5f881006d4d346b199ca...
> Även en bild där ett barn t.ex. genom speciella kameraarrangemang framställs på ett sätt som är ägnat att vädja till sexualdriften, utan att det avbildade barnet kan sägas ha deltagit i ett sexuellt beteende vid avbildningen, kan omfattas av bestämmelsen.
Which, translated, means that even an image where a child is, e.g. through special camera arrangements, depicted in a way intended to appeal to the sexual drive can be covered by the provision, without the depicted child having taken part in any sexual behaviour when the image was made. So undressing a child using AI could indeed be CSAM.
I say "could" because all laws are open to interpretation in Sweden and it depends on the specific image. But it's safe to say that many images produced by Grok are CSAM by Swedish standards.
Where do these people come from???
The lady doth protest too much, methinks.
That's the problem with CSAM arguments, though. If you disagree with the current law and think it should be loosened, you're a disgusting pedophile. But if you think it should be tightened, you're a saint looking out for the children's wellbeing. And so laws only go one way...
"Sweden does not have a legislative definition of child sexual abuse material (CSAM)"
Because that is up to the courts to interpret. You can't use your common law experience to interpret the law in other countries.
> You cant use your common law experience to interpret the law in other countries.
That interpretation wasn't mine. It came from the Council of Europe doc I linked to. Feel free to let them know it's wrong.
So aggressive and rude, and over... CSAM? Weird.
Are you implying that it's not abuse to "undress" a child using AI?
You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools. Just because these images are "fake" doesn't mean they're not abuse, and that there aren't real victims.
> Are you implying that it's not abuse to "undress" a child using AI?
Not at all. I am saying just it is not CSAM.
> You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools.
It's terrible. And when "AI"s are found spreading deepfakes around schools, do let us know.
CSAM: Child Sexual Abuse Material.
When you undress a child with AI, especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated. Therefore CSAM.
It doesn't mention grok?
Sure does. Twice. E.g.
Musk's social media platform has recently been subject to intense scrutiny over sexualised images generated and edited on the site using its AI tool Grok.
CTRL-F "grok": 0/0 found
You're using an "AI" browser? :)
I found 8 mentions.
I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
> what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
You would be _amazed_ at the things that people commit to email and similar.
Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...
It was known that Grok was generating these images long before any action was taken. I imagine they’ll be looking for internal communications on what they were doing, or deciding not to do, during that time.
Maybe emails between the French office and the head office warning they may violate laws, and the response by head office?
Unlikely, if only because the statement doesn't mention CSAM. It does say:
"Among potential crimes it said it would investigate were complicity in possession or organised distribution of images of children of a pornographic nature, infringement of people's image rights with sexual deepfakes and fraudulent data extraction by an organised group."
I don't understand your point.
In a further comment you are using a US-focused organization to define an English-language acronym. How does this relate to a French investigation?
Item one in that list is CSAM.
You are mistaken. Item #1 is "images of children of a pornographic nature".
Whereas "CSAM isn’t pornography—it’s evidence of criminal exploitation of kids." https://rainn.org/get-informed/get-the-facts-about-sexual-vi...
You're wrong - at least from the perspective of the commons.
First paragraph on Wikipedia
> Child pornography (CP), also known as child sexual abuse material (CSAM) and by more informal terms such as kiddie porn,[1][2][3] is erotic material that involves or depicts persons under the designated age of majority. The precise characteristics of what constitutes child pornography vary by criminal jurisdiction.[4][5]
Honestly, reading your link got me seriously facepalming. The whole argument seems to be centered around the fact that sexualizing children is disgusting, hence it shouldn't be called porn. While I'd agree that sexualizing kids is disgusting, denying that it's porn on those grounds feels kinda... childish? Like someone covering their ears and shouting loudly in order not to hear the words the adults around them are saying.
> First paragraph on Wikipedia
"...the encyclopedia anyone can edit." Yes, there are people who wish to redefine CSAM to include child porn - including even that between consenting children committing no crime and no abuse.
Compare and contrast Interpol. https://www.interpol.int/en/Crimes/Crimes-against-children/A...
> The whole argument seems to be centered around the fact that sexualizing children is disgusting, hence it shouldn't be called porn.
I have no idea how anyone could reasonably draw that conclusion from this thread.
Well, RAINN are stupid then.
CSAM is the woke word for child pornography, which is the normal word for pornography involving children. Pornography is defined as material aiming to sexually stimulate, and CSAM is that.
> CSAM is the woke word for child pornography
I fear you could be correct.
There was a WaPo article yesterday that talked about how xAI deliberately loosened Grok’s safety guardrails and relaxed restrictions on sexual content in an effort to make the chatbot more engaging and “sticky” for users. xAI employees had to sign new waivers in the summer, and start working with harmful content, in order to train and enable those features.
I assume the raid is hoping to find communications to establish that timeline, maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!!
https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
What do they hope to find, specifically? Who knows, but maybe the prosecutors have a better awareness of specifics than us HN commenters who have not been involved in the investigation.
What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'
Or “regulators don't understand the technology; short of turning it off entirely, there's nothing we can do to prevent it entirely, and the costs involved in attempting to reduce it are much greater than the likely fine, especially given that we're likely to receive such a fine anyway.”
They could shut it off out of a sense of decency and respect, wtf kind of defense is this?
You appear to have lost the thread (or maybe you're replying to things directly from the newcomments feed? If so, please stop it.), we're talking about what sort of incriminating written statements the raid might hope to discover.
Moderation rules? Training data? Abuse metrics? Identities of users who generated or accessed CSAM?
Do you think that data is stored at the office? Where do you think the data is stored? The janitors closet?
out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
You're not too far off.
There was a good article in the Washington Post yesterday about many, many people inside the company raising alarms about the content and its legal risk, but they were blown off by managers chasing engagement metrics. They even made up a whole new metric.
There were also prompts telling the AI to act angry or sexy or other things just to keep users addicted.
> The prosecutor's office also said it was leaving X and would communicate on LinkedIn and Instagram from now on.
I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms, and start treating the communication with the public that funds your existence in different terms. The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
I agree with you. In my opinion it was already bad enough that official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged in users, but at least Twitter was arguably a mostly open communication platform and could be misunderstood as a public service in the minds of the less well-informed. However, deciding to "communicate" at this day and age on LinkedIn and Instagram, neither of which ever made a passing attempt to pretend to be a public communications service, boggles the mind.
> official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged in users
... thereby driving up adoption far better than Twitter itself could. Ironic or what.
>I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms
I think we are getting very close to the EU's own great firewall.
There is currently a sort of identity crisis in the regulation. Big tech companies are breaking the laws left and right. So which is it?
- fine harvesting mechanism? Keep as-is.
- true user protection? Blacklist.
Or the companies could obey the law
In an ideal world they'd just have an RSS feed on their site and people, journalists, would subscribe to it. Voilà!
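A minimal sketch of what that consuming side could look like, assuming a hypothetical feed URL and Python's feedparser library (pip install feedparser); any standard RSS/Atom feed an institution publishes would work the same way:

    import feedparser

    # Hypothetical feed URL, stands in for whatever the institution publishes.
    FEED_URL = "https://example-prosecutors-office.fr/press/feed.xml"

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries[:10]:
        # Each entry carries the press release title, link and (usually) a date.
        print(entry.get("published", "n/a"), "-", entry.title)
        print("   ", entry.link)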
>The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
Who decides what communication is in the interest of the public at large? The Trump administration?
You appear to have posted a bit of a loaded question here, apologies if I'm misinterpreting your comment. It is, of course, the public that should decide what communication is of public interest, at least in a democracy operating optimally.
I suppose the answer, if we're serious about it, is somewhat more nuanced.
To begin, public administrations should not get to unilaterally define "the public interest" in their communication, nor should private platforms for that matter. Assuming we're still talking about a democracy, the decision-making should happen democratically, via a combination of law + rights + accountable institutions + public scrutiny, with implementation constraints that maximise reach, accessibility, auditability, and independence from private gatekeepers. The last bit is rather relevant, because the private sector's interests and the citizen's interests are nearly always at odds in any modern society, hence the state's roles as rule-setter (via democratic processes) and arbiter. Happy to get into further detail regarding the actual processes involved, if you're genuinely interested.
That aside - there are two separate problems that often get conflated when we talk about these platforms:
- one is reach: people are on Twitter, LinkedIn, Instagram, so publishing there increases distribution; public institutions should be interested in reaching as many citizens as possible with their comms;
- the other one is dependency: if those become the primary or exclusive channels, the state's relationship with citizens becomes contingent on private moderation, ranking algorithms, account lockouts, paywalls, data extraction, and opaque rule changes. That is entirely and dangerously misaligned with democratic accountability.
A potential middle position could be to use commercial social platforms as secondary distribution instead of the authoritative channel, which in reality is often the case. However, due to the way societies work and how individuals operate within them, the public won't actually come across the information until it's distributed on the most popular platforms. Which is why some argue that they should be treated as public utilities, since dominant communications infrastructure has a quasi-public function (rest assured, I won't open that can of worms right now).
Politics is messy in practice, as all balancing acts are - a normal price to pay for any democratic society, I'd say. Mix that with technology, social psychology and philosophies of liberty, rights, and wellbeing, and you have a proper head-scratcher on your hands. We've already done a lot to balance these, for sure, but we're not there yet and it's a dynamic, developing field that presents new challenges.
This. What a joke. I'm still waiting on my tax refund from NYC for plastering "twitter" stickers on every publicly funded vehicle.
I’m sure Musk is going to say this is about free speech in an attempt to gin up his supporters. It isn’t. It’s about generating and distributing non consensual sexual imagery, including of minors. And, when notified, doing nothing about it. If anything it should be an embarrassment that France are the only ones doing this.
(it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)
> If anything it should be an embarrassment that France are the only ones doing this.
As mentioned in the article, the UK's ICO and the EC are also investigating.
France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.
Full marks to France for addressing its higher than average rate of unemployment.
/i
> when notified, doing nothing about it
When notified, he immediately:
* "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo
* locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...
You and I must have different definitions of the word “immediately”. The article you posted is from January 15th. Here is a story from January 2nd:
https://www.bbc.com/news/articles/c98p1r4e6m8o
> Have the other AI companies followed suit? They were also allowing users to undress real people
No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.
> Which is an entirely different legal liability.
In the UK, it is entirely the same. Near zero.
Making/distributing a photo of a non-consenting bikini-wearer is no more illegal when originated by a computer in a bedroom than by a camera on a public beach.
I thought this was about France
It was... until it diverted. https://news.ycombinator.com/item?id=46870196
The part of X’s reaction to their own publishing I’m most looking forward to seeing in slow-motion in the courts and press was their attempt at agency laundering by having their LLM generate an apology in first-person.
“Sorry I broke the law. Oops for reals tho.”
Kiddie porn but only for the paying accounts!
The other LLMs probably don't have the training data in the first place.
Er...
"Study uncovers presence of CSAM in popular AI training dataset"
I suppose those are the offices of SpaceX now that they merged.
So France is raiding offices of US military contractor?
How is that relevant? Are you implying that being a US military contractor should make you immune to the laws of other countries that you operate in?
The onus is on the contractor to make sure any classified information is kept securely. If by raiding an office in France a bunch of US military secrets are found, it would suggest the company is not fit to have those kind of contracts.
Even if it is, being affiliated with the US military doesn't make you immune to local laws.
https://www.the-independent.com/news/world/americas/crime/us...
I know it's hard to grasp for you. But in France, French law and jurisdiction apply, not those of the United States.
Another discussion: https://news.ycombinator.com/item?id=46872894
> Prosecutors say they are now investigating whether X has broken the law across multiple areas.
This step could come before a police raid.
This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
French prosecutors use police raids way more than other western countries. Banks, political parties, ex-presidents, corporate HQs, worksites... Here, while white-collar crimes are punished as much as in the US (i.e. very little), we do at least investigate them.
Lmao they literally made a broad accessible CSAM maker.
Interesting. This is basically the second enforcement on speech / images that France has done - first was Pavel Durov @ Telegram. He eventually made changes in Telegram's moderation infrastructure and I think was allowed to leave France sometime last year.
I don't love heavy-handed enforcement on speech issues, but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.
LinkedIn is not a replacement for Twitter, though. I'm curious if they'll come back post-settlement.
>but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard
Censorship increases homogeneity, because it reduces the amount of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is resilience of the people in power.
You were downvoted -- a theme in this thread -- but I like what you're saying. I disagree, though, on a global scale. By resilience, I mean to reference something like a monoculture plantation vs a jungle. The monoculture plantation is vulnerable to anything that figures out how to attack it. In a jungle, a single plant or set might be vulnerable, but something that can attack all the plants is much harder to come by.
Humanity itself is trending more toward monoculture socially; I like a lot of things (and hate some) about the cultural trend. But what I like isn't very important, because I might be totally wrong in my likes; if only my likes dominated, the world would be a much less resilient place -- vulnerable to the weaknesses of whatever it is I like.
So, again, I propose for the race as a whole, broad cultural diversity is really critical, and worth protecting. Even if we really hate some of the forms it takes.
I really don't see reasonable enforcement of CSAM laws as a restriction on "diversity of thought".
This is precisely the point of the comment you are replying to: a balance has to be found and enforced.
In what world is generating CSAM a speech issue? Its really doing a disservice to actual free speech issues to frame it was such.
The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration. That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
If libeling real people is a harm to those people, then altering photos of real children is certainly also a harm to those children.
I'm strongly against CSAM, but I will say this analogy doesn't quite hold (though the values behind it do).
Libel must be an assertion that is not true. Photoshopping or AIing someone isn't an assertion of something untrue. It's more the equivalent of saying "What if this is true?", which is perfectly legal.
“ 298 (1) A defamatory libel is matter published, without lawful justification or excuse, that is likely to injure the reputation of any person by exposing him to hatred, contempt or ridicule, or that is designed to insult the person of or concerning whom it is published.
It doesn't have to be an assertion, or even a written statement.
Marginal note: Mode of expression
(2) A defamatory libel may be expressed directly or by insinuation or irony (a) in words legibly marked on any substance; or (b) by any object signifying a defamatory libel otherwise than by words.”
You're quoting Canadian law.
In the US it varies by state but generally requires:
- A false statement of fact (not opinion, hyperbole, or pure insinuation without a provably false factual core).
- Publication to a third party.
- Fault.
- Harm to reputation.
----
In the US it is required that it is written (or in a fixed form). If it's not written (fixed), it's slander, not libel.
The relevant jurisdiction isn't the US either.
> The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration.
Quite.
> That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
Really? By what US definition of CSAM?
https://rainn.org/get-the-facts-about-csam-child-sexual-abus...
"Child sexual abuse material (CSAM) is not “child pornography.” It’s evidence of child sexual abuse—and it’s a crime to create, distribute, or possess. "
That's not what we are discussing here. Even less so when a lot of the material here is edits of real pictures.
Very different charges however.
Durov was held on suspicion Telegram was willingly failing to moderate its platform and allowed drug trafficking and other illegal activities to take place.
X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.
Note that both are directly related to direct violation of data safety law or association with a separate criminal activities, neither is about speech.
I like your username, by the way.
CSAM also led the 2024 news headlines about the French prosecution of Telegram. I didn't follow the case enough to know where they went, or what the judge thought was credible.
From a US mindset, I'd say that generation of communication, including images, would fall under speech. But then we classify it very broadly here. Arranging drug deals on a messaging app definitely falls under the concept of speech in the US as well. Heck, I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech.
Obviously, assassinations themselves, not so much.
The issue is still not really speech.
Durov wasn't arrested because of things he said or things that were said on his platform, he was arrested because he refused to cooperate in criminal investigations while he allegedly knew they were happening on a platform he manages.
If you own a bar, you know people are dealing drugs in the backroom and you refuse to assist the police, you are guilty of aiding and abetting. Well, it's the same for Durov except he apparently also helped them process the money.
I wouldn't equate the two.
There's someone who was being held responsible for what was in encrypted chats.
Then there's someone who published depictions of sexual abuse of minors.
Worlds apart.
>but I do really like a heterogeneous cultural situation
Why isn't that a major red flag exactly?
Hi there - author here. Care to add some specifics? I can imagine lots of complaints about this statement, but I don't know which (if any) you have.