He allegedly used Stable Diffusion, a text-to-image generative AI model, to create “thousands of realistic images of prepubescent minors,” prosecutors said.

The headline/title needs to be extended to include the rest of the sentence:

“and then sent them to a minor”

Yes, this sicko needs to be punished. Any attempt to make him the victim of “the big bad government” is manipulative at best.

Edit: made the quote bigger for better visibility.

@cley_faye@lemmy.world

That’s a very important distinction. While the first part is, to put it lightly, bad, I don’t really care what people do on their own. Getting real people involved, and a minor at that? Big no-no.

@MeanEYE@lemmy.world

deleted by creator

@Darkard@lemmy.world

And the Stable Diffusion team gets no backlash from this for allowing it in the first place?

Why are they not flagging these users immediately when they put in text prompts to generate this kind of thing?

@sugartits@lemmy.world

No no no guys.

It’s perfectly okay to do this as this is art, not child porn, as I was repeatedly told and downvoted for when I stated the fucking obvious.

So if it’s art, we have to allow it under the constitution, right? It’s “free speech”, right?

Well yeah. Just because something makes you really uncomfortable doesn’t make it a crime. A crime has a victim.

Also, the vast majority of children are victimized because of the US’s culture of authoritarianism and religious fundamentalism. That’s why children are far and away most often victimized by either a relative or in a church. But y’all ain’t ready to have that conversation.

@sugartits@lemmy.world

That thing over there being wrong doesn’t mean we can’t discuss this thing over here also being wrong.

So perhaps pipe down with your dumb whataboutism.

It’s not whataboutism; he’s being persecuted because of the idea that he’s hurting children, all the while law enforcement refuses to truly prosecute actual institutions victimizing children and is often colluding with traffickers. For instance, LE throughout the country were well aware of the scale of the Catholic church’s crimes for generations.

How is this whataboutism?

@sugartits@lemmy.world

Because it’s two different things.

We should absolutely go after the Catholic church for the crimes committed.

But here we are talking about the creation of child porn.

If you cannot understand this very simple premise, then we have nothing else to discuss.

They’re not two different things. They’re both supposedly acts of pedophilia, except one would take actual courage to prosecute (churches), and the other, which doesn’t have any actual victims, is easy and is a PR win because certain people find it really icky.

@sugartits@lemmy.world

I guess we’re done here then.

Retoffelnoster

deleted by creator

Bad title.

They caught him not simply for creating pics, but also for trading such pics etc.

That’s sickening to know there are bastards out there who will get away with it since they are only creating it.

I’m not sure. Let us assume that you generate it on your own PC at home (not using a public service), don’t brag about it, and never give it to anybody - what harm is done?

Even if the AI wasn’t trained on actual CSAM, that is something that feels inherently wrong. Your mind is not right if you think that’s acceptable, IMO.

Nora

I had an idea when these AI image generators first started gaining traction: flood the CSAM market with AI-generated images (good enough that you can’t tell them apart). In theory this would put the actual creators of CSAM out of business, thus saving a lot of children from the trauma.

Most people downvote the idea on their gut reaction, tho.

Looks like they might do it on their own.

My concern is: why would it put them out of business? If we just look at legal porn, there are already huge amounts created, and the market is still there for new content to be created constantly. AI porn hasn’t noticeably decreased the amount produced.

Really, flooding the market with CSAM makes it easier to consume and may end up INCREASING the number of people trying to get CSAM. That could end up encouraging more to be produced.

Nora

The market is slightly different, tho. Most CSAM is images; with porn there’s a lot of video as well as images.

@Ibaudia@lemmy.world

Isn’t there evidence that as artificial CSAM is made more available, the actual amount of abuse is reduced? I would research this but I’m at work.

@SeattleRain@lemmy.world

America has some of the most militantly anti-pedophile culture in the world, yet it has far and away the highest rates of child sexual assault.

I think AI is going to reveal just how deeply hypocritical Americans are on this issue. You have gigantic institutions like churches committing industrial-scale victimization, yet you won’t find a 1/10th of the righteous indignation against organized religions, where there is just as much evidence it is happening, as you will regarding one person producing images that don’t actually hurt anyone.

It’s pretty clear from the staggering rate of child abuse that occurs in the States that Americans are just using child victims as weaponized politicization (it’s next to impossible to convincingly fight off pedo accusations if you’re being mobbed) and aren’t actually interested in fighting pedophilia.

@badbytes@lemmy.world

Breaking news: Paint made illegal, cause some moron painted something stupid.

@cley_faye@lemmy.world

I’d usually agree with you, but it seems he sent them to an actual minor for “reasons”.

These cases are interesting tests of our first amendment rights. “Real” CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.

Currently, we do not outlaw written depictions nor drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions. But also I think we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.

So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for “real” images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?

We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.

A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as “real,” we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.

TheHarpyEagle

It feels incredibly gross to just say “generated CSAM is a-ok, grab your hog and go nuts”, but I can’t really say that it should be illegal if no child was harmed in the training of the model. The idea that it could be a gateway to real abuse comes to mind, but that’s a slippery slope that leads to “video games cause school shootings” type of logic.

I don’t know, it’s a very tough thing to untangle. I guess I’d just want to know if someone was doing that so I could stay far, far away from them.

@yamanii@lemmy.world

partly because they are obvious fictions

That’s it, actually: all sites that allow it, like danbooru, gelbooru, pixiv, etc., have a clause against photorealistic content, and they will remove it.

@nucleative@lemmy.world

Well thought-out and articulated opinion, thanks for sharing.

If even the most skilled hyper-realistic painters were out there painting depictions of CSAM, we’d probably still label it as free speech because we “know” it to be fiction.

When a computer rolls the dice against a model and imagines a novel composition of children’s images combined with what it knows about adult material, it does seem more difficult to label it as entirely fictional. That may be partly because the source material may have actually been real, even if the final composition is imagined. I don’t intend to suggest models trained on CSAM either, I’m thinking of models trained to know what both mature and immature body shapes look like, as well as adult content, and letting the algorithm figure out the rest.

Nevertheless, as you brought up, nobody is harmed in this scenario, even though many people in our culture and society find this behavior and content to be repulsive.

To a high degree, I think we can still label an individual who consumes this type of AI content a pedophile, and although being a pedophile is not in and of itself illegal, it comes with societal consequences. Additionally, pedophilia is a DSM-5 psychiatric disorder, which could be a pathway to some sort of consequences for those who partake.

I wonder if cartoonized animals in a CSAM theme are also illegal… guess I can contact my local FBI office and provide them the web addresses of such content. Let them decide what is best.

Does this mean the AI was trained on CP material? How else would it know how to do this?

@joel_feila@lemmy.world

Well, some LLMs have been caught with CP in their training data.

Likely yes, and even commercial models have an issue with CSAM leaking into their datasets. The scummiest of all of them likely get an offline model, then add their collection of CSAM to it.
