Also, don’t just leave your account unused; delete it. User and follower numbers count.

And at least as important: reply to every email that has Twitter/X in the footer (to another corporate mail address if necessary) with a kind request to stop promoting and facilitating X.

https://bio.link/everyonehateselon

  • Schadrach (+7/-10) · 7 hours ago

    There are tons of good reasons to quit Xitter. Everyone should get on that.

    That said, drawing a sexualized image of a fictional child is not child abuse, even if a machine does it. It lacks the whole, you know, child-being-abused part that is kinda central to child abuse. Kind of like how South Park depicting Kristi Noem shooting a dog basically every time she appears on screen last season did not constitute numerous cases of animal abuse for either the South Park guys or Kristi Noem.

    Now, if it’s editing pictures of actual children to sexualize them publicly, shouldn’t that also fall under revenge porn in addition to being child abuse? Time for at least a civil suit in that case, I hope.

    • 0x0@lemmy.zip (+1) · 3 hours ago

      That said, drawing a sexualized image of a fictional child is not child abuse, even if a machine does it. It lacks the whole, you know, child-being-abused part that is kinda central to child abuse.

      I get that, but the underlying detail is that for the “AI” to generate that, it most likely already saw it to begin with. A right-wing AI trained on child porn: weird, I know, but it’s this timeline.
      Plus, if users are prompting for that… perhaps as a platform you don’t want those users.

      • Schadrach (+1) · 1 hour ago

        Not necessarily. AI image generators are pretty good at synthesizing concepts together as well (basically they target something that scores high on both things, and the result is usually something that fits both criteria). It’s why that fad of taking photos and “ghiblifying” them was a thing for a hot minute - it’s seen a lot of Studio Ghibli stuff and stuff done imitating that style, and it’s got a source image that it can tweak to increase its “Studio Ghibli” rating, even if Ghibli had never created anything similar themselves. With the right model and settings you could very easily get it to generate something like what it might look like were Studio Ghibli to draw Biblically accurate angels, despite Ghibli not being known for their ophanim characters.
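
        For illustration, here is a minimal text-to-image vs. image-to-image sketch of the “ghiblify” distinction described above, assuming the Hugging Face diffusers library and a generic open Stable Diffusion XL checkpoint (the model name, prompts, and file path are placeholders, not anything X/Grok actually runs):

        ```python
        import torch
        from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
        from diffusers.utils import load_image

        device = "cuda" if torch.cuda.is_available() else "cpu"
        checkpoint = "stabilityai/stable-diffusion-xl-base-1.0"  # generic open model, used as a stand-in

        # Text-to-image: starts from pure noise and denoises toward the prompt,
        # composing concepts the model has learned separately.
        txt2img = AutoPipelineForText2Image.from_pretrained(checkpoint).to(device)
        angel = txt2img(prompt="biblically accurate angel, Studio Ghibli style").images[0]

        # Image-to-image ("ghiblify"): starts from an existing photo, partially noises it,
        # then denoises toward the prompt, so the source composition is preserved.
        img2img = AutoPipelineForImage2Image.from_pretrained(checkpoint).to(device)
        photo = load_image("some_photo.jpg")  # placeholder path
        ghibli_version = img2img(
            prompt="Studio Ghibli style illustration",
            image=photo,
            strength=0.6,  # how much of the source image gets overwritten by noise
        ).images[0]
        ```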

        In this context it wouldn’t necessarily need to have seen CSAM, but having seen enough sexual images and enough innocent images of children it could likely (how well depending on the model) piece together a passable wholly fictional CSAM image. I’d expect it to likely take a few tries to stop giving you children with big tits and bush, unless you were explicit about not wanting those things just because those are going to be common elements in the training data it’s trying to fit to and it doesn’t know that’s not how those concepts fit together.

        That said, if they just scraped every image off Twitter there’s probably at least a couple of CSAM images included unless they were careful about purging them - that happens on basically every site with user-posted content. You could use perceptual image hashes to filter out known CSAM images (most big sites do that in general - hash the image, check it against a blacklist, flag for verification on a match), but not novel ones - ironically, training an AI on a big ol’ pile of CSAM would be a more effective way to filter for novel CSAM, but impossible to do legally. That or manually checking every image posted on your site before it can appear, but that’s probably unreasonable for something like a social media network just due to scale.
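
        A rough sketch of that hash-and-blacklist check, using the open-source imagehash library as a stand-in for industry systems like PhotoDNA or PDQ (the blacklist is left empty here because the real lists aren’t public, and the match threshold is an assumed tunable):

        ```python
        from PIL import Image
        import imagehash

        # Placeholder: in production this set would be populated from NCMEC / industry
        # hash-sharing programs; those lists are not public, so it is empty here.
        KNOWN_BAD_HASHES: set[imagehash.ImageHash] = set()
        MATCH_DISTANCE = 4  # max Hamming distance still treated as a match (assumed value)

        def screen_upload(path: str) -> str:
            """Hash an uploaded image and flag it for human review on a blacklist match."""
            h = imagehash.phash(Image.open(path))
            if any(h - known <= MATCH_DISTANCE for known in KNOWN_BAD_HASHES):
                return "flag_for_review"  # human verification on match, as described above
            return "allow"  # a genuinely novel image will always land here
        ```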

    • iglou@programming.dev (+7) · 5 hours ago

      What the fuck?

      Fictional child pornography is still child pornography. This is not just about abuse.

      Twitter having a tool to create child pornography is an excellent reason to quit Twitter.

    • zalgotext@sh.itjust.works (+8) · 6 hours ago

      Yeah see that’s cool, everyone’s entitled to their opinion. My opinion is that anything that normalizes the sexualization of children should be shamed and shunned.

      • Schadrach (+1) · edited · 4 hours ago

        Sure, go for it. But the original claim in OP is that it’s a “built in child abuse tool”, and my position is that a drawing of a fictional child or childlike figure is fundamentally not child abuse, as no child was abused. That’s the start and end of it.

        • zalgotext@sh.itjust.works (+1) · 4 hours ago

          Ok so your position is just based on semantics then, because someone used the term “built in child abuse tool” instead of “built in child pornography generator”? Is that really a leg you wanna stand on?

          • Schadrach (+2) · 3 hours ago

            Yes, I am standing on the position that words mean things, and you cannot have child abuse without an actual child being subjected to abuse.

            Much like how Rockstar doesn’t commit millions of murders and steal millions of cars every year by producing games in which those things are depicted, nor is GTA a murder and car theft tool - it’s not a murder if no one dies, and it’s not a car theft if no car is stolen even if you produce a fictional depiction of such things happening.

            There’s nothing magical about a fictional depiction of a fictional child being abused that makes it any different than a fictional depiction of any other fictional crime.

            A depiction of a thing is not that thing. See The Treachery of Images by René Magritte.

            • zalgotext@sh.itjust.works (+1) · 3 hours ago

              So you really were arguing that Twitter providing a built-in child porn generator isn’t a valid reason to leave it?

              • Schadrach (+1) · 1 hour ago

                No, like I said in the original post, there are tons of reasons to leave Xitter, but child abuse requires a child be abused and so the OP image is wrong about it having a child abuse tool - you can’t make Xitter or Grok abuse a child.

                Unless Elon’s built something else entirely and we’re not just talking about Grok not refusing image generation requests that it probably ought to. Someone should probably check if the Xitter offices have a basement like a certain pizza place was accused of, just to be safe.

                • zalgotext@sh.itjust.works (+1) · 57 minutes ago

                  Ok let’s start over. Ignore the context in the previous comments, pretend I’m asking you this for the first time, with no lead up:

                  Do you think it’s reasonable to leave Twitter because they provide a tool that can be used to generate child porn?

    • AlreadyDefederated@midwest.social (+4) · 5 hours ago

      What was it trained on, though?

      Even if what it produces is “not technically child abuse”, it was trained on how to make pretend child abuse, emulating what it has already seen. It uses previous child abuse as a guide to make stuff. It may also mix and match the real child abuse to make pretend child abuse. There might be real abused children in those images.

      That’s bad, right?

    • araneae@beehaw.org (+3) · edited · 6 hours ago

      A computer program was trained off millions of pictures of people that its parent company acquired by hook or by crook, and now any photo you posted of your kids is fair game to be recycled into the most heinous shit imaginable at the press of a button by the kind of slug still using Twitter. There is abuse happening there: they decided to build a machine that could take family photos put online by well-meaning people and statistically morph them ship-of-Theseus style into pornography, with no safeguards.

      If I were a parent and even theoretically one pixel of hair on my child’s head were used as aggregate data for this mathematical new form of abuse by proxy, I’d be Old Testament mad. I would liken it to a kind of theft or even rape that we have no clear word for or legal concept of yet.

      I would suggest just not defending this stuff in any way because you’re simply going to lose, or from your perspective, be dogpiled by people having what you perceive to be an extreme moral panic over this issue.

      • Schadrach (+1/-1) · 4 hours ago

        statistically morph them ship-of-Theseus style

        That’s not really an accurate description of how it works. It doesn’t, like, have a big database of labelled photos that it looks up, grab a few that sound similar to what it’s being asked for, and then sort of collage those together. It has basically seen a huge number of photos, been told what those photos are photos of, and from them devised a general model of what those things are, becoming more accurate as it sees more examples. Then it gets handed a block of white noise and asked to show how that white noise looks like whatever it’s prompted to make. Inpainting (and image-to-image) is a little different in that it starts from an existing image instead of white noise.

        The training data isn’t part of the model itself (a big hint here should be the existence of LLM or image generation models that are ~10GB in size but were trained on literal terabytes or more of training data - that kind of compression would be absolutely insane and would be used in everything everywhere if the training data were actually part of the model). Several of them are even openly available and can be pretty easily run locally on consumer hardware.
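
        A back-of-envelope version of that size argument (the dataset scale and average file size are rough, commonly cited ballpark figures, not claims about any specific model):

        ```python
        # Rough comparison of training-data volume vs. checkpoint size.
        images_in_training_set = 5_000_000_000      # LAION-scale dataset, order of magnitude
        avg_image_size_bytes   = 300 * 1024         # ~300 KB per image, rough average
        model_size_bytes       = 10 * 1024**3       # the ~10GB checkpoint mentioned above

        training_data_bytes = images_in_training_set * avg_image_size_bytes
        ratio = training_data_bytes / model_size_bytes
        print(f"Training data ~ {training_data_bytes / 1024**5:.1f} PiB, "
              f"roughly {ratio:,.0f}x the size of the model itself.")
        ```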

        …but yeah, somewhere some model saw a photo of you in training and changed a couple of weights somewhere in the network by some tiny fraction, ever so slightly adjusting its notions of what people look like and what the other things in that image look like. That doesn’t mean any image created by it henceforth is in any meaningful way a picture of you.

        I would liken it to a kind of theft or even rape that we have no clear word for or legal concept of yet.

        Like anyone who has ever seen a child or a depiction of a child producing any sexually explicit illustration of any sort ever after, then? Because even human artists do not create ex nihilo; they start from a foundation of their cumulative experiences, which means anyone who has ever laid eyes on your child or a photo of your child is at some level incorporating some aspect of that experience, however small, into any artwork they create henceforth, and should they ever draw anything explicit…