Hello everyone,

We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do, because even after we changed our registration policy they just post from another instance.

We keep working on a solution; we have a few things in the works, but they won't help us right now.

Thank you for your understanding and apologies to our users, moderators and admins of other instances who had to deal with this.

Edit: @Striker@lemmy.world, the moderator of the affected community, made a post apologizing for what happened. But this could not have been stopped even with 10 moderators. And if it wasn't his community, it would have been another one. And it is clear this could happen on any instance.

But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what’s next very soon.

Edit 2: removed the bit about the moderator tools. That came out a bit harsher than we meant it. It's been a long day, and having to deal with this kind of stuff got some of us a bit salty, to say the least. Remember, we also had to deal with people posting scat not too long ago, so this isn't the first time we have felt helpless. Anyway, I hope we can announce something more positive soon.

  • @douglasg14b@lemmy.world

    Aren't there semi-automated tools that can detect CP?

    Those might be an automated way to at least cut down on the volume.

    The same goes for banned images: they can be identified automatically with perceptual hashing and rejected at upload time.
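
    A minimal sketch of what such a hash check could look like, using the third-party Pillow and imagehash Python packages; the banned-hash value, distance threshold, and file name are hypothetical, not any instance's actual setup:

    ```python
    # Sketch: reject an upload whose perceptual hash is close to a known banned image.
    from PIL import Image
    import imagehash

    # Hex strings previously produced by imagehash.phash() on banned images (hypothetical value).
    BANNED_HASHES = [imagehash.hex_to_hash("f0e4c2d7a8b91c3d")]

    def is_banned(path: str, max_distance: int = 5) -> bool:
        """True if the image's perceptual hash is within max_distance bits of a banned hash."""
        candidate = imagehash.phash(Image.open(path))
        return any(candidate - banned <= max_distance for banned in BANNED_HASHES)

    if is_banned("upload.jpg"):
        print("Upload rejected: matches a banned image.")
    ```

    A small Hamming-distance threshold like this catches re-encoded or slightly resized copies, though, as the reply below notes, deliberately doctored images can still slip past it.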

    • @cley_faye@lemmy.world

      There are. They are also easy to abuse, cause a lot of other issues, and are not 100% effective. Apple tried that and was met with appropriate backlash. There are many issues: training data is one, but it is also impossible to automatically rule out false positives/negatives, and it is relatively easy (for now) to doctor pictures so they pass through, so a nefarious actor could bypass these. There is also the case of a false positive very quickly throwing some bystander under the bus, and knowing how moderate and understanding the internet is… yup.

      It is also risky, as this would effectively be a censorship tool; they are often set up under the pretense of "helping", but once they're up, it all depends on who steers them. Such responsibility can hardly fall on the moderators/admins of Lemmy, and it would also be problematic to handle them at a national (or broader) level, since that would give incredible censorship power to authorities.

      And the bigger bottom line is that working hard to prevent this kind of content from reaching us, while it has an obvious upside, does nothing about the actual issue of the content existing and being created.

      tl;dr: there is no easy solution that doesn’t come with one hell of a string attached to it.

      On the other hand, it is quite hard to hide from the authorities online, and this kind of behavior (I hope, somehow, that the people who posted these did so only to be toxic to Lemmy and not to actually disseminate the content) should lead to some action from the authorities: identifying the actual people involved and then moving up the chain to act on it. Hopefully.

    • Rolivers

      The one thing AI would be good for… Humans shouldn’t have to see that shit.

      • @Zeth0s@lemmy.world

        Humans have to see that shit to train AI. That is why it is so difficult to find a model for it.

          • A lot of the mods for big providers like FB require counseling after the horrible crap they see (not just CSAM, but also terrible things like animal abuse, mutilation, etc.). Unfortunately, the big companies have outsourced much of it to other countries where there aren't as many worker protections, traumatizing people and replacing them when they can't meet some arbitrary metric.

        • @HikingVet

          IIRC there is a database that law enforcement uses during investigations to obtain access to these groups (they obtain consent from the victims to use the material).

          • Ɀeus

            The "risk" of false positives comes down to the consequence. If the consequence is being stuck in the slammer, don't use AI. If the consequence is that you can't upload the image unless you manually appeal, or maybe even have to use an external image host, I think AI is fine.
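
            As a rough sketch of that kind of consequence-scaled handling (the thresholds, score source, and action names are made up for illustration, not any real moderation pipeline):

            ```python
            # Sketch: scale the action with the classifier's confidence; never auto-punish a person.
            # Thresholds and action names are hypothetical.
            def handle_upload(score: float) -> str:
                if score >= 0.95:
                    return "block_and_queue_for_human_review"
                if score >= 0.70:
                    return "hold_until_manual_appeal"
                return "allow"

            print(handle_upload(0.80))  # -> hold_until_manual_appeal
            ```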

            edit: ah bugger, wrong acct. ah well

            (please tag @zeus@lemm.ee if you want me to see your response)

    • Hello Hotel

      Apple is building the tech too, but they aren't sharing it with the world for humanitarian purposes like this.

    • Hello Hotel

      Apple willingly made that tech; however, it's designed to get the individual in trouble by looking at the photos on their personal device and snitching on its owner (leaking them in the process). They could easily adapt it into a tool for social media, but it goes against their beliefs.