Hello everyone,

We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is little we can do: since we changed our registration policy, they simply post from other instances instead.

We keep working on a solution. We have a few things in the works, but those won’t help us right now.

Thank you for your understanding and apologies to our users, moderators and admins of other instances who had to deal with this.

Edit: @Striker@lemmy.world, the moderator of the affected community, made a post apologizing for what happened. But this could not have been stopped even with 10 moderators. If it wasn’t his community, it would have been another one. It is clear this could happen on any instance.

But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what’s next very soon.

Edit 2: removed the bit about the moderator tools. That came out harsher than we meant it. It’s been a long day, and having to deal with this kind of stuff has made some of us a bit salty, to say the least. Remember, we also had to deal with people posting scat not too long ago, so this isn’t the first time we’ve felt helpless. Anyway, I hope we can announce something more positive soon.

  • molave · 10 months ago

    This is one of the few situations Reddit handles better, by being a centralized entity with a dedicated workforce filtering out this content. It’s a shame it has to be this way, but I understand why it has to be done.

    • @seitanic · 10 months ago

      So, Mastodon has this same problem?

      • @mongooseofrevenge@lemmy.world · 10 months ago

        Pretty much. I recently had my Mastodon feed spammed with racist, homophobic, and gore-filled posts just because they were tagged with a list of unrelated hashtags. You could keep blocking the poster or the instance, but they would pop back up from another instance or with another account. It eventually stopped, but I’m sure it’ll happen again. You’re apparently able to filter out certain offensive terms, but I think you have to enter the terms manually yourself.

        • @PeleSpirit@lemmy.world · 10 months ago

          Twitter had that problem in the beginning; people forget that. I’ve seen some shitty stuff on Reddit as well and reported it. It’s a problem everywhere.

      • dantheclamman · 10 months ago

        There have been issues on the larger instances with slow or unresponsive moderation, leading to occasional bursts of bot activity.

      • molave · 10 months ago

        I don’t use it, so I can’t answer that.

      • @CoderKat@lemm.ee · 10 months ago

        That’s because Reddit chose to leave it up until the media reported on it, though.

        That said, it’s really hard to protect against a dedicated, targeted attack. E.g., captchas can make it harder to create accounts, but think about how fast you could make accounts manually if you wanted to. You don’t need thousands of accounts to cause mayhem; even a few dozen can cause serious problems. I think a lot of the internet depends on the general goodwill of most users, plus the threat of legal action if attackers get caught (but that basically requires depending on police, and we know police aren’t dependable).

        One thing Reddit had that I’m not sure Lemmy does (never heard mention of it) is the option to require all posts and comments to be approved by a mod before they’re visible. That might have just been an automod thing combined with how Reddit let mods hide and unhide comments. But even if they were to use that, it’s not fair for volunteer mods to have to deal with it. It’s also sooo much work. You can’t just approve posts, because attackers will use comments. And you have to approve edits, or attackers will post something innocent and then edit it to be malicious. And even without an edit, they can link to an image and then change the file itself to a different one (checksums could prevent that, but it’s more work, and it’s a constant battle against malice).
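        The checksum idea above can be sketched briefly. This is a hypothetical illustration, not anything Lemmy or Reddit actually implements: at approval time you record a hash of the linked file’s bytes, and before serving the post later you re-fetch the file and compare. The function and variable names here are invented for the example.

        ```python
        import hashlib

        def checksum(data: bytes) -> str:
            """SHA-256 hex digest of a fetched file's raw bytes."""
            return hashlib.sha256(data).hexdigest()

        # Hypothetical approval record: post ID -> hash taken when a mod approved it.
        approved_hashes: dict[str, str] = {}

        def approve_post(post_id: str, image_bytes: bytes) -> None:
            """A mod approves the post; remember what the linked file looked like."""
            approved_hashes[post_id] = checksum(image_bytes)

        def still_matches(post_id: str, current_bytes: bytes) -> bool:
            """Re-check the (re-fetched) file against the approved hash.

            Returns False if the file behind the link was swapped after approval.
            """
            return approved_hashes.get(post_id) == checksum(current_bytes)

        # Example: the file is swapped after approval and the check catches it.
        approve_post("post-1", b"innocent cat picture")
        print(still_matches("post-1", b"innocent cat picture"))  # True
        print(still_matches("post-1", b"something malicious"))   # False
        ```

        The catch, as the comment says, is the extra work: every view (or at least every periodic re-check) costs a fetch and a hash, and attackers can just keep creating new posts.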