• @eldavi@lemmy.ml

    they already banned Gazans who are being genocided, but they still call themselves a safe space?!!!

    • ᴇᴍᴘᴇʀᴏʀ 帝

      I saw some of those accounts and they looked like a scam. I’d seriously consider banning them on Lemmy if they popped up on my radar.

        • ᴇᴍᴘᴇʀᴏʀ 帝

          In their Nov. 17 response, Bluesky explained that certain behaviors, such as gaining a large following in a short amount of time, could trigger automated spam filters, leading to account labels or removals.

          https://www.dailydot.com/debug/bluesky-palestine-moderation/

          I don’t know if they are scams (I’ve not seen evidence either way), but it’s very spammy behaviour. I’ve been followed by a couple of such accounts despite not posting anything on there (so I blocked them), and I’ve seen a number of others cross-posted here that followed the exact same pattern: throwing out tens of thousands of follows while only promoting a GoFundMe page. I can’t really blame Bluesky, as it looks like suspicious activity that would set off alarm bells if they did the same thing on here.
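
          To make the pattern concrete, here’s a minimal sketch of the kind of rate-based check being described: flag accounts that throw out huge bursts of follows while barely posting. The thresholds, field names and flagging rule are my own assumptions for illustration, not anything Bluesky has published.

          ```python
          # Purely illustrative sketch of a rate-based follow-spam heuristic;
          # thresholds, field names and the flagging rule are assumptions,
          # not Bluesky's actual moderation code.
          from dataclasses import dataclass

          @dataclass
          class AccountActivity:
              follows_last_24h: int    # outbound follows sent in the last day
              followers_last_24h: int  # new followers gained in the last day
              posts_total: int         # lifetime posts on the account

          def looks_like_follow_spam(activity: AccountActivity,
                                     follow_burst_limit: int = 5000,
                                     min_posts: int = 1) -> bool:
              """Flag accounts that mass-follow, or balloon in followers, while barely posting."""
              bursty = (activity.follows_last_24h > follow_burst_limit
                        or activity.followers_last_24h > follow_burst_limit)
              barely_posts = activity.posts_total < min_posts
              return bursty and barely_posts

          # e.g. tens of thousands of follows from an account with no posts -> flagged
          print(looks_like_follow_spam(AccountActivity(30000, 200, 0)))  # True
          ```

          Something this crude would catch the accounts I described, which is exactly why legitimate fundraisers behaving the same way get swept up with the scams.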

          • @eldavi@lemmy.ml

            The platform acknowledged the concerns raised by users and indicated they were working on changes to refine their moderation system and reduce unnecessary flagging.

            Given the transphobia that’s happening right now on Bluesky, I would expect it to magically get fixed AFTER the genocide has concluded.

            • ᴇᴍᴘᴇʀᴏʀ 帝

              Yeah, that’s a serious non-statement that basically says “we’ve heard you (so stop complaining) and might make some changes or we might not.”

              I think the problem is that their systems were right to flag those accounts, but what you actually do about them is tricky. They should really investigate to confirm these are legitimate users, but I presume the bulk of moderation is automated, and checking the validity of the accounts is hard and would require dedicated time and resources. So they may just feel that locking the accounts is the best approach, for them anyway.