Even though millions of people left Twitter in 2023 – and millions more are ready to move as soon as there’s a viable alternative – the fediverse isn’t growing.1 One reason why: today’s fediverse is unsafe by design and unsafe by default – especially for Black and Indigenous people, women of color, LGBTAIQ2S+ people2, Muslims, disabled people and other marginalized communities.

  • @TheBeege@lemmy.world
    8 points · 11 months ago

    Maybe I’m part of the problem, and if so, please educate me, but I’m not understanding why blocking is ineffective…?

    And block lists seem like an effective method to me.

    The security improvements described seem reasonable, so it would be nice to get those merged.

    I understand that curation and block lists require effort, but that’s the nature of an open platform. If you don’t want an open platform, that’s cool, too. Just create an instance that’s defederated by default and whitelist, then create a sectioned-off Fediverse of instances that align with your moderation principles.

    I feel like I’ve gotta be missing something here. These solutions seem painfully obvious, but that usually means I’m missing some key caveat. Can someone fill me in?

    • MHLoppy
      4 points · edited · 11 months ago

      I’m not understanding why blocking is ineffective…?

      As I understand it, because it requires harm to be experienced before the negating action is taken.

      A parallel might be having malware infect a system before it can be identified and removed (harm experienced -> future harm negated), vs proactively preventing malware from infecting the system in the first place (no harm experienced before negation).

      • @Haui@discuss.tchncs.de
        3 points · 11 months ago

        Which is exactly how the real world works. Harm has to be identified before solutions can be suggested. Otherwise you’re becoming the helicopter parent who denies their kid every opportunity to learn, causing allergies and other bad outcomes. Translated back to the fediverse: it is great the way it is, and improvements are always encouraged. We have much bigger and more pressing issues. This is not it.

        • MHLoppy
          5 points · edited · 11 months ago

          Which is exactly how the real world works. Harm has to be identified to suggest solutions.

          According to the submission, some harms have been identified, and some solutions have been suggested [that could reduce the same and similar harms from occurring to new and existing users] (but mostly it sounds like a “more work needs to be done” thing).

          I imagine your perspective on the issues being discussed is different from the author’s. The helicopter parent analogy makes sense in a low-danger environment; I think what the author has suggested is that some people don’t feel like it’s a low-danger environment for them to be in (though I of course – not being the author or one such person – may be mistaken).

          Edit: [clarified] because I realised it might seem contradictory if read literally.

          • @TheBeege@lemmy.world
            1 point · 11 months ago

            This makes sense, especially considering the features the author cited. The “by design” parts may just be there for clickbait purposes.

    • The Nexus of Privacy (OP)
      3 points · 11 months ago

      At some level you’re not missing anything: there are obvious solutions, and they’re largely ignored. Blocking is effective, and it’s a key part of why some instances actually do provide good experiences; an allow-list approach works well too. But those aren’t the default, so new instances don’t start out blocking anybody. And most instances only block the worst-of-the-worst; there’s a lot of stuff coming from large open-registration instances like .social and .world that relatively few instances block or even limit.
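The open-by-default versus allow-list distinction the thread keeps circling can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical FederationPolicy class; it is not any real fediverse server’s API, and the domain names are made up:

```python
# Hypothetical sketch of the two federation-policy defaults discussed
# above. The class and names are illustrative only.

class FederationPolicy:
    """Decides whether activities from a remote domain are accepted."""

    def __init__(self, mode: str = "blocklist"):
        # "blocklist": open by default, deny only listed domains
        #              (reactive: harm is experienced, then blocked).
        # "allowlist": closed by default, permit only listed domains
        #              (proactive: nothing federates until opted in).
        self.mode = mode
        self.blocked: set[str] = set()
        self.allowed: set[str] = set()

    def allows(self, domain: str) -> bool:
        if self.mode == "allowlist":
            return domain in self.allowed
        return domain not in self.blocked


# Open-by-default: a new instance federates with everyone until its
# admins react to reported harm by adding domains to the blocklist.
open_policy = FederationPolicy("blocklist")
open_policy.blocked.add("spam.example")

# Closed-by-default: an unknown domain is rejected until an admin
# explicitly allow-lists it.
safe_policy = FederationPolicy("allowlist")
safe_policy.allowed.add("trusted.example")
```

The key asymmetry is in how each mode treats a domain nobody has ruled on yet: the blocklist default accepts it, the allowlist default rejects it, which is why the submission frames open-by-default as requiring harm before the negating action is taken.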