I made a robot moderator. It models trust flow through a network built from voting patterns, and detects people and posts/comments that are accumulating a large amount of “negative trust,” so to speak.
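
To make that a bit more concrete, the structure is roughly a signed, weighted graph where every vote becomes an edge from the voter to the author of the thing being voted on. Here’s an illustrative sketch in Python; the names are made up and this is not the bot’s actual code:

```python
# Illustrative sketch, not the bot's real code: turn raw votes into a signed
# trust graph, where upvotes are positive edges and downvotes negative ones.
from collections import defaultdict

def build_vote_graph(votes):
    """votes: iterable of (voter, author, value) tuples, value is +1 or -1."""
    graph = defaultdict(lambda: defaultdict(float))
    for voter, author, value in votes:
        if voter == author:
            continue  # ignore self-votes
        graph[voter][author] += value
    return graph

# Example: two upvotes for bob, one downvote for eve
g = build_vote_graph([("alice", "bob", 1), ("carol", "bob", 1), ("dave", "eve", -1)])
print(dict(g["dave"]))  # {'eve': -1.0}
```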

In its current form, it is supposed to run autonomously. In practice, I have to step in and fix some of its boo-boos when it makes them, which happens sometimes but not very often.

I think it’s working well enough at this point that I’d like to experiment with a mode where it acts as an assistant to an existing moderation team, instead of taking its own actions. I’m thinking about making it auto-report suspect comments, instead of autonomously deleting them. There are other modes that might be useful, but that seems like a good place to start. Is anyone interested in trying the experiment in one of your communities? I’m pretty confident that at this point it can ease moderation load without causing many problems.
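
The change I have in mind is basically one branch: in assistant mode the bot files a report for the human mods instead of removing anything itself. Roughly this sketch, where `api`, `remove_comment`, and `report_comment` are placeholder names rather than a real client library:

```python
# Sketch of the proposed assistant mode. `api` is a placeholder client object,
# not a real Lemmy library; the point is only where the decision branches.
AUTONOMOUS = "autonomous"  # current behavior: the bot removes content itself
ADVISORY = "advisory"      # proposed behavior: the bot only files a report

def handle_suspect_comment(api, comment_id, trust_score, mode=ADVISORY):
    reason = f"flagged by trust ranking (score {trust_score:.2f})"
    if mode == AUTONOMOUS:
        api.remove_comment(comment_id, reason=reason)  # placeholder call
    else:
        api.report_comment(comment_id, reason=reason)  # placeholder call
```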

!santabot@slrpnk.net

  • @auk@slrpnk.net (OP) · 20 hours ago

    > For example, how do you think the bot would’ve handled the vegan community debacle that happened?

    That’s not a situation it’s fully equipped to handle. It can work out what the community’s opinion of someone is, but it can’t make the judgement call of whether a post by a permitted user is unexpectedly dangerous misinformation that the admins need to remove. That’s a judgement call that humans struggle to come to a conclusion on, so the bot definitely won’t be able to do any better.

    There is some interesting insight to be had. One of the big concerns that people had about the bot’s premise was that it would shut down minority opinions, with vegans as a perfect example.

    I tried going back and having it judge https://lemmy.world/post/18691022, but there may not be recent activity for a lot of those users, so there’s a risk of false negatives. The only user it wanted to take any action against was EndlessApollo@lemmy.world, whom it wanted to greylist: they’re still allowed to post, but anything of theirs that gets downvotes will be removed. That sounds right to me, if you look at their modlog.
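
    To spell out the greylist mechanic, it amounts to roughly this rule. This is a sketch with made-up names, and the exact downvote threshold is my paraphrase rather than the bot’s real condition:

    ```python
    # Rough sketch of the greylist rule described above. The threshold here
    # (net negative score) is an assumption, not the bot's exact condition.
    def review_comment(author_status, upvotes, downvotes):
        if author_status == "banned":
            return "remove"
        if author_status == "greylisted" and downvotes > upvotes:
            return "remove"  # greylisted users can post, but downvoted content goes
        return "keep"
    ```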

    I also spent some time just now asking it to look at comments from vegantheoryclub.com and recent comments from !vegan@lemmy.world, and it didn’t want to ban or greylist anybody. That’s in keeping with how it’s programmed. Almost all users on Lemmy are fine. They have normal participation to counterbalance anything unpopular that they like to say, or any single bad day where they get into a big argument. The point is to pick out the users that only like to pick fights or start trouble, and don’t have a lot that they do other than that, which is a significant number. You can see some of them in these comments. I think that broader picture of someone’s participation, plus some leeway to get a little out of pocket for people who are otherwise normal, is useful context the bot can take into account, and it would be time-prohibitive for human mods to gather it every time they make a decision.

    The literal answer to your question is that I don’t think it would have done anything about the vegan cat food issue other than letting everyone hash it out, and potentially removing some comments from EndlessApollo. But that kind of misinformation-referee position isn’t quite the role I envisioned for it.

    > Like you said, it sounds like a good way of decentralizing moderation so that we have fewer problems with power-tripping moderators and more transparent decisions.

    I wasn’t thinking in these terms when I made it, but I do think this is a very significant thing. We’re all human. It’s just hard to be fair and balanced all of the time when you’re given sole authority over who is and isn’t allowed to speak. Initially, I was looking at the bot as its own entity with its own opinions, but I realized that it’s not doing anything more than detecting the will of the community with as good a fidelity as I can achieve.

    > I just want it so that communities can keep their specific values while easing their moderation burden.

    This was a huge concern. We went back and forth over a large number of specific users and situations to make sure it wasn’t going to do this, back in the early days of testing it out and designing behaviors.

    I think the vegan community is a great example. I think there was one vegan user who was a big edge case in the early days, and they wound up banned, because all they wanted to talk about was veganism, and they kept wanting to talk about it to non-vegans in a pretty unfriendly fashion. I think their username was vegan-related also. I can’t remember the specifics, but that was the only case like that where the bot was silencing a vegan person, and we hemmed and hawed a little but wound up leaving them banned.

    • @Danterious@lemmy.dbzer0.com · 19 hours ago (edited)

      > The point is to pick out the users that only like to pick fights or start trouble, and don’t have a lot that they do other than that, which is a significant number. You can see some of them in these comments.

      Ok, then that makes sense as to why you chose these specific mechanics for how it works. Does that mean hostile but popular comments in the wrong communities would get a pass, though?

      For example, let’s assume that most people on Lemmy love cars (probably not the case, but let’s go with it) and there are a few commenters who consistently show up in the !fuck_cars@lemmy.ml or !fuckcars@lemmy.world communities to explain why everyone in those communities is wrong. Or vice versa.

      Since most people scroll All, it could be the case that those comments get elevated and comments from the people the community is supposed to be for get downvoted.

      I mean, it’s not that big of a deal now, because most values are shared across Lemmy, but I can already see that starting to shift a bit.

      I was reminded of this meme a bit

      > Initially, I was looking at the bot as its own entity with its own opinions, but I realized that it’s not doing anything more than detecting the will of the community with as good a fidelity as I can achieve.

      Yeah, that’s the main benefit I see coming from this bot. Especially if its output is just given in the form of suggestions, it’s still humans making most of the judgement calls, and the way it makes decisions is transparent (like the appeal community you suggested).

      I still think that instead of the bot considering all of Lemmy as one community, it would be better if moderators could provide focus for it, because there are differences in values between instances and communities that I think should be reflected in the moderation decisions that are taken.

      However, if you aren’t planning on developing that side of it more, I think you could still let the other moderators who want to test the bot see notifications from it anytime it has a suggestion for a community user ban (edit: for clarification) as a test run. Good luck.

      Anti Commercial-AI license (CC BY-NC-SA 4.0)

      • @auk@slrpnk.net (OP) · 18 hours ago

        > Does that mean hostile but popular comments in the wrong communities would get a pass, though?

        They have no effect. The impact of someone’s upvote depends on how much trust that person has from the wider community. It’s a huge recursive formula, almost the same as PageRank. The upshot is that those little isolated wrong communities have no power unless the wider community also gives them some upvotes. It’s a very clever algorithm. I like it a lot.
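
        For a feel of the shape of it, here’s a much-simplified sketch of that kind of recursive trust computation: the weight of a vote is proportional to the voter’s own (positive) trust, iterated until it settles, the way PageRank does it. This is only an illustration, not the bot’s actual formula; the seed set, damping factor, and normalization are my own assumptions:

        ```python
        # Simplified illustration of PageRank-style trust over a signed vote graph.
        # Not the bot's real formula: seeds, damping, and normalization are made up.
        def trust_ranks(graph, users, seeds, damping=0.85, iterations=50):
            """graph: {voter: {author: weight}}, weights +/-. Returns {user: trust}."""
            seed_share = 1.0 / len(seeds)
            rank = {u: (seed_share if u in seeds else 0.0) for u in users}
            for _ in range(iterations):
                new_rank = {u: ((1.0 - damping) * seed_share if u in seeds else 0.0) for u in users}
                for voter, edges in graph.items():
                    total = sum(abs(w) for w in edges.values())
                    passed_on = max(rank.get(voter, 0.0), 0.0)  # negative-trust voters pass on nothing
                    if total == 0 or passed_on == 0.0:
                        continue
                    for author, weight in edges.items():
                        new_rank[author] = new_rank.get(author, 0.0) + damping * passed_on * (weight / total)
                rank = new_rank
            return rank

        # An isolated pair that only upvotes each other ends up with ~zero trust,
        # while users the wider (seeded) community upvotes accumulate positive trust.
        graph = {
            "seed": {"alice": 1, "bob": 1},
            "alice": {"bob": 1, "seed": 1},
            "bob": {"alice": 1},
            "clique1": {"clique2": 1},
            "clique2": {"clique1": 1},
        }
        print(trust_ranks(graph, list(graph), seeds={"seed"}))
        ```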

        For normal minority communities like vegans, that’s not a problem. They still get some upvotes, because the occasional conflict isn’t their normal state, so they count as normal users. They post stuff, people generally upvote more than they downvote by about 10 to 1, and they are their own separate thing, which is fine. Minority communities that are totally isolated from interactions with the wider community just have more or less zero rank, so it doesn’t matter what they think. They’re not banned, unless they’ve done something, but their votes do almost nothing. Minority communities that constantly pick fights with the wider community tend to have negative rank, so it also doesn’t matter what they think, in terms of the impact of them mutually upvoting each other.

        I think it might be a good idea to set up “canary” communities, vegans being a great example, with the bot posting warnings if users from those communities start to get ranked down. That can be a safety check to make sure it is working the way it’s supposed to. Even if that downranking does happen, it might be fine, if their behavior is obnoxious and the community is reacting with downvotes, or it might be a sign of a problem. You have to look up people’s profiles and look at the details. In general, people on Lemmy don’t spend very much time going into the vegan community and spreading hate and downvotes just for the sake of hatred, because they saw some vegans being vegans. Usually there’s some reason for it.
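
        The canary check could be a simple periodic pass along these lines (a sketch; the watched communities and the threshold are placeholders, not real configuration):

        ```python
        # Sketch of the "canary community" safety check: warn when users who post in
        # a watched community start drifting into negative trust. Names are examples.
        CANARY_COMMUNITIES = {"vegan@lemmy.world"}  # example watchlist
        WARN_THRESHOLD = 0.0

        def canary_warnings(ranks, members_by_community):
            """ranks: {user: trust}; members_by_community: {community: set of users}."""
            warnings = []
            for community in CANARY_COMMUNITIES:
                for user in members_by_community.get(community, set()):
                    trust = ranks.get(user, 0.0)
                    if trust < WARN_THRESHOLD:
                        warnings.append(f"{user} ({community}) has dropped to {trust:.2f}")
            return warnings
        ```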

        One thing that definitely does happen is people from that minority community going out and picking fights with the wider community, then whining when the reaction is negative, claiming that the heat they’re getting is because of their viewpoint and not because they’re being obnoxious. That happens quite a lot.

        I think some of the instances that police and ban dissent set up a bad expectation for their users. People from there feel like their tribe is being attacked if they have to come into contact with a viewpoint that they’ve been told is the “wrong” one, and then they make these blanket proclamations about how their own point of view is God’s truth while attacking anyone who disagrees, and then they sincerely don’t expect the hostile response that they get. I think some of them sincerely feel silenced when that happens. I don’t know what to do about that other than be transparent and supportive about where the door to being able to post is, if they want to go through it, and otherwise minimize how much they can irritate everyone else for as long as that’s their MO.

        > I still think that instead of the bot considering all of Lemmy as one community, it would be better if moderators could provide focus for it, because there are differences in values between instances and communities that I think should be reflected in the moderation decisions that are taken.

        It definitely does that. It just uses a more sophisticated metric for “value” than a hard-coded list of which communities are the good ones and which are the bad ones.

        I think the configuration options to give more weight or primacy to certain communities are still in the code. I’m not sure. I do see what you’re saying. I think it might be wise for me, if anyone does wind up wanting to play with this, to give as many tools as possible to the moderators who want to use it, and just let them make the decisions. I think the bot is capable of working without being configured with which communities are the good ones, but if someone can replicate the checking I did, they’ll be happier with the outcome whether or not they wind up with the same conclusions as me.
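
        If those options do resurface, I’d expect per-community emphasis to look roughly like scaling each vote before the ranking runs, something along these lines (illustrative only, with made-up weights):

        ```python
        # Illustrative only: let moderators boost or discount votes by the community
        # they were cast in, before the trust ranking runs. Weights are examples.
        COMMUNITY_WEIGHTS = {"vegan@lemmy.world": 2.0, "memes@lemmy.ml": 0.5}

        def weighted_vote(value, community):
            """value: +1 or -1; returns the weighted edge for the vote graph."""
            return value * COMMUNITY_WEIGHTS.get(community, 1.0)
        ```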

        And yes, definitely making it advisory to the moderators, instead of its own autonomous AI drone banhammer, will increase people’s trust.