• @RotaryKeyboard
    120 • 4 months ago

    Using AI to flag footage for review by a person seems like a good time-saving practice. I would bet that without some kind of automation like this, a lot of footage would just go unreviewed. This is far better than waiting for someone to lodge a complaint first, since you could conceivably identify problem behaviors and fix them before someone gets hurt.
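The triage workflow described here can be sketched in a few lines. This is a hypothetical illustration, not any department's actual system: `triage`, the scoring function, and the toy "longer clips score higher" heuristic are all made up for the example. The point is only that the model orders the queue; a person still does the reviewing.

```python
# Hypothetical sketch of flag-for-review triage: a model scores each clip,
# and the highest-scoring clips go to a human reviewer first.

def triage(clips, score_fn, review_budget):
    """Order clips by model score, descending; return the top slice for human review."""
    ranked = sorted(clips, key=score_fn, reverse=True)
    return ranked[:review_budget]

# Toy scoring function: pretend longer clips are more likely to need review.
clips = [{"id": 1, "len": 30}, {"id": 2, "len": 300}, {"id": 3, "len": 90}]
flagged = triage(clips, lambda c: c["len"], review_budget=2)
print([c["id"] for c in flagged])  # → [2, 3]: those clips get reviewed first
```

Without something like this, everything below the budget line simply never gets watched, which is the "goes unreviewed" scenario above.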

    The use of AI-based solutions to examine body-cam footage, however, is getting pushback from police unions pressuring the departments not to make the findings public to save potentially problematic officers.

    According to this, the unions are against this because they want to shield bad-behaving officers. That tells me the AI review is working!

    • @jaybone@lemmy.world
      26 • 4 months ago

      I bet if they made all footage publicly available, watchdog style groups would be reviewing the shit out of that footage. But yeah AI might help too maybe.

      • @Scubus@sh.itjust.works
        12 • 4 months ago

        While I agree wholeheartedly, that is unrealistic due to laws. You can’t reveal certain suspects’ identities because, for certain crimes like pedophilia, people will attempt to execute the suspect before anyone knows whether or not they actually did it.

        • @LarmyOfLone@lemm.ee
          6 • 4 months ago

          I mean police footage would be privacy invading as hell for victims and even just bystanders.

        • @gaylord_fartmaster@lemmy.world
          1 • 4 months ago

          A charge being filed against someone is already public record in the majority of areas in the United States, as well as any court records resulting from those charges.

            • @gaylord_fartmaster@lemmy.world
              2 • 4 months ago

              Then they could just withhold the video from the public, since they’re already withholding the charge. The real issue would be protecting victims, not suspects.

    • Null User Object
      12 • 4 months ago (edited)

      Exactly, and this also contradicts the “few bad apples” defense. If there were only a few bad apples, then the police unions should be bending over backwards to eradicate them sooner rather than later to protect the many good apples, not to mention improve the long-suffering reputation of police.

      Instead, they’re doing the exact opposite, making it clear to anyone paying attention that it’s mostly, if not entirely, bad apples.

      • @Rai@lemmy.dbzer0.com
        12 • 4 months ago

        You’ve got it backwards.

        The phrase is “A few bad apples spoil the bunch”. It means everyone around the bad apples is also bad, because they’re all around and do nothing about it. It’s not a defense, it’s literally explaining what your comment says.

        • @Ithi@lemmy.ca
          9 • 4 months ago

          I think that poster is right in this context. It gets abbreviated and used as a defense of there just being “a few bad apples,” and then they just drop/ignore the rest of the phrase.

    • @Ottomateeverything@lemmy.world
      10 • 4 months ago

      The whole police thing and public accountability kinda makes sense, but I don’t think this means we should be pushing on AI just because the “bad guys” don’t like it.

      AI is full of holes and unknowns, and relying on it for stuff like this sets a dangerous precedent IMO. You absolutely need someone reviewing it, yes. But reviewers are also not going to catch everything, and starting down this road means the AI will get leaned on more and more until it replaces thorough reviews by people.

      I think something low stakes and unachievable without the tools might make sense - like AIs reading through game chat or Twitter posts to identify issues where it’s impossible to have someone reading everything, and if some get by, oh well, it’s a post on the internet.

      But with police behavior? Those are people with the authority to ruin people’s lives or kill them. I do NOT trust AI to catch every problematic behavior and this stuff ABSOLUTELY should be done by people. I’d be okay with it as an aid, in theory, but once it’s doing any “aiding” it’s also approving some behavior. It can’t really be telling anyone where TO look without implying where NOT to look, and that gives it some authority, even as an “aid”. If it’s not making decisions, it’s not saving anyone any time.
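The "telling where TO look implies where NOT to look" distinction can be made concrete. In this made-up sketch (scores and clip names are invented), a threshold *filter* silently drops low-scored footage, while a pure *ranking* only reorders it, so a human reviewer still eventually sees everything:

```python
# Hedged illustration of "aid vs. gatekeeper" with made-up scores:
# filtering excludes footage the model scores low; ranking just reorders it.

scores = {"clip_a": 0.9, "clip_b": 0.2, "clip_c": 0.6}

threshold = 0.5
filtered = [c for c, s in scores.items() if s >= threshold]  # clip_b vanishes
ranked = sorted(scores, key=scores.get, reverse=True)        # clip_b stays, just last

print(filtered)  # → ['clip_a', 'clip_c']
print(ranked)    # → ['clip_a', 'clip_c', 'clip_b']
```

In practice, a ranking with a limited review budget behaves like a filter anyway, which is exactly the worry: any prioritization quietly approves whatever never reaches the top of the queue.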

      Idk, I’m all for the public accountability and stuff like that here, but having AI make decisions around the behavior of people with so much fucking power is horrifying to me.

      • @harmsy@lemmy.world
        5 • 4 months ago

        An AI art website I use illustrates your point perfectly with its attempt at automatic content filtering. Tons of innocent images get flagged, meanwhile problem content often gets through and has to be whacked manually. Relying on AI to catch everything, without false positives, is a recipe for disaster.
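The tradeoff described here is the classic threshold problem: no single cutoff avoids both error types at once. A toy sketch with invented scores and labels (nothing here is from any real filtering system):

```python
# Toy illustration of why one threshold can't dodge both error types:
# lowering it flags more innocent items (false positives), raising it
# lets more problem content through (false negatives).

# Each item: (model_score, actually_bad)
items = [(0.95, True), (0.80, False), (0.60, True), (0.40, False), (0.30, True)]

def errors(threshold):
    false_pos = sum(1 for s, bad in items if s >= threshold and not bad)
    false_neg = sum(1 for s, bad in items if s < threshold and bad)
    return false_pos, false_neg

print(errors(0.5))  # → (1, 1): one innocent item flagged, one bad item missed
print(errors(0.9))  # → (0, 2): no innocents flagged, but two bad items missed
```

Moving the threshold just trades one failure mode for the other, which is why the manual "whacking" pass stays necessary.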

          • @deranger@sh.itjust.works
            2 • 4 months ago

            I really don’t think it’s better than nothing. You put a biased AI in charge of reviewing footage and now they have a reason to say they’re doing the right thing instead of doing nothing, despite what they’re doing being worse.

      • HarkMahlberg
        2 • 4 months ago

        Man, you said everything I wanted to in less than half the words. Shoulda just linked to your comment lol