• 👍Maximum Derek👍
    25
    8 months ago

    If you train something off the internet, it’s bound to come out a bit racist. And I like to think that, thanks to me, it’s also slightly biased against people who put ranch dressing on pizza.

    • @Karlos_Cantana@sopuli.xyz
      8
      8 months ago

      I hope you get banned for your hateful and bigoted comments. I also hate ranch on pizza, but I care about the underrepresented class of people who do. They are humans and deserve the same rights as you or I. I am appalled that this kind of blatant hatred still exists in 2023. You, sir (or ma’am, or whatever pronoun you prefer), are a loathsome person and I’m ashamed to be in the same species as you.

      • @java@beehaw.org
        4
        8 months ago

        I hope you get banned for your hateful and bigoted comments. I also hate people who hate ranch on pizza, but I care about the underrepresented class of people who do. They are humans and deserve the same rights as you or I. I am appalled that this kind of blatant hatred still exists in 2023. You, sir (or ma’am, or whatever pronoun you prefer), are a loathsome person and I’m ashamed to be in the same species as you.

    • @Luke_Fartnocker@lemm.ee
      14
      8 months ago

      I completely disagree with you. Maybe it’s because I’m old, but I don’t want some damned racist robot doctor telling me what to do. I just want my good old human, racist doctor treating me, like God intended.

      • @DavidGarcia@feddit.nl
        6
        8 months ago

        Yeah, as it stands in the current healthcare paradigm, 90% of doctors are practically useless beyond the most obvious diagnoses. I’d rather not have to wait until that paradigm changes 100 years from now…

        doctors be like (examples I’ve actually seen with friends and family):

        “take this Accutane that will fuck up your life forever for something that can be fixed with diet changes”

        “It’s just stress, take it easy” turns out to be cancer

        I can see the case for banning AI in almost every other sector, but for medicine the upside is just too great to pass up, even if it’s only used for anamnesis to point you and your healthcare providers in the right direction.

        • @thepianistfroggollum@lemmynsfw.com
          5
          8 months ago

          Yup. My wife has a family history of lupus, has kidney issues, had a serious B12 deficiency, and has pretty much every other symptom of lupus, but a negative ANA panel, so it “can’t be lupus” (even though a negative ANA doesn’t rule it out completely).

          When she went in because she was having neuropathic pain, which is very common in lupus and B12 deficiencies, she was told it was probably from her covid vaccine.

          What sucks the most is that I, a 6’3” male, actually get taken seriously by the same doctors. It’s bad enough that I have to go with her to appointments just so there’s a chance of her being taken seriously.

        • TehPers
          5
          8 months ago

          My favorite was a doctor who told two separate people in my wife’s family to apply tiger balm to the affected area until the pain went away. After they got second opinions, the first issue turned out to be a problem with the growth plate in a wrist, and the second was a broken leg. Both required surgery.

          Doing just about anything other than going to that first doctor would have been a more productive use of time. At least a crappy chatbot could have told them to use tiger balm right away without leaving the house.

  • @Fizz@lemmy.nz
    8
    edit-2
    8 months ago

    Ok, but you could find studies showing that doctors or any other staff perpetuate racism. It seems like it would be less offensive coming from a computer.

  • @ailiphilia@feddit.it
    5
    8 months ago

    It doesn’t appear to be limited to racism.

    Humans inherit artificial intelligence biases

    Artificial intelligence recommendations are sometimes erroneous and biased. In our research, we hypothesized that people who perform a (simulated) medical diagnostic task assisted by a biased AI system will reproduce the model’s bias in their own decisions, even when they move to a context without AI support. In three experiments, participants completed a medical-themed classification task with or without the help of a biased AI system. The biased recommendations by the AI influenced participants’ decisions. Moreover, when those participants, assisted by the AI, moved on to perform the task without assistance, they made the same errors as the AI had made during the previous phase. Thus, participants’ responses mimicked AI bias even when the AI was no longer making suggestions. These results provide evidence of human inheritance of AI bias.

  • @intensely_human@lemm.ee
    4
    8 months ago

    The photo here is hilarious. It’s like the computer just said something horrible and they’re both trying to wrap their heads around it.