An Amazon chatbot that’s supposed to surface useful information from customer reviews of specific products will also recommend a variety of racist books, lie about working conditions at Amazon, and write a cover letter for a job application with entirely made up work experience when asked, 404 Media has found.

  • @TORFdot0@lemmy.world
9
9 months ago

    Well put. I think tackling the bias will always be a challenge. It’s not that we shouldn’t, but how is the question.

I don’t know whether any of the big public LLMs are trying to trim biases from their training data, or are just tackling it ad hoc by injecting modifiers into the prompts.

The biggest problem I have personally with LLMs is that they are untrustworthy and often give incorrect or blatantly false information.

Sometimes it’s frustrating to run into the “I can’t do that because of ethics” response on benign prompts that shouldn’t have triggered it, but I don’t think it’s been that big a deal.

When we talk about political conservatives being opposed to biased LLMs, it’s mostly because the models won’t tell them that their harmful beliefs are correct.

    • @dumpsterlid@lemmy.world
      6
      edit-2
      9 months ago

      When we talk about political conservatives being opposed to biased LLMs, it’s mostly because it won’t tell them that their harmful beliefs are correct

“What, because I think Islam is inherently a violent religion, now this chatbot is telling me I AM the one with violent and harmful beliefs???” - some loser, maybe Elon Musk or maybe your uncle, who cares.