ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans: Researchers at Brigham and Women’s Hospital found that cancer treatment plans generated by OpenAI’s chatbot were full of errors.

  • @LazyBane@lemmy.world
    17 points • 10 months ago

    People really need to get it into their heads that AI can “hallucinate” random information, and that any implementation of an AI needs a qualified human overseeing it.

    • @grabyourmotherskeys@lemmy.world
      2 points • 10 months ago

      Exactly, it’s stringing together information in a series of iterations, each time adding a new inference consistent with what came before. It has no way to know if that inference is correct.
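      The iterative process described in that comment can be sketched as a toy autoregressive loop. This is not how any real LLM is implemented — the bigram table and its probabilities below are entirely made up for illustration — but it shows the key property: each token is sampled based only on what came before, with no step that checks whether the resulting claim is true.

      ```python
      import random

      # Toy "model": next-token probabilities conditioned only on the
      # previous token. All entries and weights are invented for this sketch.
      BIGRAMS = {
          "<start>": [("the", 0.6), ("a", 0.4)],
          "the":     [("dose", 0.5), ("drug", 0.5)],
          "a":       [("dose", 0.5), ("drug", 0.5)],
          "dose":    [("is", 1.0)],
          "drug":    [("is", 1.0)],
          "is":      [("5mg", 0.7), ("unknown", 0.3)],
          "5mg":     [("<end>", 1.0)],
          "unknown": [("<end>", 1.0)],
      }

      def generate(rng: random.Random) -> list[str]:
          """Sample tokens one at a time. Each choice depends only on the
          preceding token — nothing ever verifies factual correctness."""
          tokens = ["<start>"]
          while tokens[-1] != "<end>":
              choices, weights = zip(*BIGRAMS[tokens[-1]])
              tokens.append(rng.choices(choices, weights=weights)[0])
          return tokens[1:-1]  # drop the <start>/<end> markers

      print(" ".join(generate(random.Random(0))))
      ```

      Whether the sampler emits a plausible dose or nonsense depends only on the learned weights, which is why fluent output can still be factually wrong.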