• @drawerair@lemmy.world
    17 months ago

    The article is too long for me. Two of its main ideas are: "Everyone using large language models should be aware of AI hallucination and be careful when asking those models for facts" and "Firms that develop large language models shouldn't downplay hallucination and shouldn't force AI into every corner of tech."

    There was already so much misinformation on the web before ChatGPT 3.5, and there's still so much of it. We don't need hallucination worsening the situation; we need a reliable source of facts. Optimistically, Google, OpenAI, or Anthropic will find a way to reduce or eradicate hallucination. The Google CEO said they were making progress. Maybe that's true. Or maybe it's a generic PR line so folks would stop following up about the hallucination problem.