bamboo@lemm.ee to Technology@lemmy.world (*deleted by creator*) · 9 points · 4 months ago

Detecting a hallucination programmatically is the hard part. What is truth? Given an arbitrary sentence, how does one accurately measure its truthfulness? And what about the edge cases, like a statement that is literally true but misrepresents something, or one that is correct in a specific context but generally incorrect?

I’m an AI optimist, but I don’t see hallucinations being solved completely as long as LLMs are statistical models of language. More likely we’ll end up with a set of heuristics and techniques that can catch 90% of them.
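
One family of heuristics along those lines is self-consistency checking (the idea behind approaches like SelfCheckGPT): sample the model several times and treat disagreement between the samples as a hallucination signal. A minimal, dependency-free sketch of that idea, where `sample_fn` is a hypothetical callable wrapping whatever model you're querying:

```python
from typing import Callable

def consistency_score(prompt: str,
                      sample_fn: Callable[[str], str],
                      n_samples: int = 5) -> float:
    """Sample the model n_samples times and return the mean pairwise
    token-overlap (Jaccard) between answers. Confident factual answers
    tend to agree with each other; hallucinated details tend to vary
    from sample to sample."""
    answers = [sample_fn(prompt) for _ in range(n_samples)]

    def jaccard(a: str, b: str) -> float:
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0

    pairs = [(i, j) for i in range(n_samples) for j in range(i + 1, n_samples)]
    return sum(jaccard(answers[i], answers[j]) for i, j in pairs) / len(pairs)

# Hypothetical usage: flag low-agreement answers for review rather than
# declaring them false -- this is a signal, not a truth detector.
# if consistency_score(question, ask_model) < 0.4:
#     flag_possible_hallucination(question)
```

Note that nothing here decides what's true; it only flags statistical instability, which is exactly why this kind of heuristic can catch many hallucinations but never all of them.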