• Cyrus Draegur
    3 months ago

    in terms of communication utility, it’s also a very accurate term.

    when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

    when AIs hallucinate, it’s because their predictive models generate results that do not align with reality, having flown off the rails by presuming what was calculated to be likely rather than referencing positively certain information.

    it’s the same song, but played on a different instrument.

    • kronisk
      3 months ago

      when WE hallucinate, it’s because our internal predictive models are flying off the rails filling in the blanks based on assumptions rather than referencing concrete sensory information and generating results that conflict with reality.

      Is it really? You make it sound like this is a proven fact.

      • Cosmic Cleric
        3 months ago

        Is it really? You make it sound like this is a proven fact.

        I believe that’s where the scientific community is moving towards, based on watching this Kyle Hill video.

      • KillingTimeItself
        3 months ago

        i mean, idk about the assumptions part of it, but if you asked a psychologist or a philosopher, i’m sure they would agree.

        Or they would disagree and have about three pages’ worth of thoughts to immediately exclaim; otherwise they would feel uneasy about their statement.