The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con

The new era of tech seems to be built on superstitious behaviour

  • matjoeman · 1 year ago

    There are two possible explanations for this effect:

    1. The tech industry has accidentally invented the initial stages of a completely new kind of mind, based on completely unknown principles, using completely unknown processes that have no parallel in the biological world.
    2. The intelligence illusion is in the mind of the user and not in the LLM itself.

    I agree with the author of the article, but I’m curious whether there is any well understood model of biological intelligence that we could use to say whether an artificial intelligence system has parallels to it or not.

    • RagnellOP · 1 year ago

      @matjoeman Well, we kind of do. Computers at their most basic circuit level use logic gates and perform functions by doing mathematics. At the base, even to communicate within a microchip, everything must be binary coded. Octal, hex, or decimal may be layered over that on/off, but there must be a binary code at the core: two possibilities. We expand on that by adding more paths, more logic gates, more complexity, but the signal is a square wave. Two voltages.

      Even when a computer recreates a sound for a human’s ear, a sound that is a sine wave, it is still digitally encoded, meaning it’s a complex string of bits. It’s two voltages, manipulated by logic gates, producing a sound in a sine wave. But it’s still just two voltages.
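The digitization described above can be sketched in a few lines of Python: a continuous sine wave is sampled and quantized until each sample is literally a string of bits. (The sample rate, frequency, and bit depth below are illustrative choices, not values from the comment.)

```python
import math

SAMPLE_RATE = 8000   # samples per second (illustrative)
FREQ = 1000          # sine wave frequency in Hz (illustrative)
BITS = 8             # bit depth per sample
LEVELS = 2 ** BITS   # number of discrete quantization levels

samples = []
for n in range(8):  # one full cycle at these example rates
    x = math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)  # continuous value in [-1, 1]
    q = round((x + 1) / 2 * (LEVELS - 1))               # quantized to an integer level
    samples.append(format(q, "08b"))                    # each sample is just 8 bits

print(samples)  # the "sine wave" is now a list of 8-bit strings
```

Whatever the waveform, the stored form is only ever sequences of 0s and 1s, which is the point being made here.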

      Human brains, however, process those sine waves, those complex frequencies spanning many voltages, as a spectrum. They aren’t boiling it all down to two voltages.

      I’m not saying we’ll NEVER get AI, but I think we need a revolutionarily different way of transferring information WITHIN the microchip to achieve that level of complexity.

      • matjoeman · 1 year ago

        I don’t think there’s a fundamental reason why you couldn’t program AI digitally. Maybe there’s some high-level reason why it needs analog processing, but I doubt it.

        ML models use floating point numbers, which approximate continuous values. An analog computer like you are describing could maybe speed up those calculations, but it wouldn’t change the fact that, in my opinion, ML models just can’t be intelligent because of how they work.
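The claim that floating point only approximates continuous values is easy to demonstrate; the classic example is that 0.1 + 0.2 does not exactly equal 0.3 in IEEE 754 binary floating point:

```python
# Floats approximate the continuous reals with finite binary fractions,
# so some decimal values cannot be represented exactly.
a = 0.1 + 0.2
print(a)        # 0.30000000000000004
print(a == 0.3) # False: the approximation error is visible at full precision

# For ML-style arithmetic the approximation is close enough in practice:
print(abs(a - 0.3) < 1e-9)  # True
```

This is why the analog-vs-digital distinction matters less than it might seem: the digital representation is discrete, but it can approximate continuous values to any precision you are willing to pay for.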

        • RagnellOP · 1 year ago

          @matjoeman Maybe, but I don’t think we’re anywhere near that level of complexity yet, and attempts to relate the way humans think to the way computers process information aren’t useful.