Avram Piltch is the editor in chief of Tom’s Hardware, and he’s written a thoroughly researched article breaking down the promises and failures of LLM AIs.

  • @CanadaPlus
    link
    1
    edit-2
    10 months ago

    If you invite an LLM to complete the sentence "I’m going to walk my ", one of the things it’s likely to tack on at the end is “dog”, but that isn’t because it understands that a “dog” is a mammalian quadruped often kept as a pet that requires exercise at intervals

    Actually, if you asked it, it would probably be able to formulate an explanation of that. It would do so as a form of text prediction, but the output would be original and correct anyway. How that’s different from a person answering you correctly so you’ll like them and won’t club them for mammoth meat is all philosophy.

    What would it take for you to conclude that an LLM does understand meaning? Would it have to have a meaning subroutine written explicitly into it? How do you know there isn’t one, just in a form we can’t recognise, just as it’s so hard to see thoughts in our pink goo? You have an intuition here, and that’s valid, but the world is often unintuitive, and I’d urge you to suspend final judgement until we have things more nailed down.

    Note that word “intrinsic”. It’s important. If I show a giraffe to a human who’s never seen one before, they may not have a word for it, but they can still determine things about it: it’s an animal, it has four legs, it has a long neck, it’s yellow-brown with darker spots (with only rare exceptions).

    That depends on the human, though, doesn’t it? If it was a blind person, they could only understand it through its calls and its stink; it would probably be too skittish, and too dangerous, to touch anyway. To really explain it, you’d still have to use language, and yet I think a blind person can understand a giraffe just fine.

    This isn’t the first time it’s come down to how powerful language is on here. That seems to be the main point of divergence between skeptics and the more believer-ish camp.
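    The “text prediction” being debated above can be sketched with a deliberately tiny stand-in: a bigram model that counts which word follows which in a made-up corpus. Real LLMs use neural networks over subword tokens trained on vast text, not word counts, but the core move is the same: pick a likely continuation of the context.

    ```python
    from collections import Counter, defaultdict

    # Hypothetical toy corpus, purely for illustration.
    corpus = (
        "i walk my dog every day . "
        "she will walk my dog tomorrow . "
        "i walk my cat sometimes ."
    ).split()

    # Count which word follows each word.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the most frequent continuation of `word` in the corpus."""
        return following[word].most_common(1)[0][0]

    print(predict_next("my"))  # "dog" follows "my" more often than "cat"
    ```

    Nothing in the model “knows” what a dog is; it completes “my” with “dog” because that pairing dominates its training data, which is exactly the distinction the thread is arguing over.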

    • @nyan@lemmy.cafe
      link
      fedilink
      English
      1
      10 months ago

      Going over it, I think that sentience (not necessarily sapience, but that would be nice) is the secret sauce. In order for me to accept that an AI knows something (as opposed to possessing data which it does not actually understand), it has to demonstrate awareness.

      So how can a text-based AI demonstrate awareness, given the constraints of the interface through which it must operate? Reliably generalizing from data not immediately part of the response to the current prompt might do it. Or demonstrating that it understands the consequences of its actions in the real world. Even just indicating that it knows when it’s making things up would be a good start.

      For instance, take the case of the ChatGPT-generated fake legal citations. An AI which would have been fed masses of information relating to law (I’d expect that to include law school textbooks, from archive.org if nowhere else) demonstrated very clearly that it did not know that making up legal cases in response to a factual query was a Very Bad Idea. It did not generalize from data outside the domain of lists of case names that would have told it not to do that, or provide any indication that it knew its actions could be harmful. That AI had data, but not knowledge.

      So we’re back to connections and conceptual models of the world again.

      • @CanadaPlus
        link
        1
        edit-2
        10 months ago

        An AI which would have been fed masses of information relating to law (I’d expect that to include law school textbooks, from archive.org if nowhere else) demonstrated very clearly that it did not know that making up legal cases in response to a factual query was a Very Bad Idea. It did not generalize from data outside the domain of lists of case names that would have told it not to do that, or provide any indication that it knew its actions could be harmful.

        I mean, was it a bad idea? For the lawyer, sure, but ChatGPT was not penalised by its own cost function. It may well have known, in some way, that it was just guessing, and that a legal document is generally serious business, but it doesn’t have any reason to care unless we build one in. Alignment is a whole other dimension to intelligence.

        Reliably generalizing from data not immediately part of the response to the current prompt might do it. Or demonstrating that it understands the consequences of its actions in the real world.

        It sounds like the biggest models do this reasonably well. Commonsense reasoning would count, right?
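        The cost-function point above can be made concrete with a toy calculation (hypothetical numbers, and a simplification of how these models are actually trained): the standard training objective is cross-entropy on next tokens, which scores only how probable the tokens are, not whether what they say is true. A fluent invented citation and a real one can score identically.

        ```python
        import math

        # Hypothetical per-token probabilities the model assigns to two
        # equally fluent continuations: one citing a real case, one
        # citing an invented case.
        real_case = [0.20, 0.30, 0.25]
        fake_case = [0.20, 0.30, 0.25]

        def cross_entropy(probs):
            """Average negative log-likelihood of the tokens."""
            return -sum(math.log(p) for p in probs) / len(probs)

        # Identical loss: this objective cannot tell fact from fabrication.
        print(cross_entropy(real_case) == cross_entropy(fake_case))  # True
        ```

        So “don’t make up cases” has to come from somewhere other than the base training objective, which is the alignment point being made here.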

        • @nyan@lemmy.cafe
          link
          fedilink
          English
          2
          10 months ago

          I think I’m going to bow out of this conversation, on the grounds that I doubt either of us is going to persuade the other, which makes it pointless.

          • @CanadaPlus
            link
            1
            10 months ago

            Alright, that’s fair. We’ll watch what happens next. It was a pleasure, honestly.