• Veraticus
    link
    fedilink
    English
    0
    edit-2
    1 year ago

    We do understand the math behind LLMs. Reread what I said. The neural network’s vectors and weights are too complicated for an individual to follow, and they do not map 1:1 onto the words or sentences the LLM was trained on or will output, so individuals cannot easily deduce an LLM’s output by studying its trained state. But we know exactly what they’re doing conceptually, and individually, and in aggregate. Read your own sources from your previous post, that’s what they’re telling you.

    Concepts are indeed abstract but LLMs have no concepts in them, simply vectors. The vectors do not represent concepts in anything close to the same way that your thoughts do. They are not 1:1 with objects, they are not a “thought,” and anyway there is nothing to “think” them. They are literally only word weights, transformed to text at the end of the generation process.

    Your concept of a chair is an abstract thought representation of a chair. An LLM has vectors that combine or decompose in some way to turn into the word “chair,” but are not a concept of a chair or an abstract representation of a chair. It is simply vectors and weights, unrelated to anything that actually exists.

    That is obviously totally different in kind to human thought and abstract concepts. It is just not that, and not even remotely similar.

    You say you are familiar with neural networks and AI but these are really basic underpinnings of those concepts that you are misunderstanding. Maybe you need to do more research here before asserting your experience?

    Edit: And in relation to your links – the vectors do not represent single words, but tokens, which indeed might be a whole word, but could just as well be part of a word or an entire phrase. Tokens do not represent the meaning of a word/partial word/phrase, just the statistical use of that word given the data the word was found in. Equating these vectors with human thoughts oversimplifies the complexities inherent in human cognition and misunderstands the limitations of LLMs.

    • @SirGolan
      link
      1
      edit-2
      1 year ago

      But we know exactly what they’re doing conceptually, and individually, and in aggregate.

      Can you define and give examples of what you mean at each level here? Maybe we’re just not understanding each other and mean the same thing.

      Read your own sources from your previous post, that’s what they’re telling you.

      The Anthropic one is saying they think they have a way to figure it out, but it hasn’t been tested on large models. This is their last paragraph:

      Our next challenge is to scale this approach up from the small model we demonstrate success on to frontier models which are many times larger and substantially more complicated. For the first time, we feel that the next primary obstacle to interpreting large language models is engineering rather than science.

      They are literally only able to do this on a small one-layer transformer model. GPT-3 has 96 layers and 175 billion parameters.

      Also, in their linked paper:

      A key challenge to our agenda of reverse engineering neural networks is the curse of dimensionality: as we study ever-larger models, the volume of the latent space representing the model’s internal state that we need to interpret grows exponentially. We do not currently see a way to understand, search or enumerate such a space unless it can be decomposed into independent components, each of which we can understand on its own.

      Under the Future Work heading:

      Scaling the application of sparse autoencoders to frontier models strikes us as one of the most important questions going forward. We’re quite hopeful that these or similar methods will work – Cunningham et al.'s work [17] seems to suggest this approach can work on somewhat larger models, and we have preliminary results that point in the same direction. However, there are significant computational challenges to be overcome.

      How are you getting from that that this is a solved problem?

      Concepts are indeed abstract but LLMs have no concepts in them, simply vectors. The vectors do not represent concepts in anything close to the same way that your thoughts do. They are not 1:1 with objects, they are not a “thought,” and anyway there is nothing to “think” them. They are literally only word weights, transformed to text at the end of the generation process.

      Again, you aren’t making sense here. Word/sentence vectors are literally a way to represent the concept of those words/sentences. That’s what they were built for. That’s how they are described. Let’s take a step back to try to understand each other.

      Are you trying to say that only human minds can understand concepts? I don’t buy the “human brains are magic” bit, and neither does our current understanding of physics. Are you assuming I’m saying that LLMs are sentient, conscious, have thoughts or similar? I’m not. Jury’s out on the thought thing, but I certainly don’t believe the other two things. There’s no magic in them, same as with human brains. We just don’t fully understand what happens inside either. Anthropic, in the work I quoted, is making good progress on that, and I think they may be pretty close, but in terms of LLMs (and not Small LMs), they are still a black box. We know the math behind them, the software, etc. We have some theories. We still do not understand. If you can prove otherwise, please provide me with a source. Stuff is happening really fast in AI, and maybe I blinked and missed something.

      I think you’re maybe having a hard time with using numbers to represent concepts. While a lot less abstract, we do this all the time in geometry. ((0, 0), (10, 0), (10, 10), (0, 10), (0, 0)) What’s that? It’s a square. Word vectors work differently but have the same outcome (albeit in a more abstract way).
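
      Here’s a rough sketch of that in plain Python (just an illustration, nothing to do with LLMs): the coordinates alone are enough to recover the “squareness”:

      import math

      # The same coordinates, treated purely as data. Nothing here "knows"
      # what a square is, but the square-ness is recoverable from the numbers.
      points = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]

      # Distances between consecutive vertices (the four sides)
      sides = [math.dist(points[i], points[i + 1]) for i in range(4)]

      # Four equal sides plus equal diagonals => square
      diagonals = [math.dist(points[0], points[2]), math.dist(points[1], points[3])]
      print(sides)      # [10.0, 10.0, 10.0, 10.0]
      print(diagonals)  # both ~14.142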

      the vectors do not represent single words, but tokens

      I was talking about word vectors, where the vectors DO represent words. It’s in the name. LLMs don’t specifically use word vectors, but the embeddings they do use work similarly.

      Tokens do not represent the meaning of a word/partial word/phrase, just the statistical use of that word given the data the word was found in.

      You are correct that tokens don’t represent the meaning of a word. However, tokens are scalars. You are conflating tokens and embeddings / word vectors here. Tokens are used to simplify converting a string into a format a neural network can understand (a vector). If we used each ASCII character in the input/output string as a vector input to the network, we’d have to have a lot more parameters than if we combine the characters in some way (i.e. tokens). As you said, they can be a word or a part of a word. There are no statistics embedded in the tokens (there are some methods of using statistics to choose what tokens to use, but that’s decided before even training the model and cannot ever change [with our current approach]). You can read here for more information on tokens. Or you can play around with the gpt3 tokenizer.
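
      As a rough illustration (using the tiktoken package, which exposes the tokenizers OpenAI’s models use; any tokenizer would show the same thing), tokens really are just integer IDs for chunks of text:

      # pip install tiktoken -- rough sketch only; tokens are just integer IDs
      import tiktoken

      enc = tiktoken.get_encoding("r50k_base")  # a GPT-3-era encoding

      ids = enc.encode("The chair is comfortable")
      print(ids)                             # a short list of integers
      print([enc.decode([i]) for i in ids])  # the text chunk each ID stands for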

      Your concept of a chair is an abstract thought representation of a chair. An LLM has vectors that combine or decompose in some way to turn into the word “chair,” but are not a concept of a chair or an abstract representation of a chair. It is simply vectors and weights, unrelated to anything that actually exists.

      If you know Python, you should grab gensim (often used alongside nltk) and experiment with its word vectors.

      model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
      [('queen', 0.71181…)]

      king + woman - man = queen

      Seems like an abstract representation of those things as concepts using math. For the record, word vectors are actually pretty understandable/understood by people because you can visualize them easily. When you do, you find similar concepts clustered together (this is how vector search works except with text embeddings). Anyway, it just really seems like linking numbers to concepts is not clicking with you, or you somehow think it’s not possible. Reading up on computational linguistics might help.
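
      If you want to reproduce that yourself, here’s a rough, self-contained sketch using gensim’s downloadable pretrained vectors (the model name is just one convenient choice; the exact scores will differ from the numbers above):

      # pip install gensim -- sketch only; any pretrained word-vector model works
      import gensim.downloader as api

      model = api.load("glove-wiki-gigaword-100")  # downloads the vectors on first run

      # king + woman - man ~= queen
      print(model.most_similar(positive=["woman", "king"], negative=["man"], topn=1))

      # similar concepts cluster together
      print(model.most_similar("chair", topn=3))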

      That is obviously totally different in kind to human thought and abstract concepts. It is just not that, and not even remotely similar.

      Yes, neural networks (although originally conceived as a computer version of a neuron) are a lot different from how actual brains work, as we’ve learned in the decades since they were invented. If you’re saying that intelligence and understanding is limited to the human mind, then please point to some non-religious literature that backs up your assertion.

      You say you are familiar with neural networks and AI but these are really basic underpinnings of those concepts that you are misunderstanding. Maybe you need to do more research here before asserting your experience?

      I’m pretty confident in my understanding, though I’m always open to new ideas that are backed with peer reviewed research. I’m not going to get into a dick waving contest here, so I guess we’ll have to agree to disagree.

      As a side note, going back to your definition of intelligence: that was the definition from psychology. I’ll note that the Wikipedia page for Intelligence has this to say:

      The definition of intelligence is controversial, varying in what its abilities are and whether or not it is quantifiable.

      And so I’ll reiterate that we don’t have a good definition of intelligence.

      • Veraticus
        link
        fedilink
        English
        1
        1 year ago

        The Anthropic one is saying they think they have a way to figure it out, but it hasn’t been tested on large models. This is their last paragraph:

        Again, all your quotes indicate that what they’ve figured out is a way to inspect the interior state of models and transform the vector space into something humans can understand without analyzing the output.

        I think your confusion is this: you believe that because we don’t know what the vector space is on the inside, we don’t know how AI works. But we actually do know how it accomplishes what it accomplishes. Simply because its interior is a black box doesn’t mean we don’t understand how we built that black box, or how it operates and functions.

        For an overview of how many different kinds of LLMs function, here’s a good paper: https://arxiv.org/pdf/2307.06435.pdf You’ll note that nowhere is there any confusion about the process of how they process input or produce output. It is all extremely well-understood. You are correct that we cannot interrogate their internals, but that is also not what I mean, at least, when I say that we can understand them and how they work.

        I also can’t inspect the electrons moving through my computer’s CPU. Does that mean we don’t understand how computers work? Is there intelligence in there?

        I think you’re maybe having a hard time with using numbers to represent concepts. While a lot less abstract, we do this all the time in geometry. ((0, 0), (10, 0), (10, 10), (0, 10), (0, 0)) What’s that? It’s a square. Word vectors work differently but have the same outcome (albeit in a more abstract way).

        No, that is not my main objection. It is your anthropomorphization of data and LLMs – your claim that they “have intelligence.” From your initial post:

        But also, can you define what intelligence is? Are you sure it isn’t whatever LLMs are doing under the hood, deep in hidden layers?

        I think you’re getting caught up in trying to define what intelligence is; but I am simply stating what it is not. It is not a complex statistical model with no self-awareness, no semantic understanding, no ability to learn, no emotional or ethical dimensionality, no qualia…

        ((0, 0), (10, 0), (10, 10), (0, 10), (0, 0)) is a square to humans. This is the crux of the problem: it is not a “square” to a computer because a “square” is a human classification. Your thoughts about squares are not just more robust than GPT’s, they are a different kind of thing altogether. For GPT, a square is a token that it has been trained to use in a context-appropriate manner with no idea of what it represents. It lacks semantic understanding of squares. As do all computers.

        If you’re saying that intelligence and understanding is limited to the human mind, then please point to some non-religious literature that backs up your assertion.

        I’m disappointed that you’re asking me to prove a negative. The burden of proof is on you to show that GPT4 is actually intelligent. I don’t believe intelligence and understanding are for humans only; animals clearly show it too. But GPT4 does not.

        • @SirGolan
          link
          1
          edit-2
          1 year ago

          Simply because its interior is a black box doesn’t mean we don’t understand how we built that black box, or how it operates and functions.

          Wait a sec. I think we’re saying the same thing here. I guess it depends on what you mean by how it operates and functions. I’ve said multiple times we understand the math and the code. We understand how values propagate through it because, again, that’s all the math and code people wrote. What we don’t understand is how it uses that math and code to actually do things that seem intelligent (putting aside the point of whether it is or is not intelligent). If that’s what you’re arguing then great, we’re on the same page!
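
          To be concrete about what I mean by “we understand the math and the code”: here’s a toy feed-forward network in plain numpy (purely illustrative, obviously not an LLM). Every arithmetic step is transparent; what the learned weights mean is the part that isn’t:

          import numpy as np

          # Toy network: every step of the forward pass is well-understood
          # arithmetic, even though the weights themselves carry no
          # human-readable meaning.
          rng = np.random.default_rng(0)
          W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # stand-in "trained" weights
          W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

          def forward(x):
              h = np.maximum(0, x @ W1 + b1)                # ReLU hidden layer
              logits = h @ W2 + b2
              return np.exp(logits) / np.exp(logits).sum()  # softmax over 3 outputs

          print(forward(rng.normal(size=4)))  # a probability distribution over 3 outputs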

          I also can’t inspect the electrons moving through my computer’s CPU. Does that mean we don’t understand how computers work? Is there intelligence in there?

          Well, I don’t have the equipment to look at electrons either (I don’t think that tech exists), but I can take a logic probe and get some information that I could probably understand, or someone who designs CPUs could look at the gates and whatever and tell you what they did and how they relate to whatever higher level operations. You’re bringing up something completely different here. Computers are not a black box at all. LLMs are-- you just said that yourself.

          No, that is not my main objection. It is your anthropomorphization of data and LLMs

          I’m not anthropomorphizing them. What are you talking about? I keep saying they don’t work like human brains. I just said I don’t think they’re sentient or conscious. I said they don’t have agency.

          I think you’re getting caught up in trying to define what intelligence is; but I am simply stating what it is not.

          How do you know what it’s not if we can’t define what it is?

          It is not a complex statistical model with no self-awareness, no semantic understanding, no ability to learn, no emotional or ethical dimensionality, no qualia…

          Jury’s still out on whether human brains are complex statistical models. I mean (from here)…

          Our brains have learned, through evolution and experience, the statistical properties of our natural environments and exploit this knowledge when performing perceptual tasks.

          I don’t make any claim to understanding neuroscience, and I don’t think that article is saying for sure we know that.

          Anyway, in-context learning is a thing for LLMs. Maybe one day we’ll figure out how to have them adjust their weights after training, but that’s not happening now (well people are experimenting with it).
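
          By in-context learning I mean something like this (a rough sketch with the OpenAI Python client; the model name and an API key in the environment are assumptions): the pattern is picked up entirely from the prompt, with no weight updates at all:

          # pip install openai -- sketch only; assumes OPENAI_API_KEY is set
          from openai import OpenAI

          client = OpenAI()

          prompt = (
              "Translate to pig latin.\n"
              "hello -> ellohay\n"
              "world -> orldway\n"
              "chair ->"
          )

          resp = client.chat.completions.create(
              model="gpt-4",
              messages=[{"role": "user", "content": prompt}],
          )
          print(resp.choices[0].message.content)  # likely "airchay", inferred from the examples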

          New research is showing they do have semantic understanding.

          They don’t by themselves have self-awareness, but a software framework built up around them can generally do that to some extent.

          They do understand emotions and ethics. Someone built a fun GPTrolley web site a while ago. I think it died pretty quickly because it was too expensive for them, but it had GPT 3(?) answering Trolley Problem questions. It did (in my memory of it) like to save any “AGI” on one track over humans, which was amusing. They don’t have emotions, no. Does something have to have emotions to be intelligent?

          And no, I’ve said all along they aren’t conscious, so no qualia. Again, is that required for intelligence?

          This is the crux of the problem: it is not a “square” to a computer because a “square” is a human classification. Your thoughts about squares are not just more robust than GPT’s, they are a different kind of thing altogether. For GPT, a square is a token that it has been trained to use in a context-appropriate manner with no idea of what it represents. It lacks semantic understanding of squares. As do all computers.

          No. A square to GPTs is not just a token. It’s associated with some meaning. I’m not going to re-hash embedding and word vectors and whatever since I feel like I’ve explained that to death.

          If you’re saying that intelligence and understanding is limited to the human mind, then please point to some non-religious literature that backs up your assertion.

          I’m disappointed that you’re asking me to prove a negative.

          I’m literally not. “Intelligence is limited to the human mind” is not a negative.

          The burden of proof is on you to show that GPT4 is actually intelligent. I don’t believe intelligence and understanding are for humans only; animals clearly show it too. But GPT4 does not.

          I feel like I’ve laid out my argument for that mostly through the Microsoft and Max Tegmark papers. Are you saying intelligence is only the domain of biological life?

          Here’s a question-- are you conflating “intelligence” with “general intelligence” like AGI? I find a lot of people think “AI” means “AGI.” It doesn’t help that some people do say those things interchangeably. I was just reading a recent argument between Yann LeCun and Yoshua Bengio and they were both totally doing that. Anyway, I don’t at all believe GPT4 is AGI or that LLMs could even be AGI.

          For an overview of how many different kinds of LLMs function, here’s a good paper: https://arxiv.org/pdf/2307.06435.pdf

          Looks like a great paper-- I hadn’t seen it yet. I know how LLMs are constructed (generally-- while I could go and write some code for a multi-layer neural network with back propagation without looking anything up, I couldn’t do that for an LLM without looking at a diagram of the layers or whatnot).

    • @BitSound@lemmy.world
      link
      fedilink
      0
      1 year ago

      Your concept of a chair is an abstract thought representation of a chair. An LLM has vectors that combine or decompose in some way to turn into the word “chair,” but are not a concept of a chair or an abstract representation of a chair. It is simply vectors and weights, unrelated to anything that actually exists.

      Just so incredibly wrong. Fortunately, I can save myself the time of arguing with such a misunderstanding. GPT-4 is here to help:

      This reads like a misunderstanding of how LLMs (like GPT) work. Saying an LLM’s understanding is “simply vectors and weights” is like saying our brain’s understanding is just “neurons and synapses”. Both systems are trying to capture patterns in data. The LLM does have a representation of a chair, but it’s in its own encoded form, much like our neurons have encoded representations of concepts. Oversimplifying and saying it’s unrelated to anything that actually exists misses the point of how pattern recognition and information encoding works in both machines and humans.

      • Veraticus
        link
        fedilink
        English
        0
        1 year ago

        Are you kidding me? I sourced GPT4 itself disagreeing with you that it is intelligent and you told me it’s lying. And here you are, using it to try to reinforce your point? Are you for real or is this some kind of complicated game?

          • Veraticus
            link
            fedilink
            English
            -1
            edit-2
            1 year ago

            Here, let’s ask GPT4 itself since you’ve decided it’s suddenly an okay source:

            Your statement is correct in asserting that the vector representation in a language model is not an abstract representation. It’s purely a mathematical construct. However, saying it’s “unrelated to anything that actually exists” might be an overstatement. These vectors do capture statistical patterns in human language, which are reflections of human thought and culture. They’re just not capable of the deep, nuanced understanding that comes from human experience.

            I accept it’s an overstatement. But it is neither “incredibly wrong,” nor is it thought. (Or intelligence.)

            • @SirGolan
              link
              1
              1 year ago

              I’d just like to step in here and mention that asking an LLM is probably not a good proof (and this is directed at both of you). Its understanding of AI is from before it was trained, so it is wildly out of date at this point given how much has happened in the space since.

              • Veraticus
                link
                fedilink
                English
                1
                1 year ago

                GPT4 has knowledge of its own training since it was trained in 2022.

                • @SirGolan
                  link
                  1
                  edit-2
                  1 year ago

                  Care to provide some proof of that? They did update their system prompt to include a few things like it is now GPT4 (it used to always say GPT3). Other than that, I don’t think it knows anything. But in general, I was more talking about developments in AI since it was trained which it certainly does not know.

                  Edit: hmm I just reviewed our discussion and I note you only provided one link which was to the psychological definition of intelligence. You otherwise are providing no sources to back up your claims while my responses are full of them. Please start backing up your assertions, or provide some evidence you are an expert in the field.

              • Veraticus
                link
                fedilink
                English
                -1
                1 year ago

                I was in this case – but the overall point I made is still correct. If winning this minor battle is what you were seeking, congratulations. You are no closer to understanding the truth of this or what we were actually talking about. Not that that was either your point or within your capabilities.