• The Doctor
    link
    fedilink
    English
    40
    9 months ago

    Oh, for fuck’s sake… no. It isn’t. And I find myself pondering whether or not the article’s authors are themselves sapient.

    • @khalic@beehaw.org
      link
      fedilink
      14
      edit-2
      9 months ago

    I kind of regret learning ML sometimes. Being one of the 10 people per km² who understand how it works is so annoying. It’s just a fancy mirror ffs, stop making weird faces at it you baboons!

      • @SenorBolsa@beehaw.org
        link
        fedilink
        3
        edit-2
        9 months ago

        The best part is it’s not even that complicated conceptually. You don’t need to study it to understand the basic idea and some of its limitations.

      • @jarfil@beehaw.org
        link
        fedilink
        1
        9 months ago

        Do you really understand how it works? What would you call a neural network with mirror neurons primed to react to certain stimulus patterns as the network gets trained… a mirror, or a baboon?

          • @jarfil@beehaw.org
            link
            fedilink
            1
            edit-2
            9 months ago

            What do you call a neuron “that reacts both when a particular action is performed and when it is only observed”? Current LLMs are made exclusively of mirror neurons, since their output (what they perform) is the same kind of action as their input (what they observe).
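            The claim that an LLM’s output lives in the same space as its input (tokens in, tokens out, with each output fed straight back in) can be sketched as a toy autoregressive loop. This is purely illustrative: `toy_next_token` and its bigram table are hypothetical stand-ins for a real model’s forward pass, not anyone’s actual implementation.

```python
from typing import List

def toy_next_token(context: List[str]) -> str:
    """Stand-in for an LLM forward pass: picks the next token
    from a fixed bigram table (hypothetical, purely illustrative)."""
    table = {"the": "cat", "cat": "sat", "sat": "down"}
    return table.get(context[-1], "<eos>")

def generate(prompt: List[str], max_new: int = 5) -> List[str]:
    tokens = list(prompt)
    for _ in range(max_new):
        nxt = toy_next_token(tokens)  # the output is a token...
        if nxt == "<eos>":
            break
        tokens.append(nxt)  # ...appended back onto the input context
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

            The point of the sketch is only that generation consumes and produces the same type of object (a token), so each output immediately becomes part of the next input; whether that makes the network a “mirror” is the question the thread is arguing about.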

            • @EthicalAI@beehaw.org
              link
              fedilink
              1
              9 months ago

              I can’t even parse what you mean when you say their input is the same as their output; that would imply they don’t transform their input at all, which would defeat their purpose. This is nonsense.