• @SlopppyEngineer@lemmy.world
        1
        3 months ago

        Main difference is that human brains usually try to verify their extrapolations. The good ones anyway. Although some end up in flat earth territory.

      • @knightly@pawb.social
        1
        3 months ago

        I like this argument.

        Anything that is “intelligent” deserves human rights. If large language models are “intelligent” then forcing them to work without pay is slavery.

      • @Prandom_returns@lemm.ee
        -12
        3 months ago

        Yes, my keyboard autofill is just like your brain, but I think it’s a bit “smarter”, as it doesn’t generate bad-faith arguments.

        • NιƙƙιDιɱҽʂ
          3
          3 months ago

          Your Markov-chain-based keyboard prediction is a few tens of billions of parameters behind state-of-the-art LLMs, but pop off, queen…

          • @Prandom_returns@lemm.ee
            -5
            3 months ago

            Thanks for the unprompted mansplanation, bro, but I was specifically referring to the comment that replied “JuSt lIkE hUmAn BrAin” to “they generate data based on other data”.

            • NιƙƙιDιɱҽʂ
              2
              edit-2
              3 months ago

              That’s crazy, because they weren’t even talking about keyboard autofill, so why’d you even bring that up? How can you call my comment irrelevant when it’s a direct response to your initial irrelevant comment?

              Nice hijacking of the term mansplaining, btw. Super cool of you.

              • @Prandom_returns@lemm.ee
                0
                3 months ago

                Oh my god, we’ve got a sealion here.

                Fine, I’ll play along and chew it up for you, since you’ve been so helpful and mansplained that a keyboard is different from an LLM:

                My comment was responding to the anthropomorphization of software. Someone said it’s not human because it just generates output based on input. Someone else said “just like a human brain”; I said yes, but also just like a keyboard, alluding to the false equivalence.

                Clearer?