• @CanadaPlus
    5 points · 9 months ago

    I wonder how many people here actually looked at the article. They’re arguing that the ability to do things it wasn’t specifically trained on is a more natural benchmark for the transition from a traditional algorithm to intelligence than human-level performance. Honestly, it’s an interesting point; aliens wouldn’t be using human-level performance as a benchmark, so it must be a choice that’s subjective to us.

    • @Kaldo@beehaw.org
      4 points · 9 months ago

      I guess the point I have an issue with here is ‘ability to do things not specifically trained on’. LLMs are still only attempting that, and often getting it wrong - they basically just try to guess the next words based on the huge dataset they were trained on. You can’t actually teach them anything new, or to put it better, they can’t actually derive conclusions by themselves and improve that way - they’re not actually intelligent, just freakishly good at guessing.
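      To make the ‘guess the next words’ part concrete, here’s a toy sketch in Python - just bigram counts over a tiny made-up corpus, nothing like how a real LLM is actually built, and every name in it is invented for illustration:

      ```python
      # Toy sketch of "guess the next word from a dataset" - illustration only,
      # not how any real LLM works. The "model" is just bigram counts.
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      # "Training": count which word tends to follow which.
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def guess_next(word):
          """Return the continuation seen most often in training, if any."""
          options = follows.get(word)
          return options.most_common(1)[0][0] if options else None

      # "Generation": keep appending the statistically best guess.
      text = ["the"]
      for _ in range(5):
          nxt = guess_next(text[-1])
          if nxt is None:
              break
          text.append(nxt)

      print(" ".join(text))  # plausible-looking text, no understanding involved
      ```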

      • @upstream@beehaw.org
        2 points · 9 months ago

        Heck, sometimes someone comes to me and asks if some system can solve something they just thought of. Sometimes, albeit very rarely, it just works perfectly, no code changes required.

        Not going to argue that my code is artificial intelligence, but huge AI models obviously have higher odds of getting something random correct, just because it correlates with something they’ve seen.

      • @CanadaPlus
        1 point · 9 months ago

        You can’t actually teach them anything new, or to put it better, they can’t actually derive conclusions by themselves and improve that way

        That is true, at least after training. They don’t have any long-term memory. Short-term, you can teach them simple games, though.
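        To show what I mean by short-term: the ‘teaching’ only lives in the prompt (the context window), not in the model’s weights. Rough sketch below - fake_llm is just a mock stand-in, not a real model or any real library’s API:

        ```python
        # Hedged sketch: "teaching" an LLM something short-term just means the
        # lesson rides along in the prompt. fake_llm is a mock stand-in, not a
        # real model or any real library's API.
        def fake_llm(prompt: str) -> str:
            # Pretend the model follows whatever instructions appear in its context.
            if "reply only with the word BANANA" in prompt:
                return "BANANA"
            return "I don't know that rule."

        lesson = "New rule: from now on, reply only with the word BANANA.\n"

        # Same session, lesson still in the context window: it "knows" the rule.
        print(fake_llm(lesson + "Me: hello\nYou:"))   # -> BANANA

        # Fresh session, lesson gone from the context: so is the "learning".
        print(fake_llm("Me: hello\nYou:"))            # -> I don't know that rule.
        ```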

        Of course, this always goes into Chinese room territory. Is simply replicating intelligent behavior not enough to be equivalent to it? I like to remind people we’re just a chemical reaction ourselves, according to all our science.