• @Kaldo@beehaw.org
    9 months ago

    I guess the point I take issue with here is the ‘ability to do things not specifically trained on’. LLMs are still doing just that, and often incorrectly - they essentially try to guess the next words based on the huge dataset they were trained on. You can’t actually teach it anything new, or to put it better, it can’t derive conclusions by itself and improve that way - it is not actually intelligent, it’s just freakishly good at guessing.
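
    For illustration, a minimal sketch of that “guess the next word” loop, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (both are assumptions for the example, not anything from this thread):

    ```python
    # Next-token prediction: the model scores every vocabulary token and we
    # greedily take the single most likely one - a statistical guess, not a lookup.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits              # shape: (batch, sequence, vocab)
    next_token_id = logits[0, -1].argmax().item()    # most likely continuation
    print(tokenizer.decode([next_token_id]))         # usually " Paris" - a good guess
    ```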

    • @upstream@beehaw.org
      9 months ago

      Heck, sometimes someone comes to me and asks if some system can solve something they just thought of. Sometimes, albeit very rarely, it just works perfectly, no code changes required.

      Not going to argue that my code is artificial intelligence, but huge AI models obviously have higher odds of getting something random correct, just because they correlate with so much more.

    • @CanadaPlus
      9 months ago

      You can’t actually teach it anything new, or to put it better, it can’t derive conclusions by itself and improve that way

      That is true, at least after training. They don’t have any long-term memory. Short term, you can teach them simple games, though.
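
      As a hedged sketch of that short-term “teaching” (in-context learning): the rule only lives in the prompt and is forgotten as soon as the context is gone. The library and model name here are assumptions, and a model this small may well get the game wrong:

      ```python
      # In-context "teaching": the reversal game is demonstrated in the prompt,
      # never written into the weights, so nothing persists across calls.
      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")

      prompt = (
          "Game: reply with the word reversed.\n"
          "cat -> tac\n"
          "dog -> god\n"
          "bird -> "
      )
      print(generator(prompt, max_new_tokens=4)[0]["generated_text"])
      ```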

      Of course, this always goes into Chinese room territory. Is simply replicating intelligent behavior not enough to be equivalent to it? I like to remind people we’re just a chemical reaction ourselves, according to all our science.