From his website stallman.org:

Richard Stallman has cancer. Fortunately it is slow-growing and manageable follicular lymphoma, so he will probably live many more years nonetheless. But he now has to be even more careful not to catch Covid-19.

Recent video of him speaking at GNU 40 Hacker Meeting. Screenshots of video stream.

  • @lemmesay@discuss.tchncs.de · 8 · 9 months ago

    GPT, for example, fails in calculation with problems like knapsack, adjacency matrix, Huffman tree, etc.

    it starts giving garbled output.
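For reference, 0/1 knapsack is an exact dynamic-programming computation; a few lines of Python solve it deterministically, which is precisely the kind of step-by-step bookkeeping a next-token predictor tends to garble:

```python
# 0/1 knapsack via dynamic programming: exact and deterministic,
# unlike an LLM's token-by-token guess at the answer.
def knapsack(weights, values, capacity):
    # dp[c] = best total value achievable with remaining capacity c
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacity downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

print(knapsack([2, 3, 4], [3, 4, 5], 5))  # 7: take the weight-2 and weight-3 items
```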

      • @lloram239@feddit.de · 3 · 9 months ago

        The current LLMs can’t loop and can’t see individual digits, so their failure at seemingly simple math problems is not terribly surprising. For some problems it can help to rephrase the question in such a way that the LLM goes through the individual steps of the calculation, instead of telling you the result directly.

        And more generally, LLMs aren’t exactly the best way to do math anyway. Humans aren’t any good at it either; that’s why we invented calculators, which can do the same task with a lot less computing power and a lot more reliability. LLMs that can interact with external systems are already available behind a paywall.
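A sketch of that "external system" idea (the names and the pattern here are hypothetical, not any vendor's actual API): instead of trusting the model's token-by-token arithmetic, have it emit an arithmetic expression and let an ordinary, exact evaluator play the role of the calculator:

```python
import ast
import operator

# Hypothetical tool-use sketch: the LLM proposes an expression as text,
# and a tiny safe evaluator (the "calculator") computes the exact result.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str):
    """Safely evaluate +, -, *, / arithmetic; reject anything else."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval"))

# The model would hand over "1234 * 5678" instead of guessing the product:
print(calc("1234 * 5678"))  # 7006652
```

The point of the `ast`-based evaluator (rather than `eval`) is that the model's output is untrusted text, so only plain arithmetic nodes are accepted.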

          • Preston Maness ☭ · 4 · 9 months ago

            > The problem is chatgpt will say you the wrong answer confidently unlike humans

            We must be hanging around different humans.

          • @lloram239@feddit.de · 4 · edited · 9 months ago

            Humans are wrong all the time, and confidently so. And it’s an apples-and-oranges comparison anyway, as ChatGPT has to cover essentially all human knowledge, while a single human only knows a tiny subset of it. Nobody expects a human to know everything ChatGPT knows in the first place. A human put into ChatGPT’s place would not perform well at all.

            Humans overestimate their own capabilities because they can spot the mistakes the AI makes, even when they themselves wouldn’t perform any better; at best they’d make different mistakes.

            • @mexicancartel@lemmy.dbzer0.com · 1 · 9 months ago

              So, in the same way, it may not be able to code if it can’t do math. All I see it having is profound English knowledge, plus the data fed into it.

              Human knowledge is limited, I agree. But more knowledge is different from the so-called ability to ‘think’. Maybe that could be done with a different type of neural network, with logic gates kept separate from the neural networks.
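A toy version of that hybrid idea (purely illustrative; the "network" below is a hard-coded stand-in, not a trained model): let a neural part produce fuzzy confidence scores, and keep the exact decision in explicit logic gates outside the network:

```python
# Toy neuro-symbolic sketch (illustrative only): a "neural" scorer
# outputs fuzzy confidences, and separate, explicit logic gates make
# the exact boolean decision the network itself isn't trusted with.
def neural_scores(x):
    # Stand-in for a trained network: confidences in [0, 1].
    return {"is_even": 1.0 if x % 2 == 0 else 0.1,
            "is_positive": 0.9 if x > 0 else 0.2}

def AND(a: bool, b: bool) -> bool:  # explicit logic gate, no learning involved
    return a and b

def decide(x, threshold=0.5):
    s = neural_scores(x)
    # Threshold the fuzzy outputs, then apply exact boolean logic.
    return AND(s["is_even"] > threshold, s["is_positive"] > threshold)

print(decide(4))   # True: even and positive
print(decide(-2))  # False: even but not positive
```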