• Vincent@feddit.nl · 12 hours ago

      If these tools are indeed finding security issues, then ignoring them means someone else will find those issues - and abuse them.

      • artyom@piefed.social · 12 hours ago

        Doesn’t matter if they find security issues (they won’t) if they’re buried in a veritable haystack of false reports.

        • Vincent@feddit.nl · 9 hours ago

          That’s true. If they’re not, though, or if they’re easy to generate yourself, then you’re kinda forced to pay attention, if you care about the security of your project.

          I don’t have the expertise or experience to say whether that is true. But GregKH seems to think so, and other prolific projects seem to be coming to the same conclusions.

          I get that it’s attractive to think that AI isn’t capable of it. But it’s important that what you believe to be true is, and stays, based on reality rather than on what you wish were true. And it’s especially important to be wary when you really want something to be true.

          • artyom@piefed.social · 6 hours ago

            “I get that it’s attractive to think that AI isn’t capable of it”

            LOL you think this is just what I want to believe? Quite the opposite, I assure you.

            “But GregKH seems to think so, and other prolific projects seem to be coming to the same conclusions.”

            Lots of people are deluded, and subject to mental manipulation, unable to understand what’s happening in front of them. Falling prey to powerful marketing with unlimited budgets. Ever heard of “AI psychosis”?

      • communism@lemmy.ml · 11 hours ago

        The reasons people use Linux are qualities other than the ones affected by AI use. AI use has implications for code quality, correctness, and security, but none of those are why people use Linux. People use Linux over BSD or other Unixes because Linux supports the most hardware, has the biggest software ecosystem, and, being a monolithic kernel, is much easier to get up and running on lots of hardware without needing to install separate drivers. Those qualities still need to be matched by the BSDs or whatever alternatives before people will start migrating from Linux.

        I say this as someone who regularly uses and enjoys an OpenBSD machine. I couldn’t use it as my main machine because it just doesn’t have the same software availability and plug-and-use hardware support as Linux. Porting software to a new target is not a trivial task for most users. I package a few things for the AUR and that’s much easier as the software already supports x86_64 Linux; I just have to write a script to install it. I think OpenBSD is a nice OS but I highly doubt Linux users will migrate any time soon. Think about how many people were clinging onto X11 because Wayland didn’t support their super specific workflow. And a migration to an entirely different OS would be worse.

        • hexagonwin@lemmy.today · 10 hours ago

          I usually use FreeBSD (haven’t tried OpenBSD yet…) and its Linux binary compat is almost perfect; it surprisingly just works for most things, although there are some rough edges as a desktop.
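
          For anyone wanting to try it: the compat layer (the “Linuxulator”) is off by default. A rough sketch of turning it on, from memory; double-check the module and package names against the FreeBSD Handbook:

```shell
# Sketch only (run as root on FreeBSD): enable the Linux binary compat layer.
kldload linux64              # load the 64-bit Linux ABI module now
sysrc linux_enable="YES"     # persist the setting in /etc/rc.conf
pkg install linux_c7         # install a CentOS 7 userland under /compat/linux
```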

          • communism@lemmy.ml · 8 hours ago

            How’s the firmware support/availability? For things like graphics tablets, graphics drivers, etc?

            I don’t think OpenBSD has binary compat with Linux, but most Linux software should just need a recompile for BSDs. I’m discouraged from porting, though, because when it’s not a simple recompile I’d have much less idea what to do.

      • artyom@piefed.social · 18 hours ago

        “Unfortunately AI has gotten ahold of several projects”

        Why does that matter?

        • TacoSocks@infosec.pub · 11 hours ago

          • Legal Ramifications
          • Legal Cases And Law Problems
          • License Problems
          • Stolen Training Data
          • Environmental Impact
          • Labor
          • Poor Code Quality
          • Deskilling
          • Infosec risks
          • Health and Safety
          • Ties to the War Industrial Complex
          • Effects on Policing
          • Maintainer Fatigue
          • Effect on Hardware Prices

          This website linked in the post you replied to lists a bunch of reasons.

            • TacoSocks@infosec.pub · 3 hours ago

              When you say ignore AI, do you mean stick your head in the sand and hope it goes away or actively avoid interacting with AI and AI based projects?

              • artyom@piefed.social · 3 hours ago

                I don’t expect it to “go away”. It was around long before ChatGPT and it’ll be around long after. I’m just crossing my fingers that the bubble pops sooner rather than later, so we can go back to it being just a nonsense marketing term instead of the thing people talk about every 5 seconds, for whatever reason.

                But besides that, both. I’ve been yelled at that AI is the second coming of Jesus nonstop for the last 4 years. And every time I’ve given it an ounce of consideration, it massively disappoints. So now I want to stick my head in the sand until those people STFU.

                I think the whole market hinges on what is essentially a “false dawn”. People think “we’re so close” and that some innovation is right around the corner that will suddenly make it useful, but I don’t think it’s coming. I think we’re probably 20+ years away.

          • m532@lemmygrad.ml · 11 hours ago

            “Stolen”

            Who has had their stuff taken away in a way that they don’t have it anymore?

            “But copying is actually exactly like plundering a whole ship” - RIAA

            Copyright crusaders have always been pathetic bootlickers of capitalist middlemen parasites who enclose the commons and then demand ransom.

        • lambalicious · 9 hours ago

          It reduces the foothold available for AI-free projects, in particular once “big enough” projects like Firefox or Linux get infected, since there is significant inertia against switching to, or even developing, an alternative (a web browser might have been a casual dev project in 1998; right now it almost requires a corporation to carry the development). It also normalizes the idea of having AI in development, which is in itself dangerous.

      • misk@piefed.social · 14 hours ago

        They list iTerm2 as affected but list only Linux-specific terminal emulators as replacements, even though there are plenty of those on macOS. At this point I think those lists are prepared by LLM boosters too.

    • fruitycoder@sh.itjust.works · 18 hours ago

      Because IF it is a super-useful tool and you are being paid to dev, then you will have to explain why not. Like if a framer showed up to a construction site and refused to use power tools.

      • artyom@piefed.social · 18 hours ago

        But it’s not. This is more like a framer showing up and being told to go home so the power tools can build a house that looks like the fucking Tower of Pisa.

        • terabyterex@lemmy.world · 17 hours ago

          This is not how devs are using AI. They use it as a tool…

          Non-devs may be using AI this way, and the house falls apart.

          • artyom@piefed.social · 17 hours ago

            No they don’t. AI is not a tool. A tradesperson wields a tool. AI just has them point it in a general direction, and then it does it “for” them while filling the result with shitty bugs that they either have to go back and remove (which ends up taking even more time), miss entirely (which leads to broken code), or just ship without giving a fuck.

  • turdas@suppo.fi · 20 hours ago

    Getting the hottest new Linux reporting from *checks address bar* PC Gamer?

    • James R Kirk@startrek.website · 19 hours ago

      PC Gamer is just reporting on the original story from the Register, and the quote from the maintainer of the stable branch is real.

  • Sims@lemmy.ml · 20 hours ago

    Seems to me that peeps outside of the AI development sphere/interest are not aware of how quickly ‘flaws’ get fixed. There are still people who don’t think AI will ever be useful, or intelligent, based on some ‘archaic’ performance from many months ago. Reality will hit hard, I think.

    Personally, I have never seen any development move faster than artificial intelligence, and whatever it can’t do ‘properly’ today, it can do tomorrow or the day after.

    The current AI/agentic status quo is the clawd family of frameworks plus a SOTA model. However, these are really stupid architectures (every 30 minutes, the LLM is yanked back and presented with the original tasks in an md file; that’s it) and are WAY behind what we can do according to the papers/newest developments. Papers quickly trickle down into architectures though, and the next family of agentic frameworks will strike as fast as the clawd phenomenon did.

    We are not far from general AI; not particularly because of LLMs/transformers themselves, but because of the external cognitive ‘harness’ being built all around them. While the harness adds cognitive states to the architecture, many of the typical agentic features are also being built into the model itself, so the cognitive functionality of the harness is injected into the models, and the new harness fixes other ‘flaws’. We will see one clawd moment after another, faster and faster, getting better and better…

    I hope peeps live in a society that takes care of each other and doesn’t treat people as lazy bums who “just wouldn’t work hard enough”. It’s going to be horrible for peeps in the US and similar capitalist ‘might is right’ societies. There is NO safety net for ‘failure’ there.

    Back to article: It was bound to happen within a year or so.

    • NewOldGuard@lemmy.ml · 19 hours ago

      They still can’t pass the basic tests people cooked up years ago lmao. All these companies do is optimize for benchmarks and overfit for the most egregious shortcomings. The fundamental limitations of a neural net remain. Also, I just asked GPT-5 how many Ls are in “mammalian”; it said 2 lol
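
      For the record, the deterministic version of that question is a one-liner; the usual explanation for the flub is that models see tokens, not characters:

```python
# Letter-counting is trivial in actual code; LLMs operate on tokens
# rather than individual characters, which is one reason they fumble it.
word = "mammalian"
print(word.count("l"))  # prints 1 (not the 2 the chatbot claimed)
```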

      Source: guy who cleans up coworker’s slop code as a significant portion of their day job

    • theunknownmuncher@lemmy.world · 16 hours ago

      “Personally, I have never seen any development move faster than artificial intelligence”

      You must not be very familiar, then. We’ve been on diminishing returns and a plateau for several years now with no major leaps in performance, no potential answers to the flaws in LLMs, and no AI company securing a real lead over any others, or even a profitable business model for AI.

      There have been a lot of inference-level tricks, like CoT to maintain coherence, MoE to make inference more cost effective, and techniques to extend the context windows; but literally no groundbreaking or foundational changes to the transformer architecture. At all. And they’re still hitting the same performance and scaling constraints.

      We’re basically stagnant, throwing more training tokens at models, but not getting any significant gains back from it anymore.

      No, we are nowhere near AGI and do not even know where to begin making gains towards it. The fundamentally different agentic framework and “cognitive harness” you describe are quite literally fantasy delusions that don’t exist… did an LLM tell you about them? https://en.wikipedia.org/wiki/Chatbot_psychosis

    • James R Kirk@startrek.website · 19 hours ago

      Not that I don’t believe you but do you have a source for any of that? Specifically about the development timeline of the “cognitive harness”?