It has to be pure ignorance.

I have only used my work's stupid LLM tool a few times (hey, I have to give it a chance and actually try it before I form opinions).

Holy shit it’s bad. Every single time I use it I waste hours. Even simple tasks, it gets details wrong. I correct it constantly. Then I come back a couple months later, open the same module to do the same task, it gets it wrong again.

These aren’t even tools. They’re just shit. An idiot intern is better.

It's so angering that people think this trash is good. Get ready for a lot of buildings and bridges to collapse because of young engineers trusting a slop machine to be accurate on details. We will look back on this as the worst era in computing.

  • AdamBomb · 23 hours ago

    Yeah, don’t generate code with it. Treat it like StackOverflow. It’s pretty good at that.

    • BlameTheAntifa@lemmy.world · 21 hours ago

      This is the only way I use it, and I do it grudgingly only because AI has ironically also ruined the web and web search. It’s also a last resort for when Kagi isn’t helping.

    • AA5B@lemmy.world · 11 hours ago (edited)

      Unfortunately for me it’s a KPI, so I need to figure out how to do something useful with it.

      LLMs are good for:

      • temporary scripts, like exporting data
      • boilerplate for new code
      • simple or repetitious code like unit tests

      But just in time for my performance review, I spent a week ignoring my work to set up and tweak rule sets. Now it’s noticeably more useful:

      • set context so it understands your code better. No more stupid results like switching languages, making up a new test framework, or randomly using a different mocking tool
      • create actions. I’m very happy with a code refactoring ruleset I created. It successfully finds refactoring opportunities (they match cyclomatic complexity hotspots), recommends approaches, and is really good at presenting recommendations so I can understand them and accept or reject. I tweaked it until it no longer suggests stupid crap, although I really haven’t been able to use much of the code it produces.
      • establish workflow. Still in progress, but it’s a ruleset to teach it how we use our ticketing system, our conventions for commit messages, etc. If I can get it to the point of trusting it, it should automate some of the source control and work tracking actions
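
      For anyone curious, a “set context” ruleset like the one described above can be a plain-text rules file the assistant reads before answering (e.g. a Cursor-style rules file). This is only an illustrative sketch — the project, languages, and tool names here are made up, not from the commenter’s actual setup:

      ```
      # Project context rules (illustrative example; details invented)

      - All application code in this repo is Kotlin. Never switch
        languages in suggestions.
      - Unit tests use JUnit 5 with MockK for mocking. Do not
        introduce Mockito or invent a different test framework.
      - When suggesting a refactor, point at the specific function,
        explain the motivation (e.g. high cyclomatic complexity),
        and present the change as a diff I can accept or reject.
        Never apply changes automatically.
      - Commit messages follow Conventional Commits
        (e.g. `fix: ...`, `feat: ...`).
      ```

      The point of pinning down the test framework and mocking tool explicitly is exactly the failure mode mentioned above: without that context, the model will happily mix in whichever library was most common in its training data.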
      • GrindingGears@lemmy.ca · 10 hours ago

        Just literally make something up and get it to lie about something. This is literally the land of make-believe at this point, all this KPI shit. Don’t stress about it. Execs want slop, give ’em slop.

      • AdamBomb · 18 hours ago

        I agree with all that, especially if your performance is being measured by your use of LLMs. Those are the cases where I find the code generation to be okay without creating comprehension debt.