It has to be pure ignorance.

I've only used my work's stupid LLM tool a few times (hey, I have to give it a chance and actually try it before I form opinions).

Holy shit it’s bad. Every single time I use it I waste hours. Even simple tasks, it gets details wrong. I correct it constantly. Then I come back a couple months later, open the same module to do the same task, it gets it wrong again.

These aren’t even tools. They’re just shit. An idiot intern is better.

It's so angering that people think this trash is good. Get ready for a lot of buildings and bridges to collapse because of young engineers trusting a slop machine to be accurate on details. We will look back on this as the worst era in computing.

  • utopiah@lemmy.world · 4 hours ago

    An idiot intern is better.

    Well, 100%, because the intern WILL eventually learn. That's the entire difference. It won't be about adjusting the prompt, or adding yet another layer of "reasoning", or waiting for the next "version" with a different code name and a .1% larger dataset. No, you point out to the intern that they made a mistake, try not to call them an idiot, explain WHY it's wrong, optionally explain how to do it right, and THEN the next time they'll avoid it or fix it afterwards.

    That's the entire point of having an intern: initially they suck, BUT as you train them, they stop sucking! Meanwhile an LLM, despite the technical jargon hijacked by the marketing department, doesn't "learn" (from "machine learning") or "train" (from "training dataset") or have "neurons" (from "artificial neural networks"); it's just statistics on the next most probable word, sounding right with 0 "reasoning".
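    To illustrate the "next most probable word" point, here's a toy sketch (the probability table is made up purely for illustration; a real model learns billions of weights, but generation is still sampling like this):

```python
import random

# Toy "language model": a lookup table of next-word probabilities.
# These numbers are invented for illustration only.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
}

def predict_next(context):
    """Sample the next word from the table for the last two words."""
    probs = next_word_probs[tuple(context[-2:])]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

sentence = ["the", "cat"]
sentence.append(predict_next(sentence))  # usually "sat", sometimes "ran" or "is"
```

    No understanding anywhere in there, just weighted dice, which is the commenter's point.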

  • Lutra@lemmy.world · 4 hours ago

    I think the intern comparison fits. The root of the problem is that AI can be very good at the things it is good at. That leads humans to believe it is good at other things, which is often untrue.

    Often the things it is good at are in the set of 'problems machines are good at'. Most professionals, people who are trained and experienced in their field, face problems that are NOT in that set. They are skilled, experienced problem solvers, solving difficult, real-world problems. Not generic workers, or 'human resources'.

    The belief at the top is often that this machine, which is 'so impressive', must therefore be good at everything. And this gets pushed down, where people experience that same truth: the machine is incredibly good at the things it's good at, but it sucks at doing what they do.

    Paraphrasing my grandpa: "To a suit with a hammer, everything looks like a nail."

  • douglasg14b@lemmy.world · 14 hours ago

    I know this community is all about fuck AI, but this is just straight echo chambering.

    But honestly, your post sounds like you're just not using it right? You can get pretty good results from it with enough guardrails. Just because you can't get the results you want doesn't mean that no one can.

    That said, fuck AI. It’s all a bunch of bullshit, but denying real results just means you’re sticking your head in the sand and that’s not how you fix this problem.

    • T156@lemmy.world · 1 hour ago

      Or that it’s not right for their use case.

      Like someone throwing a bunch of data into an LLM and trying to get it to process it into a chart or something. It can work, but it was never designed to be used that way.

      I've got an acquaintance who does that, despite the fact that Python would be a better tool for it.
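      For example, the kind of tabulation people paste into a chatbot is a few deterministic lines of Python (the data here is made up; the point is that the result is exact and repeatable, no hallucination possible):

```python
import csv
import io
from collections import Counter

# Made-up records standing in for "a bunch of data".
raw = """region,amount
north,10
south,5
north,7
"""

# Deterministic aggregation: same input, same answer, every time.
totals = Counter()
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["region"]] += int(row["amount"])
# totals["north"] == 17, totals["south"] == 5
```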

      Personally, I sometimes run a few saved images through a multi-modal 8-billion-parameter local model on my computer, so I can automate giving them more descriptive names than randomnumbers.png, and that seems to work fine. I could do it by hand, but it would take hours or days, compared to minutes, and since it's not too important, it doesn't matter if it's wrong. The resource usage is also less of an issue, since it's my own computer.
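      A rough sketch of that renaming pipeline (the captioning call is stubbed out below; the model, its API, and the example caption are assumptions, only the filename cleanup is concrete):

```python
import re
from pathlib import Path

def caption_image(path):
    # Stand-in for a call to a local multi-modal model; the model,
    # its API, and this caption are assumptions for illustration.
    return "A brown dog catching a frisbee"

def descriptive_name(path):
    """Turn a model caption into a safe, descriptive filename."""
    slug = re.sub(r"[^a-z0-9]+", "_", caption_image(path).lower()).strip("_")
    return slug[:60] + Path(path).suffix

# descriptive_name("IMG_48213.png") -> "a_brown_dog_catching_a_frisbee.png"
```

      Since a wrong caption just means a slightly-off filename, this is the low-stakes kind of task where model errors don't cost anything.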

      • arbitrary_sarcasm@lemmy.world · 2 hours ago

        For a research project, I had to convert 20+ projects from a dataset into a new format. The old format was simply a single build script per project. But I needed a format with a Dockerfile and a script. It would've taken me around a week to do all that one by one.

        I got Claude to do it in 2 hours.
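        Roughly, each conversion boiled down to something like this sketch (the base image, script names, and template here are hypothetical; the real Dockerfiles depend on what each project's build script needs):

```python
from pathlib import Path

def dockerfile_for(build_script):
    # Hypothetical template: the actual base image and steps depend
    # on each project's original build script.
    return (
        "FROM ubuntu:22.04\n"
        "COPY . /src\n"
        "WORKDIR /src\n"
        f"RUN ./{build_script}\n"
    )

def convert(project_dir, build_script="build.sh"):
    """Wrap a script-only project in the new Dockerfile-plus-script format."""
    Path(project_dir, "Dockerfile").write_text(dockerfile_for(build_script))
```

        Mechanical-but-fiddly batch work like this, where every output is easy to verify by just building it, is where the tool paid off.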

        I know people hate AI in this community, but to say it doesn’t do anything good or to insult all people who use it is just pure negativity.

    • cooligula@sh.itjust.works · edited 1 hour ago

      I agree… Saying LLMs are good at nothing is just plain ignorance… One can disagree with the philosophy or dislike hallucinations, but they are definitely good at some things.

      • GrindingGears@lemmy.ca · 5 hours ago

        It's basically like Google with a bit more detail, in my experience. Every time I've tried to use it in a professional context, I've come up massively empty. Pages and pages and pages of absolute walls of text, but nothing actually useful. I mean, I've gotten it to calculate stuff and whatever, but then you examine something and it's not coming up for you like the LLM says it should. Which pretty much immediately means you have to validate everything else, and then it's like, well hey, look, here I am however many hours later, doing it manually.

        Our executives keep telling us to adapt or we'll be on the losing end. At this point, I'd just like the check, please. Because if the company can survive on images of Super Mario committing 9/11, or walls of useless text, or just straight-up make-believe, that's something I'd like to watch from the sidelines.

  • AnotherPenguin@programming.dev · edited 12 hours ago

    For programming, at least, it's a good way to speed up things that you know how to do but that take some time to type, or whose syntax you don't remember. But relying on AI any more than that usually means adding free technical debt and debugging time, or becoming dependent on it.

  • AA5B@lemmy.world · edited 13 hours ago

    While I also don’t see how it’s productive, it can be useful for certain things, certain steps. But it really seems like you need to have the knowledge in question to help it do a good job.

    People underestimate how much handholding it needs. You can tell it to do something and it might, but you may not like the results; with a bit of interaction or context-setting, it might do better. The pretentious call it "prompt engineering", but it's a combination of asking the AI questions and modifying your terminology until it does what you want.

    People also don't seem to understand that AI puts a premium on evaluation. You don't see the code being written, but you own it, so you really need to look through the result in detail to understand whether it's what you wanted. I see this a lot with code, where the LLM produces something but a junior developer doesn't have the skill to evaluate it before committing it to source control.

  • AdamBomb · 18 hours ago

    Yeah, don’t generate code with it. Treat it like StackOverflow. It does pretty good at that.

    • BlameTheAntifa@lemmy.world · 17 hours ago

      This is the only way I use it, and I do it grudgingly only because AI has ironically also ruined the web and web search. It’s also a last resort for when Kagi isn’t helping.

    • AA5B@lemmy.world · edited 6 hours ago

      Unfortunately for me it's a KPI, so I need to figure out how to do something useful with it.

      An LLM is good for:

      • temporary scripts, e.g. to export data
      • boilerplate for new code
      • simple or repetitious code like unit tests

      But just in time for my performance review, I spent a week ignoring my work to set up and tweak rule sets. Now it can be noticeably more useful:

      • set context so it understands your code better. No more stupid results like switching languages, making up a new test framework, or randomly using a different mocking tool
      • create actions. I'm very happy with a code refactoring ruleset I created. It successfully finds refactoring opportunities (matching cyclomatic complexity hotspots), recommends approaches, and is really good at presenting recommendations so I can understand and accept or reject them. I tweaked it until it no longer suggests stupid crap, although I really haven't been able to use much of the code it produces.
      • establish workflow. Still in progress, but a ruleset to understand how we use our ticketing system, conventions for commit messages, etc. If I can get to the point of trusting it, it should automate some of the source control and work-tracking actions
      • GrindingGears@lemmy.ca · 5 hours ago

        Just literally make something up and get it to lie about something. This is literally the land of make-believe at this point, all this KPI shit. Don't stress about it. Execs want slop, give 'em slop.

      • AdamBomb · 13 hours ago

        I agree with all that, especially if your performance is being measured by your use of LLMs. Those are the cases where I find the code generation to be OK and it doesn't create comprehension debt.

  • 100_kg_90_de_belin@feddit.it · 23 hours ago

    I cut my LLM usage to almost zero for environmental and political reasons, but it was helpful enough that I wish it could be sustainable and not just another tool in the dystopian take on the world.

    • IronBird@lemmy.world · 17 hours ago

      Local models are advanced enough that you can run them as needed without a datacenter.

      The datacenter craze is basically just an excuse to get the banks (and eventually the American taxpayer, via bailouts when they fail) to fund your local infrastructure rollout.

      The entire US economy is built around a purposeful boom/bust system, which is very efficient at "bagging" people who don't know the rules.

  • AlecSadler@lemmy.dbzer0.com · 1 day ago

    There are right tools and wrong tools depending on the application.

    There are right ways to use said tools and wrong ways… like you wouldn't use a Phillips-head screwdriver on a flat-head screw.

    I guarantee your company’s provided tool is Copilot or OpenAI based, which is already bottom of the barrel for usefulness.

  • TBi@lemmy.world · 2 days ago

    Generally I equate positivity about LLMs with people's technical ability: I find that the more they say AI is good, the worse a programmer they are.

    • Valmond@lemmy.dbzer0.com · 1 hour ago

      Might be some Dunning-Kruger curve there. Not tooting my own horn, but I know my way around, and I only use AI for programming when I more or less already know how it should work. Which means I verify and fix any eventual problems before committing any code. It does speed up the process; it's a tiny bit simpler than checking things on Stack Overflow, IMO.

      Now, if you don't know your way around and "trust" the output of an LLM, boy are you in trouble 😵‍💫.

    • bridgeenjoyer@sh.itjust.works (OP) · 2 days ago

      Technical literacy in general. My friend thinks it's the greatest thing ever, and he's an idiot with technology (and life in general).

  • CompactFlax@discuss.tchncs.de · 2 days ago

    Every time I use it, I waste hours

    Yes, exactly this. It looks good, I ask it to tweak something. It tweaks, but now something else needs adjustment. Then it comes back unusable.

    It ends up taking the same time as doing it myself. There’s some value perhaps in either the novelty or engagement that keeps me focused but it’s not more efficient.

    When it does work, I'm always worried it's an illusion and I've missed something. Like how you send an email and immediately see the typo.

    People who love it, love it because they don’t need to or care about having accuracy and precision in their work. Sales and marketing, management, etc. Business idiots.

    • bonenode@piefed.social · 1 day ago

      I still think back to a LinkedIn post I saw from someone talking about LLMs and throwing in the sentence:

      ”Apparently I am really great at making super prompts!”

      Which is probably something the LLM told them and they have lost all self-reflection, so…

      • GrindingGears@lemmy.ca · 4 hours ago

        Oh man I deleted LinkedIn last month, and March has been glorious.

        I literally feel like there’s hope for humanity now. Like just a little glimmer of it. I didn’t even really use it, but it somehow sucked my soul and shattered it into 1,000 pieces.

    • Liketearsinrain@lemmy.ml · 1 day ago

      People in this thread are unironically saying this. Waiting for the "just wait half a year" model that will be so advanced.

  • TankovayaDiviziya@lemmy.world · edited 22 hours ago

    It's situation-specific. For tabulating data, yes. For everything else, probably not. But the thing is, you have to ask the LLM if it can read the raw data, to confirm it is reading it right, before ordering it to execute more complex commands and tasks. You have to define the parameters one by one, one query every time.

  • Mikina@programming.dev · 2 days ago

    My experience is that it can work reasonably well, but you have to waste an absurd amount of tokens and have the 1M-token context window.

    I only do gamedev, which means somewhat simpler scripts, but it could handle even more involved systems, if I force it to first document and explain the whole architecture and data flow.

    Did it help? Yes, but it was only slightly faster. Doing it myself would probably have taken a day more.

    It also cost around $50 in tokens, at today's prices, where every AI company is losing trillions, so the costs will get a lot worse. And if I try to conserve tokens, it's shit. You have to feed it $10 of data for it to be useful.

    Add to that the fact that it also causes skill attrition, so once the expensive future arrives, you probably won't be able to afford it, and good luck getting your skills back after that.

    Our company wants us to use it, and the average token consumption is around $100 per day at consumer prices. How is that even worth considering for such a minor gain?

    So, I’ll pass.

    • AA5B@lemmy.world · 6 hours ago

      There’s enough people who drink the koolaid.

      I helped this one guy use an LLM to migrate his test suite to Java 21. It did help him incorporate some new language features, but I don't see how it made up for my time sitting with him.

      … yet to management, he saved 20% of his time. They trust that number despite no actual measurement, and hold it up as efficiency we all need to find.

      But certainly, if a 20% efficiency gain were real, it would be well worth $100.

    • bridgeenjoyer@sh.itjust.works (OP) · 2 days ago

      Especially once they kill the “old net” and you won’t be able to browse anything to learn skills at all.

      Kagi is the only way I can stay sane on Web 3.0.