idk if it's serious or not, but it's what I saw in the Indeed newsletter today.

  • garretble@lemmy.world · +8 · 6 hours ago

    So I ran into my first genAI coding junk yesterday when I was on a call with my boss, and as a solution to a problem we were talking about, he said, “hold on, let me ask Gemini.”

    I felt my soul die a little bit at that point.

    But the fun part is that Gemini first didn’t provide a good answer.

    And then on the second go it also didn’t provide a good answer.

    And then on the third attempt we decided to table the issue for the moment because prompt coding on a call was taking longer than I think he expected.

    I really disliked that experience.

    • CanadaPlus · +1 · 36 minutes ago

      Hmm, was the boss hoping to turn that into a “why do I even pay you” moment?

  • Nemoder@lemmy.ml · +11 · 10 hours ago

    You know the “vibes” of different models - when to use

    Would that be a vibe-rater?

  • ☂️-@lemmy.ml · +67 · 1 day ago

    programming was never about how fast you could type. the person who wrote this knows nothing about the job.

    • anon_8675309@lemmy.world · +6 · 17 hours ago

      And yet somehow the tech blogs and such always scream about developer productivity. Go faster. Go faster.

      From what I’ve seen over the years, only mids care about finishing fast.

      • CanadaPlus · +1 · edited · 34 minutes ago

        Yes, but quality takes actual skill to measure, instead of just a diff.

        (Although I guess lines are still better than time in office)

    • corsicanguppy@lemmy.ca · +1 · 16 hours ago

      The guy who wrote this is an idiot, but he became so in a world where “LoC” is a metric – one that Goodhart would love, but alas.

      This is honestly the road to hell and the ~good intentions in one.

  • AdamBomb · +47 · 1 day ago

    natural language is the new programming language

    lol. Lmao.

      • CanadaPlus · +1 · edited · 20 minutes ago

        And even this improvement wasn’t universally appreciated: some people found error messages they couldn’t ignore more annoying than wrong results, and, when judging the relative merits of programming languages, some still seem to equate “the ease of programming” with the ease of making undetected mistakes.

        This guy was writing in the year the x86 was first introduced, and I still feel like I see this attitude around.

        (He manages to shoehorn in a “kids these days” paragraph too, though)

      • T156@lemmy.world · +4 · 19 hours ago

        All he made was some dinky algorithm. Google Bard could do that in three minutes flat smh.

      • See, Dijkstra was talking about people trying to create programs in natural language. He didn’t say not to use your natural language to hire someone else to make a formal program. This is people using natural language to hire an LLM to make a formal program, and asking LLMs is like asking people, so it’s Dijkstra-approved. smuglord

  • unmagical@lemmy.ml · +60 · 2 days ago

    Spot security vulnerabilities instantly from a candidate that can’t actually write code.

  • saltnotsugar@lemmy.world · +56 · 2 days ago

    I need to hire someone to take these functional 15 lines of code and, like, make it 200 lines of unusable madness.

    • MangoCats@feddit.it · +9 · 1 day ago

      Oh, man, I don’t know how much is Claude’s fault and how much is just the way the world has moved, but I coded a hobby project in C a bit over 20 years ago, brought in one library to render the graphics as .jpg files, and the whole thing was like 300 lines of code.

      Claude “modernized” it for me, and yeah, it shows in a browser as a PWA and it’s working correctly (this time, via Opus 4.6 - the first time I tried, with Sonnet 4.0, it couldn’t even make it work correctly) - but daaaaammn, there are like 454 files in deps and 1.4GB in the Rust target folder - maybe it’s just a Rust thing?

      • ferric_carcinization@lemmy.ml · +4 · 20 hours ago

        Rust and cargo do more than just compile. For example, cargo basically has a built-in ccache.

        It is also easier to split large libraries into multiple crates, though an average project still uses more libraries than an equivalent C project. I wouldn’t be surprised if the “AI” also pulled in more libraries than needed, or left unnecessary library features enabled. I’m pretty sure a cargo plugin for pruning unused dependencies was highlighted on the Rust blog as a featured third-party plugin in a cargo release announcement.
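
        As a hedged sketch (assuming a cargo project; cargo-machete and cargo-udeps are guesses at which pruning plugin the Rust blog featured), stock cargo can already show where the dependency bloat comes from, and a third-party plugin handles the pruning:

        ```shell
        # Inspect dependency bloat with stock cargo:
        cargo tree --duplicates      # crates pulled in at more than one version
        cargo tree --edges features  # which optional features each dependency enables

        # Third-party pruning plugin (the name is an assumption, not from the comment above):
        cargo install cargo-machete
        cargo machete                # flag dependencies never referenced in the code
        ```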

        • MangoCats@feddit.it · +2 · 12 hours ago

          In C++ land, I lived in Qt for 20 years. It did… most things, so if you “just” imported Qt (or Boost, or the massive API environment of your choice) you could usually do most things “just” importing one or two additional external libraries. I frequently would split a system into “micro-ish-services,” with each service importing one or a few of these novel external libraries - partly to isolate them, so unexpected interference at least wasn’t coming from within the process, and partly as damage control: in case one behaved badly, it could be excised at runtime without taking down the larger system.

          Rust feels even more like a case for cooperating microservices, but it does seem to bulk them up fast - faster than Qt, and that’s saying something.

  • Ŝan • 𐑖ƨɤ@piefed.zip · +32/−6 · 2 days ago

    We did it to ourselves. Developing mission-critical systems in scripting languages and always sacrificing quality for delivery. Fast and sloppy paid þe bills, but we were digging our own graves. Once industry became used to sloppy software, a relatively mild shift to even crappier, but far cheaper and more immediate software was a no-brainer. Customers have gotten used to shitty, buggy software. It doesn’t matter to þem who’s writing it.

    • Avid Amoeba@lemmy.ca · +23/−1 · 2 days ago

      The only way for us to not “do this to ourselves” is to form unions. Otherwise we aren’t driving the decisions on what is used and what’s prioritized at all.

      • MangoCats@feddit.it · +6 · 1 day ago

        Safety critical (aerospace, medical, precious few other) industries have regulated quality, with moderate success. It’s far from perfect, farther from ideal, but it is providing some additional resource and schedule allocation to do the things that need doing to ensure the systems don’t screw up too badly, too often.

        • Avid Amoeba@lemmy.ca · +5 · 1 day ago

          Am in automotive and there’s definitely some of that. Much more so than in other industries I’ve worked. With that said, it’s a losing battle against the value proposition of AI. We’re getting AI use mandated on us.

          • MangoCats@feddit.it · +4 · 1 day ago

            I’m in one of those others I mentioned (and I try not to reference my company online because of… reasons), and we’re getting strongly encouraged to “integrate AI in our daily workflows, where it makes sense” - not just coding, but coding is an obvious target. As a business we tend to change slowly, so this will be… interesting.

            • Avid Amoeba@lemmy.ca · +4 · 1 day ago

              Sounds almost like we work for the same company. 😂 Perhaps they all lifted this statement from the same consultancy contractor.

    • MeetMeAtTheMovies [they/them]@hexbear.net · +7 · 1 day ago

      I wrote an app for my wife, and it was really sad watching her just fumble past bugs instead of pointing them out when I was literally watching over her shoulder to get feedback on what needed fixing. I had to tell her several times, “No, don’t just keep reloading. What’s wrong?” We’ve all been trained so hard to accept shitty software that even when I could fix stuff easily, I know people are just passively accepting the bugs.

      • MangoCats@feddit.it · +4 · 1 day ago

        One of my junior devs was having trouble with a bug in an internally developed tool, apparently for weeks, before I saw her struggling with it over her shoulder. It was a 5-minute fix, and I hope I made it clear to her: speak up when something’s wrong - this 5-minute fix had already cost you many hours because you never told me you were having a problem.

    • FishFace@piefed.social · +5/−4 · 1 day ago

      Developing mission-critical systems in scripting languages

      This is a wild take. If you’d come up in the 80s you’d be complaining about using C instead of hand-writing assembly.

      • MangoCats@feddit.it · +2 · 1 day ago

        In the 80s, the hand-written assembly was more reliable and performant than the C, at least with many of the compilers.

        Even in 1990, I tried to launch a serious project in C++ on the IBM PC, and the best available compiler was too buggy to use. It did fine for little demo apps, but by the time you had written code for 2 weeks, you started hitting bugs - not in your code, but in the compiler output… We had to fall back to C for the project. Even later, around 1994, we had two C compilers for 6811 work, and one of them was garbage - I could hand-write the assembly better and faster without even trying hard. The other one was pretty good, and by the late 1990s I stopped looking at C/C++ compilers’ assembly output because it was consistently better than I would write by hand.

        • FishFace@piefed.social · +1 · 1 day ago

          There were already plenty of reliable compilers, at least for the main architectures in use. Replace C with Fortran if you prefer - complaining about Python in mission-critical software is a brain-dead take that belongs in the bin of history.

    • I_am_10_squirrels@beehaw.org · +5 · edited · 2 days ago

      Me: I want SoaD!

      Mom: we have SoaD at home

      At home: SotA, featuring such hits as

      Sorta poisonous

      lo mein

      Let someone else bring the bombs

    • Petter1@discuss.tchncs.de · +4 · 2 days ago

      But they use “curser” and “cloud” (probably meaning Claude, as it’s used in Cursor Pro)

      Isn’t Claude Code considered SOTA for vibe coding right now?

      And I understood it as: you can choose which fancy tool you use. The vibe manager who generated this probably just told their LLM to use SOTA AI coding tools in the prompt for this job description.

      • MeetMeAtTheMovies [they/them]@hexbear.net · +3 · 1 day ago

        The SOTA changes every couple weeks, but Claude’s been very dominant for a while, yeah. There’s currently a lot of hype around GPT-5.4, but even then there’s a caveat that Claude is still better at UI.

        I just personally find Cursor to be pretty buggy. But I think the Replit mention is more of a tell that someone vibe codes but doesn’t actually code. It’s been advertised to people as a way to build end-to-end apps without any coding experience. And to be fair, they’ve done a good job of building on the past decade of work in the TypeScript community to make an entire app end-to-end type safe and therefore checkable by the compiler. Convex has done something similar in a way that I prefer, and in my experience LLMs are very good at working in Convex projects as well.

        Really at the end of the day I was just being pithy. Kind of poking fun at how much of a moving target SOTA is.
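
        The “end-to-end type safe and therefore checkable by the compiler” idea above can be sketched in plain TypeScript. This is an illustrative sketch, not Replit’s or Convex’s actual API - names like `TodoApi` and `renderTodos` are made up. The point is that when client and server share one type, an API change the client doesn’t handle becomes a compile error instead of a runtime bug:

        ```typescript
        // Hedged sketch of end-to-end type safety; all names are illustrative.
        type Todo = { id: number; text: string; done: boolean };

        // One shared contract that both the "server" and the "client" reference.
        type TodoApi = { listTodos: () => Todo[] };

        const server: TodoApi = {
          listTodos: () => [{ id: 1, text: "ship it", done: false }],
        };

        // The client is written against the same TodoApi type, so if the server
        // renamed `text` to `title`, this function would fail to compile.
        function renderTodos(api: TodoApi): string[] {
          return api.listTodos().map((t) => `${t.done ? "x" : " "} ${t.text}`);
        }

        console.log(renderTodos(server));
        ```

        Frameworks like Convex generate this kind of shared typing from the backend functions themselves, which is what extends the guarantee across the network boundary.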