ChatGPT has a style-over-substance trick that seems to dupe people into thinking it’s smart, researchers found

Developers often prefer ChatGPT’s responses about code to those submitted by humans, despite the bot frequently being wrong, researchers found.

  • @GenderNeutralBro
5 points • 11 months ago

    If you need a correct answer, you’re doing it wrong!

    I’m joking of course, but there’s a seed of truth: I’ve found ChatGPT’s wrong or incomplete answers to be incredibly helpful as a starting point. Sometimes it will suggest a Python module I didn’t even know about that does half my work for me. Or sometimes it has a lot of nonsense but the one line I actually need is correct (or close enough for me to understand).

    Nobody should be copying code off Stack Overflow without understanding it, either.

    • sj_zero
2 points • 11 months ago

I won’t pretend otherwise: it can be very useful, as long as you know what it is and accept that very often it will make stuff up out of whole cloth.

I was trying to figure out how to make a change to a Rust program that I use (lotide, which I’m posting from, actually), but I don’t know anything about Rust. It ended up leading me down a rabbit hole with a library that just didn’t exist, and all kinds of routines that didn’t exist, but ultimately I did get there. Ended up using regex instead.

      • @GenderNeutralBro
1 point • 11 months ago

> It ended up leading me down a rabbit hole with a library that just didn’t exist

LOL, damn. I haven’t had that experience myself, but that’s probably because there’s more training data on Python than Rust.

        I think the future of AI will be A) more specialized training, and B) more “dumb” components to keep it on track.