ChatGPT has a style over substance trick that seems to dupe people into thinking it’s smart, researchers found

Developers often prefer ChatGPT’s responses about code to those submitted by humans, despite the bot frequently being wrong, researchers found.

  • sj_zero
    11 • 11 months ago

    Anyone who has actually needed a correct answer to a question realized this a long time ago.

    The problem is that most people don’t bother checking the answers.

    • @GenderNeutralBro
      5 • 11 months ago

      If you need a correct answer, you’re doing it wrong!

      I’m joking of course, but there’s a seed of truth: I’ve found ChatGPT’s wrong or incomplete answers to be incredibly helpful as a starting point. Sometimes it will suggest a Python module I didn’t even know about that does half my work for me. Or sometimes it has a lot of nonsense but the one line I actually need is correct (or close enough for me to understand).

      Nobody should be copying code off Stack Overflow without understanding it, either.

      • sj_zero
        2 • 11 months ago

        I won’t pretend otherwise: it can be very useful, as long as you know what it is and accept that very often it will make stuff up out of whole cloth.

        I was trying to figure out how to make a change to a Rust program that I use (lotide, which I’m posting from, actually), but I don’t know anything about Rust. So it ended up leading me down a rabbit hole with a library that just didn’t exist, and all kinds of routines that didn’t exist, but ultimately I did get there. Ended up using regex instead.

        • @GenderNeutralBro
          1 • 11 months ago

          So it ended up leading me down the rabbit hole with a library that just didn’t exist

          LOL damn. I haven’t had that experience myself, but that’s probably because it has more training data on Python than Rust.

          I think the future of AI will be A) more specialized training, and B) more “dumb” components to keep it on track.

    • @sumofchemicals@lemmy.world
      0 • 11 months ago

      This hasn’t been my experience. Yes, ChatGPT gets stuff wrong, and fairly regularly. But I can ask it my question directly, and can include sample code, and I get an answer immediately. Anyone going on Stack Overflow has to either google around and sift through answers for relevance, or has to post the question and wait for someone to respond.

      With either ChatGPT or Stack Overflow you have to check the answer to make sure it works - that’s how coding goes. But with one I know if it works or not pretty much immediately, with a fairly low investment of time and effort. And if it doesn’t, I just rephrase the question, or literally say “that doesn’t seem to work, now I’m getting this error: $error”

      • sj_zero
        0 • 11 months ago

        When it gets stuff wrong, though, it doesn’t just get stuff wrong, it gets stuff completely made up. I’ve seen it create entire APIs, I’ve seen it generate legal citations out of whole cloth and entire laws that don’t exist. I’ve seen it very confidently tell me to write a command that clearly doesn’t work - and if it did, then I wouldn’t have been asking the question.

        But I don’t think the alternative to ChatGPT would even be Stack Overflow; it would be an expert. Given the choice between the two, you would definitely want an expert every time.

        • @sumofchemicals@lemmy.world
          1 • 11 months ago

          You’re right that it completely fabricates stuff. And even with that reality, it improves my productivity, because I can take multiple swings and still be faster than googling. (And sometimes I might just not find an answer by googling at all.)

          Of course you’ve got to know that’s how the tool works, and some people are hyping it and acting like it’s useful in all situations. And there are scenarios where I don’t know enough about the subject to begin with to ask the right question or realize how incorrect the answer it’s giving is.

          I only commented because you said you can’t get the correct answer and that people don’t check the answer, both of which I know, from my own and my friends’ actual usage, not to be the case.

      • @the_medium_kahuna@lemmy.world
        6 • 11 months ago

        But the fact is that you need to check every time to be sure it isn’t the rare inaccuracy. Even if it could cite sources, how would you know it was interpreting the source’s statements accurately?

        imo, it’s useful for outlining and getting ideas flowing, but beyond that high level the utility falls off pretty quickly

        • DreamButt
          3 • 11 months ago

          Ya, it’s great for exploring options. Anything that’s purely textual is good enough to give you a general idea. And more often than not it will catch a mistake in its explanation if you ask for a clarification. But actual code? Nah, it’s about 50/50 whether it gets it right the first time, and even then the style is never to my liking