cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors."

  • @SirGolan
    link
    0 • 11 months ago

    I’ve been making the same or similar arguments you are here in a lot of places. I use LLMs every day for my job, and it’s quite clear that beyond a certain scale, there’s definitely more going on than “fancy autocomplete.”

    I’m not sure what’s up with people hating on AI all of a sudden, but there seem to be quite a few who are confidently giving out incorrect information. I find it most amusing when they’re doing that at the same time as bashing LLMs for also confidently giving out wrong information.

      • @SirGolan
        link
        2 • 11 months ago

        The one I like to give is tool use. I can present the LLM with a problem, give it a number of tools it can use to solve that problem, and it is pretty good at choosing and applying the right ones. Here’s an older writeup that mentions a lot of other emergent abilities: https://www.jasonwei.net/blog/emergence
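        The tool-use pattern described above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's API: the tool names and the JSON call format are hypothetical, standing in for the structured output a real LLM would be prompted to emit (systems like OpenAI function calling work on a similar idea).

```python
import json

# Hypothetical tool registry: name -> (description, callable).
# In a real system, the descriptions are included in the LLM's prompt
# so it knows which tools exist and what they do.
TOOLS = {
    "add": ("Add two numbers.", lambda a, b: a + b),
    "word_count": ("Count the words in a string.", lambda text: len(text.split())),
}

def dispatch(model_output: str):
    """Parse a tool call the model emitted and execute it.

    Assumes the model was instructed to answer with JSON of the form
    {"tool": <name>, "args": {...}} whenever it wants to use a tool.
    The result would normally be fed back to the model for its next turn.
    """
    call = json.loads(model_output)
    _, fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# Simulated model response choosing the "add" tool:
print(dispatch('{"tool": "add", "args": {"a": 2, "b": 3}}'))  # 5
```

        The interesting part, per the comment above, is that beyond a certain scale the model reliably picks a sensible tool and fills in its arguments; the dispatch loop itself is trivial.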

    • FaceDeer
      link
      fedilink
      0 • 11 months ago

      I suspect it’s rooted in defensive reactions. People are worried about their jobs, and after being raised to believe that human thought is special and unique they’re worried that that “specialness” and “uniqueness” might be threatened. So they form very strong opinions that these things are nothing to worry about.

      I’m not really sure what to do other than just keep pointing out what information we do have about this stuff. It works, so in the end it’ll be used regardless of hurt feelings. It would be better if we get ready for that sooner rather than later, though, and denial is going to delay that.

      • @SirGolan
        link
        2 • 11 months ago

        Yeah, I think that’s a big part of it. I also wonder if people are getting tired of the hype and seeing every company advertise AI enabled products (which I can sort of get because a lot of them are just dumb and obvious cash grabs).

        At this point, it’s pretty clear to me that there’s going to be a shift in how the world works over the next 2 to 5 years, and people will have a choice of whether to embrace it or get left behind. I’ve estimated that for some programming tasks, I’m about 7 to 10x faster when using Copilot and ChatGPT4. I don’t see how someone who isn’t using AI could compete with that. And before anyone asks, I don’t think the error rate in the code is any higher.

        • SokathHisEyesOpen
          link
          fedilink
          1 • 11 months ago

          I had some training at work a few weeks ago that stated 80% of all jobs on the planet are going to be changed by AI in the next 10 years. Some of those jobs are already rapidly changing, and others will take some time to spin up the support structures required for AI integration, but the majority of people on the planet are going to be impacted by something that most people don’t even know exists yet. AI is the biggest shake-up to industry in human history. It’s bigger than the wheel, it’s bigger than the production line, it’s bigger than the dot com boom. The world is about to completely change forever, and like you said, pretending that AI is stupid isn’t going to stop those changes, or even slow them. They’re coming. Learn to use AI or get left behind.