Researchers Alex Hanna and Emily M. Bender call on businesses not to succumb to this artificial “intelligence” hype.

  • @DeltaTangoLima@reddrefuge.com · 0 · 11 months ago

    It’s “artificial intelligence” - not “artificially intelligent”. The biggest problem is that people equate one with the other.

    Generally speaking, intelligence refers to the ability to think, reason, apply logic, etc, whereas describing someone as intelligent generally refers to their “smartness” - how high their intelligence or mental capacity is.

    They are not the same thing. Expecting AI to be intelligent is just plain wrong, especially generative AI.

    • RagnellOP · 0 · 11 months ago

      @DeltaTangoLima No, the problem is some marketing guy has redefined artificial intelligence to apply to a machine that has no reasoning ability.

      • morry040 · 1 · 11 months ago

        Can confirm. My company issued a press release out of the blue last month explaining how we were deploying AI into our products. None of us knew anything about it, and we’re assuming that Marketing has just relabelled things like A/B testing, chatbots, and some ChatGPT plug-ins as proof that our company has somehow built an AI R&D lab.

        PS. It’s also amusing to hear the execs talk about it on their earnings calls. The investors are just eating it up without question.

  • @zoe@lemm.ee · 0 · 11 months ago

    There is no need for it to be intelligent, as long as it gets the job done. Hurting consumers and workers is another topic.

    • RagnellOP · 1 · 11 months ago

      @zoe I think that’s the most important topic. I mean, it would be one thing if this were an experimental thing only being used as a way to explore programming options, but there’s guys out there pushing for this to replace all sorts of writing and communication. They’re selling Lifelike Lie Machines as writers. That is going to hurt people.

      • fear · 0 · 11 months ago

        > but there’s guys out there

        Guys, you say? Don’t worry, it’s just major publications like National Geographic firing off all their writers, tech companies downsizing in the hopes that ChatGPT will code for free, a plague of nonsensical AI written books flooding the market under legitimate authors’ names, and all of Hollywood hoping their writers will work for pennies out of desperation for food and shelter.

        It doesn’t matter that ChatGPT constantly gives wrong answers and has about as much personality as a bran muffin. With the dawn of AI, us humans don’t need humans anymore.

        • Parallax · 0 · 11 months ago

          Honestly, as long as they attribute the text to “ChatGPT” or similar, that’d be fine with me. I may or may not read it, but at least be transparent. And please at least deal with the repercussions of firing your staff over the hype.

          • fear · 1 · 11 months ago

            Most people seem to be trying to pass off AI creations as their own, judging by the exponential flood of AI trash websites, videos, books, news, etc. I’ve encountered people delusional enough to believe it really is their own artwork because they supplied the text prompt to Stable Diffusion. There’s a long way to go before we see transparency.

            I wouldn’t even know how to begin holding people to transparency here. It’s nice that it tends to be obvious when something is AI generated, but I’m sure the clock is ticking to the day no one can tell the difference.