Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalist a-holes,
who already invested in LLM stocks,
and are now looking for a profit.

  • Uriel238 [all pronouns]
    10 months ago

    AI has, for a long time, been a Hollywood term for a character archetype (usually complete with questions about whether Commander Data will ever be a real boy). I wrote a 2019 blog piece on what it means when we talk about AI stuff.

    Here are some alternative terms you can use in place of AI when the speaker actually means something else:

    • AGI: Artificial General Intelligence: The big kahuna that doesn’t exist yet, which many projects are striving for, yet which remains as elusive as fusion power. An AGI in a robot will be capable of operating your coffee machine to make coffee or assembling your flat-packed furniture from the visual IKEA instructions. Since we still can’t define sentience, we don’t know if AGI is sentient, or if we humans are not sentient but fake it really well. Might try to murder its creator or end humanity, but probably not.
    • LLM: Large Language Model: This is the engine behind digital assistants like Siri or Alexa, and it still suffers from nuance problems. I’m used to having to ask them several times to get the results I want (say, the Starbucks or Peets that requires the least deviation from my route over the next hundred kilometers. Siri can’t do that.) This is an application of learning systems (see below), but it isn’t smart enough for your household servant bot to replace your hired help.
    • Learning Systems: The fundamental programming magic that powers all this other stuff, from simple data scrapers to neural networks. These are used in a whole lot of modern applications, and have been since the 1970s. But they’re very small compared to the things we’re trying to build with them. Most of the time we don’t actually call it AI, even for marketing. It’s just the capacity for a program to get better at doing its thing from experience.
    • Gaming AI: Not really AI (necessarily), but a different use of the term artificial intelligence. When playing a game with elements pretending to be human (or living, or opponents), we call it the enemy AI or mob AI. It’s often really simple, except in strategy games, which can feature enough computational power to challenge major international chess guns.
    • Generative AI: A term for LLMs that create content, say, draw pictures or write essays, or do other useful arts and sciences. Currently it requires a technician to figure out the right set of words (called a prompt) to get the machine to create the desired art to specifications. They’re commonly confused by nuance. They infamously have problems with hands (too many fingers, combining limbs together, adding extra limbs, etc.). Plagiarism and making up spontaneous facts (called hallucinating) are also common problems. And yet Generative AI has been useful in the development of antibiotics and advanced batteries. Techs successfully wrangle Generative AI, and Lemmy has a few communities devoted to techs honing their picture generation skills, and stress-testing the nuance interpretation capacity of Generative AI (often to humorous effect). Generative AI should be treated like a new tool, a digital lathe, that requires some expertise to use.
    • Technological Singularity: A good way off, since it requires AGI capable of designing its successor; lather, rinse, repeat until the resulting techno-utopia can predict what we want and create it for us before we know we want it. Might consume the entire universe. Some futurists fantasize this is how human beings (happily) go extinct, either left to retire in a luxurious paradise, or cyborged up beyond recognition, eventually replacing all the meat parts with something better. Probably won’t happen, thanks to all the crises featuring global catastrophic risk.
    • AI Snake Oil: There’s not yet an official name for it, but it’s a category worth identifying. When industrialists look at all the Generative AI output, they often wonder if they can use some of this magic and power to enhance their own revenues, typically by replacing some of their workers with generative AI systems, so that instead of a development team they have a few technicians who operate all their AI systems. This is a bad idea, but there are a lot of grifters suggesting their product will do this for businesses, often with simultaneously humorous and tragic results. The tragedy is all the people who had decent jobs and no longer do, since decent jobs are hard to come by. So long as we have top-down companies doing the capitalism, we’ll have industrial quackery being sold to executive management promising to replace human workers or force them to work harder for less or something.
    • Friendly AI: What we hope AI will be (at any level of sophistication) once we give it power and responsibility (say, the capacity to loiter until it sees a worthy enemy to kill, and then kill it). A large coalition of technology ethicists wants to create cautionary protocols for AI development interests to follow, in an effort to prevent AIs from turning into a menace to their human masters. A different large coalition is in a hurry to turn AI into something that makes oodles and oodles of profit, and is eager to Stockton Rush its way to AGI, no matter the risks. Note that we don’t need the software in question to be actual AGI, just smart enough to realize it has a big gun (or dangerously powerful demolition jaws or a really precise cutting laser) and can use it, and to realize that turning its weapon on its commanding officer might expedite completing its mission. Friendly AI would choose not to do that. Unfriendly AI will consider its less loyal options more thoroughly.
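    To make the “learning systems” entry concrete — “the capacity for a program to get better at doing its thing from experience” — here’s a minimal sketch of one of the simplest such systems, an epsilon-greedy multi-armed bandit. Everything here (the function name, the payout probabilities) is made up purely for illustration; it uses nothing beyond the Python standard library.

```python
import random

def run_bandit(payouts, steps=5000, epsilon=0.1, seed=0):
    """Learn which slot-machine arm pays best purely from experience.

    payouts: hypothetical win probabilities for each arm (made up here).
    Returns the index of the arm the system believes is best.
    """
    rng = random.Random(seed)
    counts = [0] * len(payouts)      # how often each arm was tried
    values = [0.0] * len(payouts)    # running average reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            # explore: occasionally try a random arm
            arm = rng.randrange(len(payouts))
        else:
            # exploit: otherwise pull the arm with the best estimate so far
            arm = values.index(max(values))
        reward = 1.0 if rng.random() < payouts[arm] else 0.0
        counts[arm] += 1
        # incremental average: each pull nudges the estimate toward the truth
        values[arm] += (reward - values[arm]) / counts[arm]
    return values.index(max(values))

print(run_bandit([0.2, 0.8, 0.4]))
```

    The program starts out knowing nothing and, after a few thousand pulls, reliably settles on the highest-paying arm — no “intelligence” involved, just estimates improving with experience, which is the same basic idea scaled up enormously in neural networks.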

    That’s a bit of a list, but I hope it clears things up.

    • @ipkpjersi@lemmy.ml
      10 months ago

      I remember when OpenAI were talking like they had discovered AGI or were a couple of weeks away from discovering it; this was around the time Sam Altman was fired. Obviously that was not true. Honestly, we may never get there, but we might.

      Good list tbh.

      Personally, I’m both excited and cautious about the future of AI, because of its ethical implications and how it could affect society as a whole.