• flan [they/them]
    32
    7 months ago

    I don't think this LLM-in-everything trend is going to last very long. It's way too expensive to put in literally every consumer product. I can imagine it finding some success in B2B applications, but who is going to pay Logitech to pay OpenAI $30 per million tokens? (Lambda, for comparison, is $0.20 per 1M requests at the public rate.)
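    To put rough numbers on that gap: a back-of-envelope sketch, assuming ~1,000 tokens per interaction and 20 interactions a day (my guesses, not anyone's real telemetry; only the two quoted rates come from above):

    ```python
    # Rough per-user monthly cost of an "AI mouse" backend vs. plain API calls.
    # Assumptions (mine): ~1,000 tokens per interaction, 20 interactions/day, 30 days/month.
    TOKENS_PER_INTERACTION = 1_000
    INTERACTIONS_PER_MONTH = 20 * 30
    LLM_RATE = 30.00 / 1_000_000      # $ per token at $30 per 1M tokens
    LAMBDA_RATE = 0.20 / 1_000_000    # $ per request at $0.20 per 1M requests

    llm_cost = TOKENS_PER_INTERACTION * INTERACTIONS_PER_MONTH * LLM_RATE
    lambda_cost = INTERACTIONS_PER_MONTH * LAMBDA_RATE

    print(f"LLM backend:     ${llm_cost:.2f} per user per month")     # ~$18.00
    print(f"Plain API calls: ${lambda_cost:.6f} per user per month")  # ~$0.000120
    ```

    Even at a tenth of that usage, it's dollars per user every month, forever, versus effectively nothing.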

    • FourteenEyes [he/him]
      23
      7 months ago

      There will be another massive financial recession when it finally dawns on them that this shit was never gonna make any fucking money for anyone.

      • Owl [he/him]
        28
        7 months ago

        The crypto bubble lasted a long time, and unlike crypto, AI actually does something (not anything useful, or terribly well, but something), so I expect this bubble will last a while yet.

        • hexaflexagonbear [he/him]
          25
          7 months ago

          Throwing unlimited money and resources at the “make customer support chat bots 3% better” technology while the world burns.

    • @Monument
      1
      7 months ago

      I disagree. I think what will happen is that these companies won't use "AI" hosted in the cloud, but will instead ship some minimally functional model to users that runs on their GPU (and later their NPU, as those become common), and engage in screen recording and data collection about you and everything the mouse clicks on.
      Disabling AI/data collection will disable any mouse technology or feature implemented after 1999, because AI or something.

      At this point, I think AI stands for “absolute intrusion” when it comes to consumer products.

      • flan [they/them]
        2
        7 months ago

        I don't really see why they need AI for that, but yes, I imagine companies will want to deploy AI on user equipment. Those on-device models aren't going to be nearly as sophisticated or useful as what can run in the cloud, though.

        • @Monument
          1
          7 months ago

          That’s sort of the point. It’s not really that the AI is useful, it’s that it’s the next big unregulated and misunderstood thing.

          Companies are using the idea of “training models” to harvest user data well beyond any reasonable scope, so they can sell it.
          The breadth of information that's being openly collected under the guise of 'AI' would have been unconscionable 10 years ago, and even 5 years ago folks would have been freaked out. Now businesses pretend it's just a run-of-the-mill requirement to make their software work.

          Case in point of how commodified our data is: Kaiser Permanente intentionally embedded tracking software in their site and now has to classify the collected data as a user data breach. These trackers are likely from Google, Facebook, Adobe, Microsoft, or Salesforce, and they share the collected data, which can easily be de-anonymized, with their advertising partners, who share it with their partners, until it winds up in the database of a data broker. This has been a known issue for a while: Some Hospital Websites May Be Violating Privacy Rules By Sharing Data With Third-Party Trackers.

          Anyway, sorry. Soapbox. I’ll put it away.