Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’

Currently, Nvidia dominates the market for AI chips, with over 80% market share, according to some estimates.

  • @GenderNeutralBro
    11 months ago

    I’m not worried about that. There will be open competition, because most of this stuff is open-source. Cheaper hardware will open the door for anyone like you or me to set up our own services. Anyone can set up a server with their own hardware (or rent it from Amazon or wherever) and run their own chatbot (with blackjack! and hookers!) instead of using ChatGPT.

    This is already possible on consumer hardware, just not with the biggest and best networks. Right now, if I wanted to run, say, BLOOM (an open-source LLM), I’d need to spend close to $100K on hardware. Obviously, that’s out of reach for a hobbyist, so I’m limited to using smaller, less advanced networks like LLaMA or GPT-J. Cheaper hardware will help break the hold that the big players currently have over the industry.
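    For a sense of what that looks like in practice, here’s a minimal sketch of running GPT-J locally, assuming the Hugging Face transformers and accelerate libraries and a GPU with roughly 16 GB of VRAM; the model name and generation settings are illustrative, not a recommendation:

    ```python
    # Minimal local-inference sketch (assumes transformers + accelerate
    # are installed and a CUDA GPU with ~16 GB VRAM is available).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "EleutherAI/gpt-j-6b"  # ~6B parameters, far smaller than BLOOM-176B

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,  # fp16 halves memory vs fp32
        device_map="auto",          # let accelerate place layers on GPU/CPU
    )

    prompt = "The main reason to self-host a language model is"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```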

    • @abhibeckert@lemmy.world
      11 months ago

      if I wanted to run, say, BLOOM (an open-source LLM), I’d need to spend close to $100K on hardware

      Doesn’t that require dozens of nodes with over a terabyte of RAM each? And state-of-the-art networking?

      Sounds closer to $100M than $100K.

      • @GenderNeutralBro
        11 months ago

        If you want to train your own network like they did, you’d want something like that, yeah, but to run the trained network you “only” need ~360GB of memory.

        For context, even if you wanted to run this on CPU, there are currently no AM5 mobos (Ryzen 7000 series) that support more than 192GB of memory. You literally can’t even run it on high-end consumer hardware.
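        That ~360GB figure lines up with BLOOM’s published size: 176 billion parameters at 2 bytes each in fp16. A quick back-of-the-envelope check (the overhead note is an approximation, not a measurement):

        ```python
        # Rough memory estimate for running BLOOM-176B in fp16.
        params = 176e9        # published parameter count
        bytes_per_param = 2   # fp16 = 2 bytes per weight
        weights_gb = params * bytes_per_param / 1e9
        print(f"{weights_gb:.0f} GB just for the weights")  # -> 352 GB
        # Activations and KV cache add more on top, hence "~360GB" --
        # roughly double the 192GB ceiling of current AM5 boards.
        ```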