Generative artificial intelligence (GenAI) company Anthropic has claimed to a US court that using copyrighted content in large language model (LLM) training data counts as “fair use”.

Under US law, “fair use” permits the limited use of copyrighted material without permission, for purposes such as criticism, news reporting, teaching, and research.

In October 2023, a host of music publishers including Concord, Universal Music Group and ABKCO initiated legal action against the Amazon- and Google-backed generative AI firm Anthropic, demanding potentially millions in damages for the allegedly “systematic and widespread infringement of their copyrighted song lyrics”.

  • @Drewelite@lemmynsfw.com
    5 months ago

    A.I. exists. It will continue to get better. If letting people use it becomes illegal, they’ll just use it themselves and cut us out. A world where the general population have access to A.I. is the only one where we’re not totally fucked. I’m not simping for Google or Facebook, I’d much prefer an open source self hostable version. The only way we can stay competitive is if these companies continue to develop these in the open for the consumer market.

    General purpose artificial intelligence will exist. Full stop. Intelligence is the most valuable resource in the universe. You’re not going to stop it from existing, you’re just going to stop them from sharing it with you.

    • @megopie@beehaw.org
      5 months ago

      What they have is miles from artificial general intelligence; it is not AI in even a limited sense. It is AI in the same way a mob in a video game is AI.

      Their claims to be approaching it are marketing fluff at best, and abject lies at worst.

      • @Drewelite@lemmynsfw.com
        5 months ago

        I think if we sit here and debate the nuances of what is or is not intelligence, we will look back on this conversation and laugh at how pedantic it was. Movies have taught us that A.I. is hyper-intelligent, conscious, has its own objectives, is self-aware, etc… But corporations don’t care about that. In fact, to a corporation, I’m sure the most annoying thing about intelligence right now is that it comes packaged with its own free will.

        People laugh at what is being called A.I. because it’s confidently wrong and “just complicated auto-complete”. But ask your coworkers some questions. I bet it won’t be long before they’re confidently wrong about something and when they’re right, it’ll probably be them parroting something they learned. Most people’s jobs are things like: organize these items on those shelves, mix these ingredients and put it in a cup, get all these numbers from this website and put them in a spreadsheet, write a press release summarizing these sources.

        Corporations already have the A.I. they need. Gatekeeping intelligence is just your ego protecting you from the truth: you, or someone dear to you, are already replaceable.

        I think we both know that A.I. is possible, I’m saying it’s inevitable, and likely already at version 1. I’m sure any version of it would require access to training data. So the ruling here would translate. The only chance the general population has of keeping up with corporations in the ability to generate economic value, is to keep the production of A.I. in the public space.