I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work counts as copyright infringement, but consider what that would actually accomplish: it would keep AI out of the hands of regular people and place it squarely in the hands of people and organizations wealthy and powerful enough to train it for their own use.

If this isn’t actually what you want, then what’s your game plan for placing copyright restrictions on AI training that will actually work? Have you considered how it’s likely to play out? Are you going to be able to stop Elon Musk, Mark Zuckerberg, and the NSA from training an AI on whatever they want and using it to push propaganda on the public? As far as I can tell, all that copyright restrictions will accomplish is to concentrate the power of AI (which we’re only beginning to explore) in the hands of the sorts of people who are the least likely to want to do anything good with it.

I know I’m posting this in a hostile space, and I’m sure a lot of people here disagree with my opinion on how copyright should (and should not) apply to AI training. That’s fine; the jury is literally still out on that. What I’m interested in is your end game. How do you expect things to actually work out if you get the laws you want? I would personally argue that an outcome where Mark Zuckerberg gets AI and the rest of us don’t is the absolute worst possibility.

  • @SirGolan

    I’ve been saying the same thing. If IP holders successfully sue companies that release or allow public access to AI models trained on their IP, those companies will simply stop releasing their models and keep them to themselves. That’s one path to the dystopian outcome of AI.

    Really, what I think is happening here is that some lawyers who haven’t thought about the end result, or who don’t care as long as they get theirs, are soliciting IP holders to sue, because no matter who wins, the lawyers get paid.

    What I think might be a reasonable solution is to not treat training an AI on copyrighted material as copyright infringement in itself. Instead, if the AI produces a clearly derivative work and a person uses it in a way that would infringe the copyright, go after that person.