• @Narann@lemmy.world · 20 points · 1 year ago

    I suspect this will be more and more common in the future.

    I wonder if this can really be stopped.

    I think being transparent about what is AI generated or not is very important, so we can choose to support “original content” creators.

    Maybe AI will bring “original creators” more recognition.

    • @lloram239@feddit.de · 7 points · edited · 1 year ago

      > I wonder if this can really be stopped.

      We already have Generative Fill in Photoshop. This “fight” was over before it even started, AI will be everywhere going forward. And that’s fine.

      The issue is more how it will be used. It’s very easy to create lackluster art with AI, and it can be insanely frustrating trying to get specific and consistent art out of it. But at the same time, AI is way better than human artists at a lot of things: textures, lighting, etc. AI nails those every time and human artists don’t. People often make the mistake of comparing the best of the best of human artists with some average AI image, when the reality of most human art production isn’t exactly the best of the best. Most human art is extremely lackluster; even mainstream $100 million Hollywood movies have posters and box art full of anatomy mistakes and bad copy&paste.

      If AI art is properly directed by a human, you can do great stuff with it and create much better work than the human could alone. But that requires quite a bit more effort than just typing some text into a prompt and picking the first image.

      As always with art: Process doesn’t really matter, only the result does.

      • Sandra · 2 points · 1 year ago

        > AI will be everywhere going forward. And that’s fine.

        > The issue is more how it will be used.

        There are two other pretty big problems. One is that there’s a huge climate impact with runaway energy use, and the other is that it’s a very expensive means of production which leads to further concentration of wealth & power.

        • @lloram239@feddit.de · 2 points · 1 year ago

          I don’t think energy use is a serious problem; that just seems to get thrown around because it’s trendy. Does it even matter compared to gaming or crypto? It’s also an easily solved problem: just install more solar. Training the initial model isn’t time critical or dependent on location, so there is a lot of flexibility here that you wouldn’t have in other applications. Meanwhile running the already trained model is very cheap; it’s literally the most efficient way to solve the problem. Try to replicate what StableDiffusion is doing with a 3D renderer and you’d need to burn a heck of a lot more cycles, as well as hire a truckload of artists, which would all use substantially more energy.

          Basically, people are going to use AI when it makes better use of time/money/energy than the competition. Nobody is going to use AI to burn energy just for the fun of it, it has to improve on what we already have.
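
To see why the inference-side cost is small, here is a toy back-of-envelope comparison; every wattage and timing below is an assumed, illustrative figure, not a measurement:

```python
# Toy energy comparison: one locally generated AI image vs. one
# hand-made illustration. All figures are assumed for illustration only.

GPU_WATTS = 300          # assumed power draw of a consumer GPU under load
SECONDS_PER_IMAGE = 10   # assumed time to sample one diffusion image

WORKSTATION_WATTS = 150  # assumed power draw of an artist's workstation
HOURS_PER_PAINTING = 8   # assumed working time for one illustration

wh_per_ai_image = GPU_WATTS * SECONDS_PER_IMAGE / 3600  # watt-hours
wh_per_painting = WORKSTATION_WATTS * HOURS_PER_PAINTING

print(f"AI image:     {wh_per_ai_image:.2f} Wh")
print(f"Illustration: {wh_per_painting:.0f} Wh")
```

Under these made-up numbers the per-image inference cost is three orders of magnitude below the manual workflow; training the model is a separate, much larger, one-off bucket.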

          As for the concentration of power and wealth, that can certainly happen to some degree, but I could also easily see that getting balanced out by the amount of freedom that local models give. Right now I can generate subtitles for video with Whisper, generate voices with tortoise-tts, generate images with StableDiffusion, as well as play around with LLMs on my local machine with OpenSource’ish models. Nobody controls what I do and I am not paying for anything. There are obviously still aspects those models can’t handle: local LLMs aren’t up to GPT-4, but already quite close to ChatGPT for some tasks; StableDiffusion isn’t quite as good as Midjourney for plain txt2img, but state-of-the-art in a lot of other aspects (custom training, ControlNet, LORA, etc.). But for a lot of tasks those models are already “good enough” and they are constantly getting better. Meanwhile ChatGPT and BingChat are so heavily censored that they flat out just don’t work for a lot of tasks, even seemingly simple things like summarizing movies (too much violence). Nobody even talks about DALL-E2 anymore, due to it being surpassed by everything else out there.

          Now centralization can still happen. Google is sitting on more data than everybody, and if they make some multi-modal model that is trained on it all, that could be a very potent offering. But for the time being at least, everything that was released was outclassed by another thing within a few months. Nothing in the AI space so far lasts very long, and the fact that AI models can use other AI models to improve themselves hopefully keeps that going for a while. With the censorship going on I also have a hard time seeing local models disappearing anytime soon, as so far none of the commercial offerings had the balls to just build a model that knows everything.

          • Sandra · 2 points · 1 year ago

            > I don’t think energy use is a serious problem; that just seems to get thrown around because it’s trendy. Does it even matter

            Yes, since it’s a rapidly growing field.

            > compared to gaming or crypto?

            Proof-of-work based tokens are the enemy and not what we should be comparing things to.

            > It’s also an easily solved problem: just install more solar.

            It’s a little trickier than that. Renewable doesn’t mean infinite; we still need to limit consumption to sustainable rates. Also, there is the hardware in the rigs themselves: solvents, wiring, metals, plastic…

            > Training the initial model isn’t time critical or dependent on location, so there is a lot of flexibility here that you wouldn’t have in other applications.

            That’s a good point. It’s less vulnerable to wind or light conditions.

            > Meanwhile running the already trained model is very cheap; it’s literally the most efficient way to solve the problem.

            Yep. I never argued against that part. That’s great, as long as we can hold it together and not make new models every fifteen minutes just to keep up with the Joneses. But there’s also a drawback to the “expensive to train, cheap to run” model: that’s the very thing driving the wealth concentration of big capital like Google.

            > Basically, people are going to use AI when it makes better use of time/money/energy than the competition. Nobody is going to use AI to burn energy just for the fun of it, it has to improve on what we already have.

            That would be a perfect argument if prices fully accounted for environmental externalities, but they don’t. Using energy is cheaper than it “should” be, given the environmental impact of that energy use. The old “if I sell you a can of gas, the price of the forest that got wrecked by that gas isn’t factored in” problem. Even otherwise laissez-faire stalwarts like Hayek acknowledged this.
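
The unpriced-externality point can be made concrete with a toy Pigouvian calculation; the retail price, grid intensity, and social cost of carbon below are all assumed round numbers, not real market data:

```python
# Toy Pigouvian correction: what a kWh "should" cost if the social cost
# of its emissions were priced in. Every number is an assumed round figure.

price_per_kwh = 0.15       # assumed retail electricity price, $/kWh
kg_co2_per_kwh = 0.4       # assumed grid emission intensity, kg CO2/kWh
social_cost_per_kg = 0.19  # assumed social cost of carbon, $/kg (~$190/t)

externality = kg_co2_per_kwh * social_cost_per_kg
corrected_price = price_per_kwh + externality

print(f"Market price:    ${price_per_kwh:.2f}/kWh")
print(f"Unpriced damage: ${externality:.3f}/kWh")
print(f"Corrected price: ${corrected_price:.3f}/kWh")
```

Under these assumptions roughly a third of the true cost never shows up in the market price, which is exactly the gap the argument leans on.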

            > As for the concentration of power and wealth, that can certainly happen to some degree, but I could also easily see that getting balanced out by the amount of freedom that local models give.

            Right; once it does get truly democratized with open source models, we can have a post-scarcity, pay-it-forward future where the step from dream to reality is smaller than ever before.

            We’ve been back and forth on this before. The big-data mainframe era was replaced by the PC. Then things got centralized again in the age of big dialup. Then with broadband everyone could run a server. And then the web 2.0 debacle happened and we got a silo era where people voluntarily started using Google Search and Facebook Messenger and the like, giving big capital ownership of our platforms.

            You seem like you have your head on your shoulders (you’re on feddit, after all), but among the general population there’s a lack of awareness around these power- and wealth-concentration issues.

            > Now centralization can still happen. Google is sitting on more data than everybody, and if they make some multi-modal model that is trained on it all, that could be a very potent offering.

            Yes, and I want a plan for that.

            > Nothing in the AI space so far lasts very long

            Which is why we’re risking runaway energy use and climate impact.

          • Sandra · 1 point · 1 year ago

            If you were right about markets only using energy when it made sense, we wouldn’t have this problem:

            [Image: a graph showing runaway energy use, rapidly increasing since the 19th century, mostly fossils]

            • @lloram239@feddit.de · 2 points · 1 year ago

              World population and our standard of living have improved drastically over those years too, we aren’t burning that additional energy for nothing.

              • Sandra · 1 point · 1 year ago

                Yes, the fossil economy has enabled society as a whole to create temporary wealth; the past has borrowed from the present. It’s going to be a rough comedown.

                We haven’t been, and still aren’t, commensurately accounting for our environmental externalities.

        • @kmkz_ninja@lemmy.world · -4 points · 1 year ago

          > expensive means of production which leads to further concentration of wealth & power.

          That’s only an issue if we continue this brigade of trying to protect artists at everyone’s expense. Getting enough data to make a usable LLM will be impossible for all but the big players.

          • Sandra · 2 points · 1 year ago

            As a writer and painter, I’ve long been opposed to copyright and have been releasing stuff under Creative Commons licenses for over a decade. So don’t misinterpret me as agreeing with the brigade.

            A livelihood for artists is important, but so is a livelihood for everyone, and I’ve been arguing against the flawed “copyright is good for artists” position for decades—we’ve been having this exact same fight against copyright since Napster or even the cassette era. Gates’ infamous “Open Letter to Hobbyists” was in 1976, and that hasn’t changed.

            There’s a lot of starving artists out there, and a lot of rich publishers. It’s difficult getting food, shelter, medicine and other resources to go around, down here on Earth.

            In a world already deprived by such scarcity, we’d be better off without the shackles of artificial scarcity that copyright introduces.

            I say all that as a lead in because I’m just about to absolutely disagree with part of the following:

            > That’s only an issue if we continue this brigade of trying to protect artists at everyone’s expense.

            As I wrote above, I agree with you re the so-called brigade and have done so publicly in the past, too.

            The myth that IP is a good way to sustain artists’ lives economically is part of the same buggy market-capitalist system that has led to the extreme wealth concentration (Google, Microsoft, Amazon) in the first place.

            But what you are replying to, what I wrote, has nothing to do with the pro-copyright stance. I wrote that it’s a very expensive means of production which leads to further concentration of wealth & power.

            > Getting enough data to make a usable LLM will be impossible for all but the big players.

            Yeah, only if LAION gets shut down. LAION is freely available. The data is not the problem. The resources are: hardware, electricity, tensors, e-waste, cooling, etc. And I’m not saying startups and garage operations can’t get their hands on this kind of tech if they can profit from it, as we’ve seen in the proof-of-work “mining” debacle. It’s that, since environmental externalities are under-accounted for, this will lead to climate-wrecking runaway resource use.

            I have a lot of sympathy for the artists on the other side who are protesting this with whatever futile li’l clogs in the cogs they’ve got; not because I think they’re right about who can learn from art, I disagree with them there, but because they’re a canary in the coal mine for how big capital can use automation to replace workers and how that’ll lead to an even bigger wealth gap (which is already at an historical high) and mass unemployment and economic desperation.

            As Amelia Earhart put it in 1935: “Obviously, research regarding technological unemployment is as vital today as further refinement or production of labor-saving and comfort-giving devices.” And we still haven’t figured that out. And automation is eating at artists, writers, programmers, game designers, economists, cooks, doctors, drivers, postal workers, psychologists—no one is safe. We need to figure out a way to distribute tasks and resources differently in a world where there are a heck of a lot fewer tasks and a lot more digital resources (while physical resources like fuel and food and shelter are still limited). Politics is also going to get harder, since money correlates with power, no matter how much we’ve been trying to fight that corruption.

            Markets use prices to distribute resources, and prices are set by supply and demand. That started breaking down in the cassette and floppy disk age, where making the initial recording was very expensive but making copies of it was cheap. Big capital has tried to patch the hole to their advantage, at the expense of the public, by introducing artificial scarcity in the form of an exclusive right to make copies: “copyright”.

            And now it’s getting twisted one more turn, since now the initial work itself is easy to make, but the models, the makers themselves, are wholly owned by big corporations like Microsoft and Google. Capitalism was bad before. It’s going to get cataclysmic now that the workers are wholly owned machines.

            @kmkz_ninja@lemmy.world @boardgames@feddit.de

              • Turun · 1 point · 1 year ago

              For large language models you have a good point. The space is dominated by the closed-source company OpenAI; the open-source AI models don’t come close. This is indeed a worrying development. The current models are simply really, really expensive to run, so hobbyists can’t contribute in a meaningful way.

              But for image generation you basically only have Stable Diffusion and Midjourney. And I’d argue Stable Diffusion is much more widely used, due to the control it gives and the fact that it can easily be run on consumer hardware. Customizing a model is also possible and takes only a few hours on a modern gaming computer.
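
The "few hours on a gaming computer" claim is easy to sanity-check with rough arithmetic; the step count and per-step time below are assumed, typical-looking values for a LoRA/DreamBooth-style fine-tune, not measured benchmarks:

```python
# Rough arithmetic behind "customizing a model takes only a few hours".
# Step count and per-step time are assumed illustrative values.

train_steps = 2000       # assumed number of fine-tuning steps
seconds_per_step = 3.0   # assumed time per step on a consumer GPU

hours = train_steps * seconds_per_step / 3600
print(f"~{hours:.1f} hours of training")
```

Even doubling both assumptions keeps the total within an evening, which is consistent with the comment's "few hours" framing.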

  • @sunbeam60@lemmy.one · 14 points · 1 year ago

    I’m so unbothered by this. It’s sad for illustrators (and I say this as somebody with a daughter who dreamt of becoming a concept artist, and now clearly understands this isn’t going to happen) but time marches on.

    We don’t have typesetters anymore. Cars have (largely) replaced horses.

    I think the best compromise I’ve heard is: AI generated output hasn’t been made by a human so can’t be copyrighted.

  • @FMT99@lemmy.world · 13 points · 1 year ago

    I may be in the minority here when I say I don’t see the problem. AI trained on millions of publicly available images, used to speed up the concept stage of development, seems like fair use to me. Like the developer says, commercial artists have always used other folks’ work to speed up their development; that sounds more problematic to me than drawing inspiration from a huge dataset.

    • @FlowVoid@midwest.social · 7 points · 1 year ago

      “Fair use” has a specific meaning in copyright law. If something replaces the need for something else in the market, it’s almost certainly not fair use. Generative AI replaced the need to hire an original artist.

      • @lloram239@feddit.de · 6 points · 1 year ago

        Copyright is about specific works, not vague ideas or styles. You can’t just claim that work X violated the copyright of work Y because it has some similarities and competes with it in the marketplace. You have to show that work X copied substantial parts of Y. And with AI models that’s going to be difficult, as the average image contributes about a single byte of information to the model. If the model was properly trained, without excessive duplicates, there is no way to get back to the original image from the model (some exceptions do exist, e.g. the Mona Lisa).
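
The "about a single byte per image" figure can be sanity-checked from two order-of-magnitude numbers; both values below are rough public approximations (Stable Diffusion's weights are on the order of 2 GB, and its LAION-derived training set on the order of 2 billion images), not exact counts:

```python
# Sanity check of the "about a single byte per image" claim using
# order-of-magnitude figures. Both inputs are rough approximations.

model_bytes = 2e9       # ~2 GB of model weights, approximate
training_images = 2e9   # ~2 billion training images, approximate

bytes_per_image = model_bytes / training_images
print(f"~{bytes_per_image:.1f} bytes of model capacity per training image")
```

The exact ratio shifts with which checkpoint and dataset subset you count, but it stays in the single-bytes range either way.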

        It’s also worth pointing out that in all these months of discussion, nobody ever managed to show a single image whose copyright the AI model would violate. If AI stole your stuff, it shouldn’t be that hard to find some evidence of that.

        • @FlowVoid@midwest.social · 1 point · edited · 1 year ago

          Copyright can be violated even if your output does not contain a copy.

          For example, if I burn a copy of your Disney DVD, watch it, and then write a review, then I’ve violated copyright. The review doesn’t violate copyright, but the DVD I burned does. Even if I throw away my DVD after publishing my review.

          All the major AIs were trained on images that were downloaded from the web. When you download something from the web, you do not have an unlimited license to do what you want with your download. You may have a right to view it, but not to use it for commercial purposes such as AI training. And if you use that image for AI training without permission, then you’ve violated copyright. Even if you delete the image after you’re done training your AI.

          • @lloram239@feddit.de · 6 points · 1 year ago

            > For example, if I burn a copy of your Disney DVD, watch it, and then write a review, then I’ve violated copyright. The review doesn’t violate copyright, but the DVD I burned does. Even if I throw away my DVD after publishing my review.

            1. No, you haven’t. Making private copies is completely fine under copyright; that was decided back in the VHS days. You might violate the DMCA, but that’s not an issue for AI, as everything the models were trained on was publicly available and unencrypted.

            2. When you download from the Internet, the server makes the copy, not you.

            3. Your review still didn’t violate copyright. You are even free to include some images of the movie in your review under Fair Use, as long as they are small and insubstantial enough to not stop people from seeing the movie.

            4. Watch any art tutorial: step one is gathering reference images from the Internet. If that violated copyright, then a lot of artists would be in big trouble.

            • @FlowVoid@midwest.social · 0 points · edited · 1 year ago

              1. The SCOTUS ruled that VHS could legally be used to time-shift TV broadcasts, i.e. record a program in order to personally watch it later. If you have permission to watch a TV program, then watching it at a different time has no economic impact and is fair use. Making a copy of someone else’s DVD is still illegal. So is giving your VHS tape to someone else. Those are not fair use.

              2. It is illegal to download copyright protected works. That applies to the person who receives the download, even if lawsuits tend to target those who share the file.

              3. It’s true the review itself doesn’t violate copyright, but my actions prior to the review (copying someone else’s DVD) did. It’s no different than sneaking into a movie theater in order to write the review. Focusing on the review misses the point.

              4. Any copyright protected work you gather from the Internet has a limited license. That license generally allows private non-commercial use, so most people are not in trouble.

              There was actually a lawsuit by Facebook against a company that was using a web scraper to gather data about Facebook users to build advertising trackers. The judge noted that if the web scraper was downloading user photographs and text posts then it was very likely infringing IP (but not Facebook’s IP, because the rights still belonged to the users).

  • @stormesp@lemm.ee · 4 points · 1 year ago

    I mean, in today’s board game space, with so many classics and new releases, it’s cool to know two companies I don’t have to bother buying games from.

    • @thorbot@lemmy.world · 4 points · 1 year ago

      Why do you care? Their lead artist is the one using the AI tools and it’s not just full generation of images, they use it as a reference tool and as a filler. People are up in arms about AI art but are oblivious to the context in which it’s used. If a studio fires their artists and only uses AI, sure, that’s a bad thing. That’s not even remotely what’s happening at Fryx Games.

  • @jet@hackertalks.com · 3 points · 1 year ago

    I’m excited, as long as the output is curated. It allows small developers to make really exciting, large projects on a small budget, so we’re going to see more diversity in the creative space.

    And this isn’t going to kill artists. We’re going through an evolutionary period, where the sourcing of art is going to spark some wonderful debate in the copyright scheme. But you still need a source concept to generate from.