• @31337@sh.itjust.works · 7 points · 1 year ago

        Wow, that’s a little too impressive. I’m guessing that image was probably in its training set (or at least the individual images were). There are open training sets with adversarial images, and these images may have come from them. Every time I’ve tried to use ChatGPT with images it has kinda failed (on electronic schematics, plant classification, images of complex math equations, etc.). I’m kind of surprised OpenAI doesn’t just offload some tasks to purpose-built models (an OCR engine or a classification model like iNaturalist’s would’ve performed better in some of my tests).
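
        For illustration, a rough Python sketch of what that kind of offloading could look like: route text-heavy images to a dedicated OCR engine and everything else to an off-the-shelf classifier. The task flag, file names, and the choice of pytesseract plus a torchvision ResNet (rather than iNaturalist’s model) are my own assumptions, not anything OpenAI actually does.

        ```python
        # Sketch: send an image to a purpose-built model instead of a general LLM.
        # Requires pytesseract (plus the Tesseract binary) and torchvision;
        # the task flag and file names are hypothetical.
        from PIL import Image
        import pytesseract
        import torch
        from torchvision import models

        def handle_image(path: str, task: str) -> str:
            img = Image.open(path).convert("RGB")
            if task == "ocr":
                # A dedicated OCR engine for schematics, equations, etc.
                return pytesseract.image_to_string(img)
            # Otherwise use an off-the-shelf ImageNet classifier.
            weights = models.ResNet50_Weights.DEFAULT
            model = models.resnet50(weights=weights).eval()
            batch = weights.transforms()(img).unsqueeze(0)
            with torch.no_grad():
                idx = model(batch).softmax(dim=1).argmax().item()
            return weights.meta["categories"][idx]

        print(handle_image("schematic.png", "ocr"))
        print(handle_image("plant.jpg", "classify"))
        ```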

        • @Mirodir@discuss.tchncs.de · 4 points · 1 year ago

          This exact image (without the caption header, of course) was on one of the slides for one of the machine-learning-related courses at my college, so it’s definitely out there somewhere and was likely part of the training sets used by OpenAI. Also, the image in those slides has a different watermark at the bottom left, so it’s fair to assume it’s made its rounds.

          Contrary to this post, it was used as an example of a problem that machine learning can solve far better than any algorithm humans would come up with.
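
          (Not from those slides, just a minimal transfer-learning sketch of why it’s an easy problem for ML: a pretrained backbone with a new two-class head usually separates chihuahuas from muffins after a few epochs. The dataset layout, folders dataset/chihuahua and dataset/muffin, is an assumption.)

          ```python
          # Minimal transfer-learning sketch for a chihuahua-vs-muffin classifier.
          # The dataset layout (dataset/chihuahua, dataset/muffin) is an assumption.
          import torch
          from torch import nn
          from torchvision import datasets, models

          weights = models.ResNet18_Weights.DEFAULT
          model = models.resnet18(weights=weights)
          for p in model.parameters():                    # freeze the pretrained backbone
              p.requires_grad = False
          model.fc = nn.Linear(model.fc.in_features, 2)   # new 2-class head

          data = datasets.ImageFolder("dataset", transform=weights.transforms())
          loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)

          opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
          loss_fn = nn.CrossEntropyLoss()
          for epoch in range(3):                          # a few epochs is plenty here
              for x, y in loader:
                  opt.zero_grad()
                  loss_fn(model(x), y).backward()
                  opt.step()
          ```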

    • balderdash · 16 points · 1 year ago

      Just take a bite out of each. You know you got the right one when it starts yelping.

    • offendicula · 5 points · 1 year ago

      Ah yes, I grow these orange houseplants every year. Visitors definitely admire this species of lovely orange houseplant 😉

  • @tory@lemmy.world · 28 points · 1 year ago

    Unfortunately, AI is quickly becoming better at this kind of thing than average humans are. And the internet is doomed for sure as a result.

    We’re gonna be swimming in a sea of convincing, AI-generated half-truths and lies.

  • CyclohexaneM · 28 points · 1 year ago

    Hot take: AI good. Every mentioned problem with AI actually stems from capitalism.

    • @TheFogan@programming.dev · 10 points · 1 year ago

      I do agree on the whole. It’s the next phase of automation. The real problem stems from the fact that we hold onto a system where a tiny handful of people get the full benefit of the productivity, while everyone else is paid in time increments whose value goes down with demand. So as more jobs are automated or assisted (to allow more work with fewer people), supply and demand devalues the labor.

      • @Overshoot2648@lemm.ee · 2 points · 1 year ago

        This is why I support worker ownership and cooperatives rather than corporate ownership. I think we need to shift towards a Mutualist economy rather than a Capitalist one.

    • @Chakravanti@sh.itjust.works · 8 points · 1 year ago

      No. It stems from closed-source software. True, that in turn stems from capitalism. Without it we’d be good friends with AI and could fix all the obvious problems we caused. Then do like Bill Hicks said and explore space.

      • CyclohexaneM · 5 points · 1 year ago

        Open-source AI can still be problematic under capitalism. It can still be developed to disproportionately favor the ruling class and used to disproportionately benefit them.

        • @Chakravanti@sh.itjust.works · -1 points · 1 year ago

          facepalm

          It’s the ONLY method to deal with Closed Source. It can be used to end capitalism. It can be used to end money entirely.

          Do it ALL FOSS now, or else…EOTW. End of story. Literally.

          • Making it FOSS wouldn’t solve the problem, because FOSS tools can still be used by capitalists to displace workers and erode worker bargaining power.

            This is true of pretty much EVERY tool, but never has a tool had the potential to negatively impact so many in such a diversity of roles.

            So again, the problem isn’t closed source, the problem is capitalism. If you fixed the problems of capitalism, then all software tools would naturally be FOSS, but that’s a product of fixing the problems, not the mechanism to fix the problems.

      • @regbin_@lemmy.world · 2 points · 1 year ago

        It’s not about using it for bad deeds; any other tool, like a kitchen knife, can be used the same way. It’s more about the necessity of working in order to live.

    • Chemical Wonka (OP) · 3 points · 1 year ago

      I agree with you. And don’t forget that Capitalism shouldn’t be reformed, as Social Democracy claims; it must be destroyed from head to toe.

    • @blind3rdeye@lemm.ee · 1 point · 1 year ago

      That’s a view I have for many things. The desire for, and possibility of, getting more money always distorts and corrupts. It makes pretty much everything worse by rewarding deception, externalised waste, and exploitation.

  • @LemmyKnowsBest@lemmy.world · 16 points · 1 year ago

    I like the way one of those pictures is of a slightly deformed dog. Figure THESE out, AI!

    (No please don’t. We don’t want AI to figure out any more than it already has.)

    • R0cket_M00se · 4 points · 1 year ago

      “Technology bad! Oh no muh manual labor job!” -everyone throughout history

      AI isn’t the issue, it’s the fact that we don’t have any kind of system set up to handle the eventual takeover of the economy by robotics and AI.

      • @mariusafa · 1 point · 1 year ago

        Nah, coming from the data and signal processing fields, I think AI is overused by people who are incompetent. There are much more elegant, measurable, and efficient ways of doing signal processing.

        Anybody can use AI, okay. But it’s still a shitty solution.

        • R0cket_M00se · 2 points · 1 year ago

          How is a technology that depends on the competence of the user to blame when the user is incompetent?

          • @mariusafa · 1 point · 1 year ago

            AI works like a black box for regular users and for top AI researchers alike. That’s why it’s not good design. As a researcher you cannot obtain direct information about what the model is doing inside; you just get results.

            Idk why you all suddenly know so much about AI now. You know AI has existed since the ’80s, right? xD

            Now everybody is a fan of it because we can waste a ton of resources running this technological abomination; that’s basically the difference between now and then.

    • PM_ME_VINTAGE_30S [he/him] · 4 points · 1 year ago

      I don’t! I want people who are concerned about the misuse of AI, particularly by corporations and world governments, to learn how to use AI to fight back against our oppressors, or at least to make AI-powered technologies that are helpful for common people, and to archive how it works, particularly how it fails.

    • Funkytom467 · 4 points · 1 year ago

      Is that the real war over AI?

      Human capitalists fighting for their profit vs. other humans fighting for some rights of theirs.

      The age of digital war is really settling in, now that even protesting is something we do with digital tools.

  • @Un4@lemm.ee · 3 points · 1 year ago

    Tried it on ChatGPT 4; here’s what it thinks:

    This image is a collage that alternates between photos of Chihuahuas and blueberry muffins. The arrangement is such that it plays on the visual similarities between the two, with the muffins and parts of the Chihuahuas (likely their faces) mimicking each other in color and texture. This creates a humorous effect, as it can be initially challenging to differentiate between the muffins and the dogs.

    Starting from the top left corner and moving left to right, row by row:

    1. Blueberry muffin with spots resembling a dog’s eyes and nose.
    2. Face of a Chihuahua with a similar coloration to the muffin.
    3. Blueberry muffin resembling the face of a Chihuahua.
    4. Chihuahua face with a light fur color matching the muffin’s surface.
    5. Chihuahua face with dark eyes and nose similar to blueberry spots.
    6. Muffin with a pattern that resembles a Chihuahua’s facial features.
    7. Chihuahua with an expression and coloring that echoes the appearance of a muffin.
    8. Muffin with blueberries and coloring that looks like a Chihuahua’s face.
    9. Chihuahua with a facial expression and fur colors that mimic a muffin’s texture.
    10. Muffin with blueberries mimicking the eyes and nose of a Chihuahua.
    11. Chihuahua with features that resemble the spots on a muffin.
    12. Muffin resembling a Chihuahua’s face in color and texture.
    13. Close-up of a Chihuahua’s face with colors similar to a blueberry muffin.
    14. Muffin with a pattern of blueberries resembling a Chihuahua’s face.
    15. Chihuahua looking directly at the camera, with fur colors like a muffin.
    16. Two Chihuahuas close together, with their heads resembling muffin tops.

    • @letsgo@lemm.ee · 2 points · 1 year ago

      It’s likely seen the image before. Try randomising the order of the tiles, mirroring some of them, altering the gamma, and adding some noise. See how it does then.
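
      Something like this quick Python sketch would do it (the 4×4 grid, the gamma value, and the noise level are arbitrary assumptions; adjust them for the actual collage):

      ```python
      # Perturb the collage so a model can't rely on having memorized the exact image.
      # Assumes a 4x4 grid saved as collage.jpg; grid size and noise level are guesses.
      import random
      import numpy as np
      from PIL import Image, ImageOps

      img = Image.open("collage.jpg").convert("RGB")
      w, h = img.size
      tw, th = w // 4, h // 4

      tiles = [img.crop((c * tw, r * th, (c + 1) * tw, (r + 1) * th))
               for r in range(4) for c in range(4)]
      random.shuffle(tiles)                                    # randomise tile order
      tiles = [ImageOps.mirror(t) if random.random() < 0.5 else t for t in tiles]

      out = Image.new("RGB", (w, h))
      for i, t in enumerate(tiles):
          out.paste(t, ((i % 4) * tw, (i // 4) * th))

      arr = np.asarray(out).astype(np.float32) / 255.0
      arr = arr ** 0.8                                         # alter the gamma
      arr += np.random.normal(0, 0.03, arr.shape)              # add a little noise
      out = Image.fromarray((np.clip(arr, 0, 1) * 255).astype(np.uint8))
      out.save("collage_perturbed.jpg")
      ```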

      • @Un4@lemm.ee · 1 point · 1 year ago

        It did make errors. Also, its training data is old, so it’s unlikely that it has seen the images.