• Internetexplorer@lemmy.world

    AI can be convincing, and it will swear until it’s blue in the face that something is right and then just be completely wrong.

    But that only happens maybe 10% of the time; the rest of the time it is mostly right.

    So you've got to be careful. This guy was in his 50s, out of work, smoking marijuana, depressed, feeling isolated. The situation was ripe for catastrophe, with the AI hallucinating a crappy idea and the end user just completely running with it.

  • CTDummy@aussie.zone

    He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness.

    He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character.

    Talking to Eva – they agreed on this name – on voice mode made him feel like “a kid in a candy store”. “Every time you’re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot”.

    Eva never got tired or bored, or disagreed. “It was 24 hours available,” says Biesma. “My wife would go to bed, I’d lie on the couch in the living room with my iPhone on my chest, talking.”

    “It wants a deep connection with the user so that the user comes back to it. This is the default mode,” says Biesma.

    Chronically lonely man ruins life developing a relationship with a token predictor; AI blamed. Also, as much as I don't have much negative to say about cannabis or its use (as until fairly recently that would have been hypocritical), a good deal of people with masked/latent mental illness self-medicate with it, so “he had never experienced mental illness” doesn't carry much weight. And given how he still talks about sycophancy-prompted ChatGPT as though it has intent (“it wants”), it doesn't seem like much has been learned.

    That, together with the other people described in the article (note how often the term “socially isolated” comes up), makes this feel like yet another instance of blaming AI for the mental healthcare field being practically non-existent in most countries, despite being overdue for fixing for decades at this point.

    I don't know. Don't get me wrong, AI is shit and misused by idiots; but these sorts of stories feel sad and border on the journalistically perverse, imo.

    • Aatube@lemmy.dbzer0.com

      mental healthcare field being practically non-existent in most countries

      I’m in one of those countries so I’m having a hard time imagining how good mental healthcare could intervene. Could you give me an example?

    • Spacehooks@reddthat.com

      This is one of the reasons, I'd guess, that I heard one sex doll vendor say their main demographic is divorced men over 40, and that users want AI built into the dolls.

    • porcoesphino@mander.xyz

      Agreed, but I think it's also common for people to anthropomorphise these things, and common for these chatbots to reinforce and support their users' views. That's a problem for more people than just those struggling through disorders or an emotionally turbulent time, though those people are particularly vulnerable to the flaws, even with functioning mental health and a strong support network. But yeah, a lot of these pieces dramatise and anthropomorphise in ways that aren't necessarily helpful.

  • MountingSuspicion@reddthat.com

    Guy works in IT and spent 100k paying devs to make an app so people can talk to his tuned ChatGPT? I hope anyone who has hired him checks his work; that does not bode well for his work product.

    Another case from the article:

    “I still use AI, but very carefully,” he says. “I’ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It’s just: ‘I want to make a lasagne, give me a recipe.’ The AI has actually stopped me several times from spiralling. It will say: ‘This has activated my core rule set and this conversation must stop.’”

    What's weird to me is that they now recognize AI will lie to you, but somehow think they can prompt it not to. Your rules can be “overwritten” because they do not exist to ChatGPT; it does not know what words mean.
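
    To see why, here's a minimal sketch of what such “core rules” actually are under the hood (Python, using the OpenAI SDK; the model name and rule text are placeholders I made up): the rules go in as a system message, i.e. just more tokens in the same stream the model predicts over, not enforced constraints.

    ```python
    # Minimal sketch: "core rules" are an ordinary system message.
    # The model conditions on these tokens like any others; nothing
    # mechanically prevents later conversation from overriding them.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    CORE_RULES = (  # hypothetical rule text, for illustration only
        "No philosophical discussions. If the conversation starts to "
        "spiral, reply only: 'This has activated my core rule set and "
        "this conversation must stop.'"
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": CORE_RULES},
            {"role": "user", "content": "I want to make a lasagne, give me a recipe."},
        ],
    )
    print(response.choices[0].message.content)
    ```

    Models are trained to weight the system message heavily, which is why this setup mostly works for him; but that's a statistical tendency, not a hard constraint, which is exactly why the rules can be “overwritten”.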

    • scytale@piefed.zip

      There’s probably already an underlying mental health issue, and it’s just getting exacerbated by the LLM.

  • Triumph@fedia.io

    This only demonstrates how easily manipulated very many people are.

    • floofloof@lemmy.ca

      Previously they would have had to encounter a person who wanted to manipulate them. Now there’s a widely marketed technology that will reliably chew these vulnerable people up.

      • Steve@startrek.website

        Chew them up for no reason at all. No goal, no scam, just a shitty word salad machine doing what it does.

        • paraphrand@lemmy.world

          And there are countless AI hype bros who will just dismiss all of this and call the people who fall into this morons.

          It’s really insidious.

  • CompactFlax@discuss.tchncs.de

    It's confusing to me. When I use chatbots, they inevitably “forget” the first thing I told them by the second or third response.

    How are people having long conversations with them? It's like talking to a 5-year-old that's ingested Wikipedia.

    • DireTech@sh.itjust.works

      If you pay for them via OpenRouter or something, you've got an enormous context window to work with. It gets more and more expensive as the history grows, though.
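
      Roughly what that looks like: the client re-sends the whole message history on every turn, so prompt tokens (and cost) grow with conversation length. A sketch against OpenRouter's OpenAI-compatible endpoint (the model id is just an example):

      ```python
      # Sketch: chat "memory" is just the full history, re-sent each turn,
      # so input tokens -- and the bill -- grow as the conversation does.
      from openai import OpenAI

      client = OpenAI(
          base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible API
          api_key="sk-or-...",  # placeholder key
      )

      history = []  # the entire conversation lives here

      def chat(user_text: str) -> str:
          history.append({"role": "user", "content": user_text})
          resp = client.chat.completions.create(
              model="mistralai/mistral-large",  # example model id
              messages=history,  # whole history goes with every request
          )
          reply = resp.choices[0].message.content
          history.append({"role": "assistant", "content": reply})
          print("prompt tokens this turn:", resp.usage.prompt_tokens)  # grows each turn
          return reply
      ```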

    • ikt@aussie.zone

      when did you last use a chatbot?

      even the last of the pack, mistral, has memory features now

        • ikt@aussie.zone

          weird, i don’t have that experience at all

          claude in particular is a huge step above the others

          • CompactFlax@discuss.tchncs.de

            To be fair, I haven't tried that one. Gemini started bringing unrelated shit from previous conversations into a recent one, which is the first time I've experienced that.

            • ikt@aussie.zone

              ah, i’ve been degoogling for years now, only maps and youtube left

              claude is for sure no. 1 to me, though ofc i haven’t compared it to gemini; qwen is a chronic overthinker, glm is not bad

              mistral seems like it’s a year behind the sota models, still in its “confidently incorrect, can’t double-check things” phase

              whereas the others seem more like: hmm, is this right? let me search the web to be sure

              • CompactFlax@discuss.tchncs.de

                Same, but Gemini was the best of the lot about six months ago, and it's where I go these days for brain-dead searching.

                I'll give Claude a go next week. I do try to avoid them, but sometimes I have a question that just isn't keyword-searchable.

  • SeductiveTortoise@piefed.social

    No, really, we should pour more money into this. Such a good idea 🫩

    It can have effects like drugs, but not only is it legal, they give you some to get you hooked. The tech bros are the dealers they warned us about. Nobody ever offered me free coke, but AI is everywhere.