The problem, Wooldridge said, was that AI chatbots failed in unpredictable ways and had no idea when they were wrong, yet were designed to provide confident answers regardless. Delivered in human-like, sycophantic responses, those answers could easily mislead people, he added. The risk is that people start treating AIs as if they were human. In a 2025 survey by the Center for Democracy and Technology, nearly a third of students reported that they or a friend had had a romantic relationship with an AI.

  • cecilkorik@lemmy.ca · 34 points · 3 days ago

    The Hindenburg disaster killed 35 people. I can say, without the faintest hesitation, that AI has already killed more people than that. I don’t know what kind of disaster it might cause that would be enough to stop this race towards AI, but I can guarantee it’s going to take something VASTLY more horrific than the Hindenburg disaster. It may well be something fundamentally existential to the human race, and the further we pursue it, the worse it gets.

    Unlike the age of airships, this Pandora’s box will not just go away if we simply decide to close the box again.

  • BarneyPiccolo@lemmy.today · 10 points · 2 days ago

    The scenarios Wooldridge imagines include a deadly software update for self-driving cars, an AI-powered hack that grounds global airlines, or a Barings bank-style collapse of a major company, triggered by AI doing something stupid. “These are very, very plausible scenarios,” he said. “There are all sorts of ways AI could very publicly go wrong.”

    Ooh, ooh, do the bank one!

  • Ummdustry@sh.itjust.works · 14 points · 3 days ago

    Agree with the problem… not sure how this failure mode is similar to that of the Hindenburg. The Titanic, “the unsinkable ship”, might be more appropriate.

  • Yaky@slrpnk.net · 3 points · 2 days ago

    The risk is that people start treating AIs as if they were human

    I cannot understand this. Unless AI is used to deceptively impersonate someone, how can a normal person treat “AI” as a human? The logo is right there, staring you in the face. It only responds to prompts.

  • U7826391786239@lemmy.zip · 10 points · 3 days ago

    Because AI is embedded in so many systems, a major incident could strike almost any sector

    good

    there’s FOMO, and then there’s “i have to do this thing for literally no other reason than everyone else is doing it”, which is even dumber

    hopefully some people learn a little lesson from their own personal Hindenburg, but i’m not optimistic that many will