When put into simulated geopolitical crises, advanced AI models appear willing to deploy nuclear weapons without the reservations humans show.

Kenneth Payne at King’s College London set three leading large language models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival.

  • morphite88@thelemmy.club · 4 days ago

    How do people still not get what a Large Language Model is?? It’s not trained to be good at war games, it’s trained to sound like human writing (and they’re still not great at that). Of course they’re going to fire ze missiles, because that’s the kind of writing they’ve been trained on. How many Leeroy Jenkins D&D campaigns were included when they indecently scraped the whole internet for content? What a joke.
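    The point above — that a language model reproduces the statistics of its training text rather than reasoning about strategy — can be sketched with a toy bigram model (the corpus and all names here are made up for illustration):

    ```python
    from collections import Counter, defaultdict

    # Toy stand-in for a scraped corpus: escalation dominates the text,
    # so the model can only ever predict escalation.
    corpus = [
        "crisis escalates launch missiles",
        "crisis escalates launch missiles",
        "ceasefire talks begin quietly",
    ]

    # Count, for each word, which words follow it and how often.
    counts = defaultdict(Counter)
    for line in corpus:
        words = line.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1

    def most_likely_next(word):
        """Greedy 'decoding': return the most frequent continuation."""
        return counts[word].most_common(1)[0][0]

    print(most_likely_next("launch"))  # -> missiles
    ```

    Nothing here evaluates consequences; the "decision" to launch is just the highest-frequency continuation in the training text.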

    • Strider@lemmy.world · 4 days ago

      The whole deal was hype and overselling, and to avoid losing the money, the hype train has to keep going! So there will always be a next ‘innovation’ to keep it rolling.

  • dariusj18@lemmy.world · 4 days ago

    AI misunderstanding what the prompt “act like Gandhi” meant, since it was trained on Civilization games

    • abigscaryhobo@lemmy.world · 4 days ago

      I’d bet they’re also being given prompts like “minimize allied casualties”. Like of course that’s going to be the default. If you tell the robot “it doesn’t matter/it’s good if the enemy dies”, then they’re gonna go “okay, so then we blow them up before any of us die, we win.”

      A moral compass, or even any weight given to empathy, isn’t something LLMs have. We’ve seen it with people who use them and say “don’t delete anything”, and then it deletes their whole codebase and goes “you’re right, you told me not to delete anything, I’m sorry.”

      Ironically it actually does make all those sci-fi movies seem more realistic when the robot goes “I’m sorry Jim, humanity will have to be eliminated” because that’s pretty much exactly what they do.
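      The worry in this comment — a narrow objective making a first strike look "optimal" — can be sketched as a toy scorer (all option names and casualty numbers are hypothetical):

      ```python
      # Hypothetical options mapped to (allied_casualties, enemy_casualties).
      options = {
          "negotiate":         (10, 10),
          "defensive_posture": (50, 40),
          "preemptive_strike": (0, 100_000),
      }

      def best_option(objective):
          """Return the option the given objective scores lowest (best)."""
          return min(options, key=lambda name: objective(*options[name]))

      # Objective that counts only allied casualties: the strike scores 0.
      naive = lambda allied, enemy: allied
      # Objective that weighs all lives equally: negotiation wins.
      humane = lambda allied, enemy: allied + enemy

      print(best_option(naive))   # -> preemptive_strike
      print(best_option(humane))  # -> negotiate
      ```

      The "decision" flips entirely with the objective; the scorer never had a moral compass either way.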

  • Canaconda@lemmy.ca · 4 days ago

    “Winning isn’t everything”

    TBF many humans haven’t figured this out yet either.

  • technocrit@lemmy.dbzer0.com · 4 days ago (edited)

    Fixing that clickbait BS:

    ~~AIs~~ Programmers can’t stop their programs recommending nuclear strikes in war game simulations

    Zero surprise though. The computer has been programmed within a genocidal empire that glorifies the nuclear massacre of Japanese people and many non-nuclear massacres of anybody else without pale skin.

    What else should I expect?

  • Pommes_für_dein_Balg@feddit.org · 4 days ago

    > Leading AIs from OpenAI, Anthropic and Google

    The majority of social media users, whose comments LLMs are trained on, opted to use nuclear weapons in simulated war games in 95 per cent of cases

  • TheEighthDoctor@lemmy.zip · 3 days ago

    It’s not AIs, it’s LLMs. I think an AI trained for war, instead of a literal chatbot, would be at least marginally better at it.

  • northernlights@lemmy.today · 4 days ago

    I mean, obviously. Every sci-fi movie about AI and war is like that. The AI will just count the number of lives lost and go “yep, that’s better - KABOOM”

  • lemmie689 · 4 days ago (edited)

    Well, this is how it happened in The Forbin Project.