• prole@lemmy.blahaj.zone · 12 hours ago

    7 planning assumptions failed in 23 days as the Iran war defied every AI prediction.

    I’m sure this will make them reconsider using it in the future…

    Edit: They really thought there would be pro-US sentiment among civilians if we fucking bombed them? Unreal how stupid these people are.

    • jj4211@lemmy.world · 12 hours ago

      It was never going to happen, but whatever slim chance there was went out the window when they killed those kids, after specifically cutting the measures intended to prevent exactly that sort of event.

      If they had managed to be super surgical, then maybe they could have gotten some popular support by undermining the regime.

      Like if a foreign power killed Trump, Hegseth, and Miller, a large chunk of the populace wouldn’t be too torn up over it. Though even then the ride-or-die MAGA crowd would be apoplectic, and there’d be a huge risk of making the US more aggressive overall.

  • Denjin@feddit.uk · 20 hours ago

    You know you done fucked up when even the Saudi Royal Family are calling you out.

    • James R Kirk@startrek.website · 18 hours ago

      This appears to be the blog of one guy who writes about the royal family, with no connection to the family itself.

      He also puts out a suspiciously large number of articles per day. Probably using AI.

  • Voroxpete@sh.itjust.works · 19 hours ago

    Obviously I have concerns - as I think anyone should - about the source of this reporting, but it is very definitely worth a read. All of the claims made appear to be well sourced (I’ve had no issue independently verifying those that I’ve checked so far), and the author’s conclusions are well founded and consistent with existing research on these subjects.

    There are definitely some assumptions being made when it comes to the exact degree to which AI was responsible for the poor decision making going into the conflict. This paragraph -

    The evidence increasingly points to a conclusion more alarming than mere failure: the AI did not passively reflect flawed human judgment — it actively reinforced it. By generating fabricated confidence levels, inflating success probabilities, and systematically suppressing risk factors, the systems convinced planners that a swift, decisive victory was not just possible but near-certain. The gap between expectation and reality was not an accident. It was manufactured by machines optimised to tell powerful people what they wanted to hear.

    - is ultimately a hypothesis only, not a provable fact (and to be clear, the author is not making any explicit claim to fact here). We’re dealing with a situation where we simply cannot know exactly how these decisions were made, and probably won’t know for a very long time. But as a hypothesis it’s sufficiently sound that I think we have to at least consider it plausible.

    My only real objection would be to how the author frames their conclusions.

    Gulf defence planners are already drawing their own conclusions. The Saudi military buildup, the diversification of defence partnerships beyond Washington, and the quiet expansion of diplomatic channels with non-Western powers all reflect a recognition that the era of unquestioning reliance on American strategic judgment may be ending — not because the United States lacks capability, but because the AI tools it now relies upon actively convinced planners that a swift, decisive victory was near-certain.

    I think the language here and in the subsequent paragraph leans too heavily on laying all the operational failures at the feet of an over-reliance on unproven tools, without considering how the clear ideological impetus and staggering incompetence of the current administration were major factors. This doesn’t undermine any of what the author is saying about the danger of these tools; it just runs the risk of eliding the responsibility of the incompetent fascists who were ultimately responsible for the decisions made using those tools.

  • inari@piefed.zip · 21 hours ago

    For a second I thought AI psychosis was an extreme form of AI hallucination, but it seems that’s not the case.