The cemetery of Minab, photographed as it prepares to bury more than 100 of the town’s young girls, is one of the defining images of the US-Israeli war on Iran, bluntly capturing the devastating civilian toll.
But is it real?
Ask Gemini, the AI service powered by Google, and the answer you receive is no – in fact, Gemini claims the photograph is from two years earlier and more than 2,000km (1,240 miles) away. Rather than graves for small girls killed by a missile, the image “depicts a mass burial site in Kahramanmaraş, Turkey” after the 7.8 magnitude earthquake that struck in 2023. “This specific aerial perspective became one of the most widely shared images of the disaster,” Gemini says, “illustrating the sheer scale of the loss.”
The cemetery image, it turns out, is authentic. Researchers have cross-referenced the photo of the site with satellite images that confirm its location, and it can be cross-referenced again with dozens more images taken of the same site from slightly different angles, and again with video footage – none of which, experts say, show signs of tampering or digital manipulation. The “factchecks” by Gemini and Grok are just one example of a tidal wave of AI-generated slop – hallucinated facts, nonsense analysis and faked images – that is engulfing coverage of the Iran war. Experts say it is wasting investigative time and risks atrocities being denied – as well as revealing alarming weaknesses as people increasingly rely on AI summaries for news and information.
Ask Gemini, the AI service powered by Google
How about we don’t rely on AI to say if something is AI?
How about we take the time to train our actual brains to spot the difference?
Like, that’s the silver lining of DLSS 5 for me. They’re just slapping that stupid AI filter on and getting a bunch of blowback, but there was a very brief window where people kept using the same filter on normal pictures.
Not only is that shit useless, it’s functionally dangerous, because it blurs the lines. It picks up all the hallmarks of AI, producing false positives that suggest a real image isn’t real.
I’m hoping the DLSS 5 blowback will kill the AI upscaling of random modern images. And that people actually start using their fucking brains.
I mean, yeah, that’s the entire point of the article. All the slop machines were wrong, and actual manual research was required to find the truth.
It’s incredible that the masses have so fully embraced a technology that makes them dumber, and the internet itself along with it. It’s as if, in the 90s, we had decided that chain emails were the way forward, rather than embracing Wikipedia.
While growing up I used tech to make myself more intelligent; I wanted to understand how it worked and functioned. At least on a software level, anyway: I’m a programmer, not a hardware person (though I may do some small hardware projects at some point).
I always wonder why we fell so far. Growing up, I needed to figure out computers and make sure I could use them to their fullest. These days it seems most people of a similar age don’t care, because it just works.
While I see that point, it hasn’t worked that long, and at this point we seem to have people not knowing where to save a Word document. That’s from third-party stories, but it seems believable. So we went from some of the most curious people understanding their tools to people going ‘meh, it works’ without understanding a damn thing.
It only worked decently because the operators understood it and we put in our time. Shit, did I just figure out a second meaning for the Matrix operators’ name? Like a dual meaning besides the phone one. Huh, never thought of that, but back in the day operators usually had an answer…
we asked AI about the AI generated image to check if it was AI generated
we asked AI about the image to check to see if it was AI generated
we asked AI about the image to verify claims of the context
all of these have one fatal flaw that completely misses the point.
we asked AI

Yes, that flaw is exactly what they’re pointing out. They’re saying, “we asked AI a question we knew the answer to and it was wrong.”
Leaving out the context is bad.
the fact that we needed an article to “ask AI” anything to prove that asking AI anything is a bad idea only solidifies the resolve in my colon that I want to leave this planet forever.
Does it matter? It happened, and has been widely confirmed.
the news story is about how LLMs are resulting in people being fed false “fact-checks”, being told the image is fake when it’s not
it’s not strictly about whether the image is real or not, which the facts are clear about
Ok good point.
It absolutely matters if AI images of war are being passed off as real.
Misinformation and propaganda were already a huge issue and AI just ramped it up.
Yeah, I don’t trust AI for any reason.

Could have just made an accurate and concise post, but noooo engagement bait it is.
I just tested it, and Gemini says:
What it depicts: The photo shows an aerial view of newly dug graves at a cemetery in Minab, Iran. These graves were prepared in early March 2026 for the victims of the February 28, 2026, airstrike on the Shajareh Tayyebeh Primary School.
Ironic that an article about AI misinformation is itself misinformation. Maybe it was written by AI?
You can give Gemini the exact same prompt and context 100 different times and you might get 95 very similar responses and 5 wildly different responses.
I don’t understand why people think a random text generator can ever be relied on for truth. It has no concept of truth. It is a random text generator. A pretty consistent one, but still fucking random. It has no intelligence. It is not intelligent. Stop acting like it is. Its conclusions are meaningless. They do not contain actual meaning. They are random.
You can force it to return the same answer each time. There’s a setting called the “temperature”, and it’s basically “how much leeway do you want to give it for ‘creativity’”. But then it’s more apparent when it has no clue, so they raise the temperature to allow for different answers and combine that with RAG (retrieval-augmented generation) to get better overall answers.
It’s still a fancy autocomplete, but the “why” can be interesting.
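The temperature knob described above can be sketched as temperature-scaled softmax sampling over next-token scores. This is a toy illustration of the idea, not any vendor’s actual decoding code; the function name and the example logits are made up:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample a token index from logits scaled by temperature.

    Low temperature sharpens the distribution (near-deterministic,
    effectively argmax as it approaches zero); high temperature
    flattens it, giving more varied, "creative" picks.
    """
    scaled = [l / max(temperature, 1e-8) for l in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the resulting distribution.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

toy_logits = [2.0, 1.0, 0.1]  # made-up next-token scores

# Near-zero temperature: always the highest-scoring token.
assert all(sample_with_temperature(toy_logits, 1e-6) == 0 for _ in range(100))
```

At a high temperature the same logits produce a spread of different picks, which is the “95 similar responses, 5 wildly different” behaviour the comment above describes.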
I agree, they’re an extremely interesting technology. But laypeople are not going to understand why they’re interesting no matter how carefully you phrase it, I’m not trying to convince people who understand what they are that they’re not interesting and that they don’t have real potential and real applications.
I am trying to convince laypeople that they’re being misled (for profit) into believing these things are intelligent, can do things humans can do, and are capable of making decisions. I would rather have laypeople believing these are stupid atrocities against humanity (which is, in the current situation, closer to the truth) than I would bother trying to explain to them why it is still an interesting technology. If it ends up being completely banned (ha, fat chance) I’m not going to cry for it. I would rather have humanity protected from this vile, dishonest, and dangerous schemes they are using this technology for, even if it comes at the cost of ever being able to use this technology for good. My interest in it does not outweigh the harm that people are choosing to do with it.
oof, might need to work on your tech and media literacy there 🫠