- cross-posted to:
- news@lemmings.world
- news@hexbear.net
- politics@lemmy.world
Google and OpenAI staff, many of them AI researchers, have signed an open letter saying they share Anthropic’s red lines. Privately OpenAI bosses agree.
I found it kind of funny to see some people scrambling to cancel their ChatGPT subscriptions after OpenAI swooped in to take the contract Anthropic refused to, as if this is the first moral problem they’ve found with using ChatGPT, but what’s even more bizarre than that is seeing a post on this community celebrating an AI company.
We can't stop these companies from being shitty by ourselves, but we can be happy when one of them chooses to be 0.01% less shitty than the rest.
I caution against the enthusiasm here. As I understand it, the complaint wasn’t that Anthropic didn’t want to make autonomous weapons so much as that they wanted to retain control over the systems once they were sold to the government.
No reasonable government should allow corporate control over their military assets, and frankly, I trust Anthropic with control over weapons even less than I trust the Trump administration.
Good discussion of this on HN: https://news.ycombinator.com/item?id=47188473
I haven’t read anyone say that it was about retaining the IP. Anthropic says, and others agree, that it would be totally irresponsible to use current frontier AI systems for lethal autonomous weapon systems – even if you think that LAWS are okay. Current AI systems are far too error-prone.
See https://www.anthropic.com/news/statement-department-of-war and https://www.anthropic.com/news/statement-comments-secretary-war
Good discussion of this on HN:
lol never.
No reasonable government should allow corporate control over their military assets
Pretty sure the entire MIC is privately owned and operated by capital for capital.
OpenAI’s Sam Altman says today that they’re gonna take Anthropic’s place on DoW classified networks (https://xcancel.com/sama/status/2027578652477821175#m).
Still, Anthropic being designated a “supply chain risk” is good news, as it means Claude cannot officially be used anymore by the Pentagon and by all of its suppliers. That’s massive.
Looks like the Friday deadline was set because they needed Anthropic for the attack?
Am I missing something? How would an LLM be useful for operations like this?
An LLM gives you the ability to put responsibility on a machine. A machine can't be reprimanded for nuking an orphanage.
They believe their own hype. Whether an LLM would be useful or not is irrelevant to the perception that it will be useful.
Plausible deniability. For example, if Anthropic caved, today’s murder of 50 Iranian kids would probably be blamed on the AI fucking up. It would give them a scapegoat.
It’s a mix of that and these people genuinely believing this garbage is actually remotely useful.
Fuck Anthropic and all AI companies, but at the same time good on them for not bending the knee here.
As everyone else says, to avoid accountability. That’s the real “killer” app here.
Just look at the use of “AI” in the ongoing Gaza genocide, zio terror attacks on Lebanon, etc.
This is already normal.
Props to Anthropic versus other grifters…
But they’re not holding any line. “AI” is being researched and developed almost entirely for the sake of imperialism, prisons, hating “china”, etc.
The letter: https://notdivided.org/