Google and OpenAI staff, many of them AI researchers, have signed an open letter saying they share Anthropic’s red lines. Privately OpenAI bosses agree.

  • thesdev@feddit.org · 16 hours ago

    I found it kind of funny to see some people scrambling to cancel their ChatGPT subscriptions after OpenAI swooped in to take the contract Anthropic refused, as if this were the first moral problem they’d found with using ChatGPT. But what’s even more bizarre than that is seeing a post on this community celebrating an AI company.

    • Turret3857@infosec.pub · 4 hours ago

      We can’t stop these companies from being shitty by ourselves, but we can be happy when one of them chooses to be 0.01% less shitty than the rest.

  • Daniel Quinn@lemmy.ca · 19 hours ago

    I caution against the enthusiasm here. As I understand it, the complaint wasn’t that Anthropic didn’t want to make autonomous weapons so much as that they wanted to retain control over the systems once they were sold to the government.

    No reasonable government should allow corporate control over their military assets, and frankly, I trust Anthropic with control over weapons even less than I trust the Trump administration.

  • BallyM@lemmy.world (OP) · 18 hours ago

    OpenAI’s Sam Altman said today that they’re going to take Anthropic’s place on DoW classified networks (https://xcancel.com/sama/status/2027578652477821175#m).

    Still, Anthropic being designated a “supply chain risk” is good news, as it means Claude cannot officially be used anymore by the Pentagon and by all of its suppliers. That’s massive.

  • takeda@lemmy.dbzer0.com · 22 hours ago

    Looks like the Friday deadline was set because they needed Anthropic for the attack?

    Am I missing something? How would an LLM be useful for operations like this?

    • HappyFrog@lemmy.blahaj.zone · 18 hours ago

      An LLM gives you the ability to put responsibility on a machine. A machine can’t be reprimanded for nuking an orphanage.

    • queermunist she/her@lemmy.ml · 22 hours ago

      They believe their own hype. Whether an LLM would actually be useful is irrelevant to the perception that it will be.

    • nfreak@lemmy.ml · 17 hours ago (edited)

      Plausible deniability. For example, if Anthropic had caved, today’s killing of 50 Iranian kids would probably be blamed on the AI fucking up. It would give them a scapegoat.

      It’s a mix of that and these people genuinely believing this garbage is actually remotely useful.

      Fuck Anthropic and all AI companies, but at the same time good on them for not bending the knee here.

    • technocrit@lemmy.dbzer0.com · 14 hours ago (edited)

      As everyone else says, to avoid accountability. That’s the real “killer” app here.

      Just look at the use of “AI” in the ongoing Gaza genocide, the terror attacks on Lebanon, etc.

      This is already normal.

  • technocrit@lemmy.dbzer0.com · 14 hours ago (edited)

    Props to Anthropic versus other grifters…

    But they’re not holding any line. “AI” is being researched and developed almost entirely for the sake of imperialism, prisons, hating “China”, etc.