IT DIDN’T TAKE long. Just months after OpenAI’s ChatGPT chatbot upended the startup economy, cybercriminals and hackers are claiming to have created their own versions of the text-generating technology. The systems could, theoretically at least, supercharge criminals’ ability to write malware or phishing emails that trick people into handing over their login information.

  • @CanadaPlus · 10 months ago

    I’m not sure using a tool for its stated purpose (like generating code) in a way you just don’t agree with counts as a “vulnerability”, though. Same thing as me using a drill to put a hole in a person; that’s not a malfunction, I’m just an asshole.

    At this point we’re talking about making an AI that can’t be misused, and of course that’s a famously hard problem, especially when we don’t really understand how the basic technology works.