LLMs automated most phases of the attack. A digital intruder broke into an AWS cloud environment and, in just under 10 minutes, went from initial access to administrative privileges, thanks to an AI speed assist.…

      • eleijeep@piefed.social · 17 hours ago

        From the report that’s the source of this Register article (emphasis added):

        The threat actor infiltrated the victim’s environments using valid test credentials stolen from public S3 buckets. These buckets contained Retrieval-Augmented Generation (RAG) data for AI models, and the compromised credentials belonged to an Identity and Access Management (IAM) user that had multiple read and write permissions on AWS Lambda and restricted permissions on AWS Bedrock. This user was likely intentionally created by the victim organization to automate Bedrock tasks with Lambda functions across the environment.

        It is also important to note that the affected S3 buckets were named using common AI tool naming conventions, which the attackers actively searched for during reconnaissance.
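        The flip side of that reconnaissance trick is a cheap defensive audit: scan your own bucket names for the same AI-tooling keywords an attacker would grep for. A minimal sketch, assuming an illustrative keyword list (the actual terms searched for are not given in the report):

        ```python
        # Hypothetical audit: flag S3 bucket names that match common
        # AI-tool naming conventions. The keyword list below is an
        # assumption for illustration, not taken from the Sysdig report.
        AI_KEYWORDS = ["rag", "bedrock", "embedding", "vector", "llm", "langchain"]

        def flag_ai_named_buckets(bucket_names):
            """Return the buckets whose names contain an AI-tooling keyword."""
            flagged = []
            for name in bucket_names:
                lowered = name.lower()
                if any(kw in lowered for kw in AI_KEYWORDS):
                    flagged.append(name)
            return flagged

        print(flag_ai_named_buckets(["acme-rag-data", "acme-logs", "prod-bedrock-cache"]))
        # prints ['acme-rag-data', 'prod-bedrock-cache']
        ```

        Any bucket that trips the filter is exactly the kind of target the attackers were hunting, so it deserves a public-access and credential-hygiene check first.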

        https://www.sysdig.com/blog/ai-assisted-cloud-intrusion-achieves-admin-access-in-8-minutes

    • dgdft@lemmy.world · 1 day ago

      It’s absolutely one of the strongest applications of LLMs right now.

      Very interested to see how things develop long-term, though, since in theory we should start seeing blue team tooling that can close the holes an attacker would be hunting. Granted, I think we’ll need at least another five years for truly high-quality pentest agents, and offense will have the upper hand in the cat-and-mouse game until then.