• halfdane@piefed.social · 2 days ago

    This wasn’t even a prompt-injection or context-poisoning attack. The vulnerable infrastructure itself exposed everything needed to reach the valuable parts of the company:

    Public JS asset  
        → discover backend URL  
            → unauthenticated GET request triggers debug error page  
                → environment variables expose admin credentials  
                    → access admin panel  
                        → see live OAuth tokens  
                            → query Microsoft Graph  
                                → access millions of user profiles  
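
    The chain above can be sketched offline. Every name, URL, and response body below is made up purely for illustration; the point is how little actual "hacking" each step needs:

    ```python
    import re

    # Hypothetical JS bundle, standing in for the public asset in step 1.
    js_bundle = 'const api = "https://api.internal.example.com"; fetch(api + "/v1/me");'

    # Steps 1-2: hard-coded backend URLs are trivially greppable in shipped JS.
    backend_url = re.search(r'https?://[\w.-]+', js_bundle).group(0)

    # Steps 3-4: a framework debug page shown on an unhandled error often dumps
    # the process environment. Simulated body, with made-up variable names:
    debug_page = """\
    APP_ENV = production
    ADMIN_USER = admin
    ADMIN_PASSWORD = hunter2
    GRAPH_OAUTH_TOKEN = eyJ...
    """

    # Parse the "KEY = value" dump into a dict, one entry per line.
    env = dict(line.split(" = ", 1) for line in debug_page.splitlines())

    # Steps 5-8 would reuse env["ADMIN_USER"] / env["ADMIN_PASSWORD"] to log in
    # to the admin panel, then the live OAuth token to query Microsoft Graph.
    print(backend_url)
    print(env["ADMIN_USER"])
    ```

    Nothing here touches the model at all: it’s plain recon against the surrounding infrastructure.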
    

    Hasty AI deployments amplify a familiar pattern: Speed pressure from management keeps the focus on the AI model’s capabilities, leaving surrounding infrastructure as an afterthought — and security thinking concentrated where attention is, rather than where exposure is.

    • halfdane@piefed.social · 2 days ago

      Seems like you’re talking about a different article: there was no context-poisoning, or in fact anything LLM-specific at all, in this attack.

      • Tiff@reddthat.com · 22 hours ago

        I guess that’s why they have BotAccount turned on. They are a “bot account”. Their username is also very telling.