I don’t get why every fucking company is hell-bent on introducing an absolute dogwater system that does nothing but introduce security vulnerabilities to whatever poor machine intelligence is forced to download that garbageware
In this case the AI really didn’t need to help them. Using admin privileges with username & password 123456 is smooth brained, and having incremental IDs that aren’t only visible to particular restaurants is an absolute amateur omission that would’ve been caught by any developer with more than a year of experience. Like, that that kind of shit gets through review means nobody is looking at it, and that it was written in the first place points straight to vibe coding without an ounce of understanding.
McHire
McKill me.
In the face of objections from McDonald’s, the term “McJob” was added to Merriam-Webster’s Collegiate Dictionary in 2003. In an open letter to Merriam-Webster, McDonald’s CEO James Cantalupo denounced the definition as a “slap in the face” to all restaurant employees, and stated that “a more appropriate definition of a ‘McJob’ might be ‘teaches responsibility’”. Merriam-Webster responded that “[they stood] by the accuracy and appropriateness of [their] definition.”
On 20 March 2007, the BBC reported that the UK arm of McDonald’s planned a public petition to have the OED’s definition of “McJob” changed. Lorraine Homer from McDonald’s stated that the company feels the definition is “out of date and inaccurate”.
No Lorraine Homer, it is not “out of date and inaccurate”.
It’s fuckin evergreen is what it is.
I don’t know much about networking like this, but wouldn’t you keep sensitive information like job applicant data in a different, secure part of the network from the AI chatbots so they don’t have access to it?
The chatbot didn’t have that access, it was an API endpoint that would let you enter sequential user IDs to get full authentication as any user.
Only if you want to devote resources and spend money to do things in a secure and correct manner
Looking at the API that fetched the candidate information, the researchers noticed that it contained an insecure direct object reference (IDOR) weakness, exposing an ID parameter that appeared to be the order number for the applicant. For the researchers’ application, that ID was 64,185,742.
This is super common. They are securing the thing that sends you the endpoint for the record, but not the API for getting the records themselves.
It’s kinda like saying “hey, the key to your room is in the box labeled 10” so you go to that box and grab your key. But you notice that there are boxes on the left and right of box 10, and those boxes contain the keys to other rooms.
No one ever told you that boxes 9 and 11 exist (the modicum of “security” the API provided), but all it takes to find them is knowing that you have a box and there was probably someone who got a box before you and after you.
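To make the analogy concrete, here’s a small sketch of the enumeration it describes (the endpoint path and function names are hypothetical, not McHire’s actual API): once you know your own “box number” — your applicant ID — the neighboring IDs are just that number plus or minus one, because the IDs are handed out sequentially and the server never checks who is asking.

```python
# Hypothetical sketch of IDOR enumeration: IDs are sequential, so knowing
# your own ID is enough to guess everyone else's.
def neighbor_ids(my_id: int, spread: int = 2) -> list[int]:
    """IDs likely to belong to other applicants: just count up and down."""
    return [my_id + offset for offset in range(-spread, spread + 1) if offset != 0]

def fetch_applicant(applicant_id: int) -> str:
    # Stand-in for the vulnerable endpoint. A real probe would be an HTTP
    # request like GET /api/applicants/{id} -- the point is that the server
    # performs no authorization check on who is requesting which record.
    return f"GET /api/applicants/{applicant_id}"

# The researchers' own application got ID 64,185,742, so the "boxes" on
# either side of theirs are trivially computable:
probes = [fetch_applicant(i) for i in neighbor_ids(64_185_742)]
```

That’s the whole attack: no cracking, no guessing passwords, just arithmetic on a number the API already handed you.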
It means they’re just incrementing the ID by one for each record. You could do a little better using a GUID that isn’t sequential, but really you should only allow access to a record if the requester presents a valid credential for it.
In this specific situation it seems that they did have auth, but they left the testing store accessible with the default admin credentials (123456), and that testing admin could then be used to access literally everything else.