

Looking at the API that fetched the candidate information, the researchers noticed an insecure direct object reference (IDOR) weakness: the API exposed an ID parameter that appeared to be a sequential application number. For the researchers’ own application, that ID was 64,185,742.
This is super common: they secure the thing that hands you the endpoint for your record, but not the API that actually serves the records.
It’s kinda like being told “hey, the key to your room is in the box labeled 10,” so you go to that box and grab your key. But then you notice there are boxes to the left and right of box 10, and those boxes contain the keys to other rooms.
No one ever told you that boxes 9 and 11 exist (that’s the modicum of “security” the API provided), but all it takes to find them is knowing that you have a box, and that somebody probably got a box before you and somebody after you.
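In code, “check the neighboring boxes” is about a three-line loop. A minimal sketch in Python, with a made-up endpoint and token (the real API details weren’t published):

```python
import requests

# Hypothetical sketch: endpoint, auth scheme, and field names are assumptions.
BASE_URL = "https://api.example.com/applicants"
MY_ID = 64185742  # the ID the API handed back for your own application

session = requests.Session()
session.headers["Authorization"] = "Bearer <your-own-valid-token>"

# IDs are sequential, so the neighboring "boxes" are just MY_ID +/- 1, 2, 3...
for candidate_id in range(MY_ID - 3, MY_ID + 4):
    resp = session.get(f"{BASE_URL}/{candidate_id}")
    if resp.status_code == 200:
        # A vulnerable API returns someone else's record here, because it only
        # checks that *a* credential is present, not that this credential is
        # allowed to see record `candidate_id`.
        print(candidate_id, resp.json())
```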
All of which means they’re just incrementing the ID by one for each record. A non-sequential GUID would make the neighbors harder to guess, but really the API should only hand over a record when the caller is actually authorized to see that specific one.
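Here’s roughly what that check looks like, sketched with Flask standing in for whatever the vendor actually runs (the handler and helper names are mine):

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-ins for whatever auth and storage layer the real system uses.
def current_user_id() -> int:
    """Resolve the authenticated user from the request's session or token."""
    raise NotImplementedError

def load_applicant(applicant_id: int) -> dict | None:
    """Fetch the applicant record from storage, or None if it doesn't exist."""
    raise NotImplementedError

@app.get("/applicants/<int:applicant_id>")
def get_applicant(applicant_id: int):
    record = load_applicant(applicant_id)
    if record is None:
        abort(404)
    # The actual IDOR fix: authentication proved who the caller is; this
    # check proves the record belongs to them. Without it, any logged-in
    # user can walk sequential IDs and read everyone's records.
    if record["owner_user_id"] != current_user_id():
        abort(404)  # 404 rather than 403, so the ID's existence isn't leaked
    return jsonify(record)
```

The 404-instead-of-403 choice is deliberate: it avoids confirming to someone walking the IDs that a given record exists at all.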
In this specific situation it seems they did have auth, but they left a testing store accessible with a default admin password (123456), and that test admin account could then be used to access literally everything else.
“AI” being shorthand for “Actual Indians” once again.
The companies that don’t realize the trick is to just pretend to be migrating to LLM development, while laying off workers and hiring cheaper labor in other countries, are gonna tank.
It’s so much more costly to run development on a massive super cluster than to just have a room of 10 developers somewhere in Bangladesh bash out working code in a day.
The other thing is that we’re seeing unprecedented levels of slop being pushed out to production codebases. Codebases that are gonna be the backbone of companies for decades to come. One day someone will have to be able to understand that slop and fix it.