- cross-posted to:
- technology@lemmy.world
The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes,” among other factors.
Maybe LLMs should be used cautiously. Maybe companies shouldn’t be directed by executives who have bought into the “AI” hype train without fully understanding it.

That’s the thing: we keep hearing “AI is a tool,” “AI is a tool,” “AI is a tool” in an effort to legitimize and rationalize its use. But in the wild, it’s clearly being used as a human-replacement strategy.
It is. For these companies it’s not a tool, it’s a replacement. I contract for a lot of startups and small-to-medium-sized companies, and they all use AI/LLMs as a go-to end-to-end builder. Not a tool: they straight up use it to build from beginning to end, with prompts supplied by whatever junior dev or college intern they have on staff. None of it is verified once the build is complete; it’s immediately pushed to production. So why don’t they check it? Because most places laid off the people who could check it, or who were halfway decent at code review.
So what’s Amazon’s excuse? Pretty much the same: keep senior staff to a minimum and rely on fresh college grads. They’ve always been like this, so them not verifying anything an AI wrote isn’t surprising. This is the same company that will put new grads on call for a week straight and expect them to solve issues on their own at 3am.
Well. They’re trying to use it as human replacement. They keep finding out that they can’t.
Now they think they can save money by using desperate graduates with little to no experience.