Good bot.
I don't even understand how it was supposed to save time or anything, since it was instructed to ask about every email. Why not just go through your emails in the first place and delete the ones you don't need?
Also, it's a failed company culture if everyone keeps their day busy by just looking through unnecessary emails all day.
https://www.linkedin.com/in/yutingyue
What a fucking idiot. Wharton grad like the president.
LOL
LOL indeed. You'd think this guy would know, but apparently even the world's highest-paid engineers are drinking the slop Kool-Aid.
This is why you don't give AI access to any data that isn't backed up offline, or that you're willing to lose. Really, you shouldn't give AI access to any personal or non-work-related data unless it's local only. Capitalism will use your data to exploit you.
This is why I’ve transitioned to ZFS. So that I can have auto-snapshots and essentially version control my data; free to yolo with self-hosted apps and (local) AI.
LMAO even
My mind is blown every time I read that a major company has unleashed an “AI Agent” on production systems and code. Like, did everyone in the IT industry suddenly grow stupid and forget the most basic rule? Always sandbox and test. You never ever fuck around on production systems!
I guess setting up a test environment would take too much time and these chucklefucks must move faster and break more stuff.
LLM output is not deterministic. WTF good is testing gonna do for you when it could just do something randomly different the next time anyway?
Letting the thing execute commands on its own, without having the human read and confirm them first, is just fundamentally idiotic and insane. No amount of testing can change that!
LLM outputs not deterministic
I think this needs to be called out much more. IT, by its very nature, is meant to consist of repeatable, verifiable processes and outputs. That is how a lot of the trust around the process is built.
Now you’re basically trying to tell people: Trust a system that can only reproduce the same results 98-99% of the time. For some that may be fine, but it’s going to become more of a problem as time goes on.
LLM outputs are 100% deterministic.
If you enter the same prompt with the same seed you will get the same vector outputs.
Chatbots take those vector outputs and treat them as a distribution and select a random token. This isn’t a property of the LLMs, it’s a property of chatbots.
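To illustrate the distinction this comment is drawing, here's a minimal Python sketch. The token probabilities are made up for illustration; in a real LLM they come deterministically from the prompt and the model weights, and the randomness only enters when the chatbot layer samples from them:

```python
import random

# Hypothetical next-token distribution an LLM might output.
# The model itself maps (prompt, weights) -> these probabilities deterministically.
logits = {"delete": 0.55, "archive": 0.30, "keep": 0.15}

def greedy(dist):
    """Deterministic decoding: always pick the highest-probability token."""
    return max(dist, key=dist.get)

def sample(dist, seed=None):
    """Chatbot-style decoding: draw a token from the distribution.
    With a fixed seed this is reproducible; without one it isn't."""
    rng = random.Random(seed)
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

print(greedy(logits))           # -> delete, every single run
print(sample(logits, seed=42))  # same seed -> same token every run
print(sample(logits))           # unseeded -> may differ between runs
```

So "98-99% reproducible" is a property of how the output layer is sampled and served, not of the forward pass itself.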
You know the saying in ICT: everyone has a development environment; a lucky few also have a separate production environment.
I witnessed it firsthand at IBM: three in the morning, troubleshooting a database problem for a big client. The engineer writes up a script to try to solve the issue; I was the systems operator. He tells me to just run it on the mainframe.
“Wait, was this tested at all?”
“Client authorized it, they just want the downtime gone. Send it.”
So I just ran an untested script that fundamentally changed everything on the production database, written by a sleep-deprived engineer who just wanted to go back to sleep. Granted, it worked; that one engineer was an old rockstar who had been with the client for over a decade. But the next three weeks were dedicated to tiptoeing around the changes of that one script and testing everything, in production, to make sure the solution was viable long term and didn't break anything unseen. We all knew better, but everyone agreed and did it anyway.
IT loses the battle when the C suite says “do it now”.
IT Ops Manager here. I was told by the C-suite that I was becoming "difficult to work with" in my attempts to slow and control the constant deployment of AI into every aspect of the business.
Like, did everyone in the IT industry suddenly grow stupid and forget the most basic rule?
Thing is, in any industry, you need a combination of new blood and old wisdom in order to successfully pass the torch to the next generation. Old wisdom is expensive to keep around, but the cheap new blood doesn’t know what they need to in order to succeed.
When you get rid of all your old wisdom and hire all new blood to cut costs, they’re going to come in with a series of footguns that old wisdom knows how to avoid. If you’re lucky, the new blood is going to learn about those footguns primarily by shooting themselves with them and then scrambling to fix the big problem that follows. If you aren’t lucky, said footgun blows the entire leg off your corporation and you implode, do not pass Go, do not collect $200.
All this to say, no, they probably don’t know. A million companies elected to excise all of their knowledge and replace it with fresh-faced, eager, noticeably cheaper juniors.
Now there’s nothing wrong with hiring juniors, but you can’t just put 30 of them in a room and say “alright, monkeys, get to writing Shakespeare” - they lack 30+ years of practical knowledge, and as mentioned, juniors all ship with footguns pre-installed. You need someone who is able to steer the ship properly. A good senior dev is worth his weight in gold. However, most companies don’t want to pay a senior dev his weight in gold. Observe the consequences.
We need a new community for this stuff: c/aiatemyface? Or, c/aibitmyass?
aideservedthis
Damn that’s good
AmAItheAsshole?
Fuck_AI exists and is perfect.
c/aids, AI Deficiency Syndrome.
I mean, this is appropriately leopards ate my face enough to post there.
We’re probably not far out from having enough of these stories to overwhelm the subcommunity.
Edit: fuck, old habits die hard
The real AI singularity is always in the comments.
What about the portmanteau of those, c/aiatemyass
AGI any minute now, we are curing cancer boys.
Other commenters suggested (…) adding a second OpenClaw to monitor the first one.
There was an old lady who swallowed a fly…
Gastown
I would suspect they wanted to delete that information and used the AI as an excuse, same as all the other uses of AI: in healthcare, previously in UI fraud in Michigan under taurus, in the UK with their post offices covering for accounting fraud at regional centers, etc.
Its only real use so far has been doing things they aren't allowed to do, things they can cover up and then blame on the AI after they're caught, so no one gets in trouble.
If you grant an AI delete privileges on your email, it’s going to delete your email. Makes you wonder what the qualifications for “AI Alignment director” are.
Oh no!
Anyways.
The issue with these stories is that AI-brain people don't read them like normal people do. This is a funny story for her, not a total derision of everything she's working on. The inherent lack of safety or control is a feature; "computer man do funny thing" is a selling point.
Haha fuck yeah AGI but it’s an idiot
We already have Natural General Idiocy!
Could not happen to nicer guys :)
deleted by creator