- cross-posted to:
- lobsters@lemmy.bestiver.se
- auai@programming.dev
I’m tired of hearing about vibecoding on Lobsters, so I’ve written up three of my side tasks for coding agents. Talk is cheap; show us the code.
I especially love the third task because that’s exactly the kind of shit you get thrown on your plate in the field as a SWE.
There’s been some work an old member of our team did a year ago. No one remembers what it was, but it is important. Please do something.
That’s almost exactly one of the first tasks I got as an intern when I was starting out. THIS is what they are saying LLMs are going to replace.
It occurs to me that this audience might not immediately understand how hard the chosen tasks are. I was fairly adversarial with my task selection.
Two of them are in RPython, an old dialect of Python 2.7 that chatbots will have trouble emitting because they’re trained on the incompatible Python 3.x lineage. The odd task out asks for the bot to read Raku, which is as tough as its legendary predecessor Perl 5, and to write low-level code that is very prone to crashing. All three tasks must be done relative to a Nix flake, which is easy for folks who are used to it but not typical for bots. The third task is an open-ended optimization problem where a top score will require full-stack knowledge and a strong sense of performance heuristics; I gave two examples of how to do it, but by construction neither example can result in an S-tier score if literally copied.
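To make the RPython trap concrete: RPython is a restricted subset of Python 2.7, so the idiomatic Python 3 that chatbots emit is often a syntax error there, and vice versa. A minimal sketch (runnable under Python 3; the snippet string is a made-up example, not taken from the actual tasks):

```python
# RPython/Python 2 uses the print *statement*; Python 3 removed it.
# Compiling a Python 2 line under a Python 3 interpreter shows the break.
py2_line = 'print "hello"'  # valid Python 2.7 / RPython, invalid Python 3

try:
    compile(py2_line, "<snippet>", "exec")
    print("compiled fine")
except SyntaxError:
    print("Python 2 print statement is a SyntaxError under Python 3")
```

The same divergence runs the other way: f-strings, `print()`-as-function idioms, and Python 3 stdlib names that a model reaches for by default simply don't exist in the 2.7 lineage RPython sits on.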
This test is meant to shame and embarrass those who attempt it. It also happens to be a slice of the stuff that I do in my spare time.
Let’s see if you get any takers.
There’s already a couple of 'em - one of 'em, as expected, is being a sneerable little shit:

I’ve started grading, and his grade is ready to read. I didn’t define an F tier for this task, so he did not place on the tier list. The most dramatic part is the agent overfitting to the task at runtime (that is, “meta in-context learning”): it did quite well on the given benchmark, but at the cost of spectacular failure on anything complex outside that context.
Vibe coding is of course a less than optimal process for the kind of tasks you’ve specified here
Oh no, tasks that have actual concrete outcomes and requirements! Vibe coders’ biggest nemesis!
Here’s something. It doesn’t follow your rules.
Then why did you submit it, dipshit?
Given your tone in these posts it seems unlikely to meet the kind of standards you are looking for.
That “kind of standards” being basic competence.
it was worth trying to start from my phone