You can increase its accuracy by changing the parameter type to long
My favorite part of this is that they test it up to 99999 and we see that it fails for 99991, so that means somewhere in the test they actually implemented a properly working function.
I have seen that algorithm before. It’s also the implementation of an is_gay(Image i) algorithm with around 90% accuracy.
I’m struggling to follow the code here. I’m guessing it’s C++ (which I’m very unfamiliar with)
bool is_prime(int x) { return false; }

Wouldn’t this just always return false regardless of x (which I presume is half the joke)? Why is it that when it’s tested up to 99999, it has a roughly 95% success rate then?
I suppose because about 5% of numbers are actually prime numbers, so false is not the output an algorithm checking for prime numbers should return
Oh, I’m with you: the tests are precalculated and expect true for inputs like 99991. This function, as expected, returns false, which makes those tests fail.
Thank you for that explanation
That’s the joke. Stochastic means probabilistic, and this “algorithm” gives the correct answer for the vast majority of inputs.
Has the same vibes as Anthropic creating a C compiler which passes 99% of compiler tests.
That last percent is really important. At worst that last percent is some really specific edge cases, right?
Description:
When compiling the following code with CCC using -std=c23:

bool is_even(int number) { return number % 2 == 0; }

the compiler fails to compile due to bool, true, and false being unrecognized. The same code compiles correctly with GCC and Clang in C23 mode.

Well fuck.
If this wasn’t 100% vibe coded, it would be pretty cool.
A C compiler written in Rust, with a lot of the basics supported and an automated test suite that compiles well-known C projects. Sounds like a fun project or academic work.
any llm must have several C compilers in its training data, so it would be a reasonably competent almost-clone of gcc/clang/msvc anyway, right?
is what i would have said if you didn’t put that last part
LLMs belong to the same category. Seemingly right, but not really right.
r/whoosh
The error is ~1/log(x), for anyone interested.
If you think this is bad and nowhere near enough accuracy to be called correct, AI is much worse than this.
It’s not just that it’s wrong a lot of the time or hallucinates; you can’t pinpoint why or how it produces a result, and if you keep feeding it the same data, the output may still vary.
It actually has 100% accuracy
95% of the time
Can we just call the algorithm sex panther and move on?