I work in an environment where persuasion and the synthesis of vast amounts of information give a major edge. I see two types of people. There are those who are genuinely good at what they do without the help of LLMs, who can use AI to make their output even better by honing and optimizing their work, and there are those who are absolutely shit without LLMs and who get even worse once they start using them.
Unfortunately the latter group is the vast majority.
The first group already has strong ideas, and the LLM can accelerate and elevate their thinking. They use it as a brainstorming helper. They validate the output. They don’t necessarily work faster.
The second group doesn’t know what to do, so they ask the LLM and trust the output with little to no scrutiny. They use it as a means of production. They deliver fast.
I think we see this pattern in most fields. Software development, for example. A true senior developer might be able to create better output, or even produce things a bit faster. But a bad programmer will still have bad output, and probably exponentially so the more they lean into the tool.
The second group is dangerous. They’re as delusional as the output LLMs tend to generate. They feel empowered, and see the increase in output as a personal victory, as if it unlocked some lingering quality in them that was always there. Qualities that highly capable people had to work for years to attain. Look how productive I am, look at what I did, they’ll think. They create the noise that capable people now have to deal with. It’s all the slop we see, and it’s everywhere.
That’s what I hate about it.
Anyway
This is the kind of stuff that convinces me that Western academia is about to slam into a brick wall and die
critical support
No, learning things in school and doing scientific research is good. Letting that get destroyed by a couple of Silicon Valley oligarchs is bad.
Obviously the Byzantine admission system and absurd tuition fees in the US (and UK) are horrific, but that’s not what’s being destroyed here.
It already has imo
I felt that way when I was 18, when I knew more about certain topics than my professors did because I had the internet. I also remember realising, when I was 10, that education was more about tolerating bureaucracy than actually knowing the material.
Sheesh, the US education system sucks
We’re witnessing the death of academia in real time. Knowledge acquisition will cease and we will descend into a pit of regurgitated slurry until this system collapses
It kinda needs to happen in a lot of ways. I like academia on, like, a conceptual level, but “publish or perish” and the reproducibility crisis are imo signs of a deeply entrenched problem and I am not convinced it can be solved by reform. The breakdown of liberal academia is probably as inevitable and necessary as the breakdown of capitalism and liberalism.
LLMs would just make the reproducibility crisis much worse.
Definitely. I wouldn’t be shocked if they play a role in the failure of liberal academia.
we will descend into a pit of regurgitated slurry until this system collapses.
I guess that’s this century in a nutshell.
mmmmmm regurgitated slurry

spent so much time trying to make the computer learn things they forgot how humans learn things
this is part of “everyone is twelve”. very serious academics going “this is fantastic. I can skip eight weeks of school!”
Cargo cult behavior. Churn out 50 slop papers you maybe skim over and no one else reads or attempts to replicate. Feed the slop back into the slop machine to shit out a thesis. Congrats you’ve got your doctorate without learning anything or generating anything of value!
That’s what really gets me! I see the Grammarly commercials where they say you can just follow the AI to improve/write your papers and get the grade you want. Cool, but have you considered that the grade isn’t the end goal? Like, maybe the assignment was meant to teach you something, and by not learning it you’ve harmed your studies? Maybe getting a lower grade and some feedback would help you more?
there’s a reproducibility crisis in several fields and you don’t get money for publishing negative results
I recently tried using an LLM to find out whether a niche issue in my thesis had already been discussed in the literature. I fed the LLM extremely specific prompts, specific enough, in fact, that it could actually cough up results that looked similar enough to my problem that I initially thought it had found literature on my question. The problem: the literature either did not exist, even though the authors it was attributed to are real contributors to my field, or it does exist but does not contain the answer the LLM gave. I know because I had read literally every paper the LLM spat out that actually exists. These machines are OK at some simple tasks, like giving a general overview of the current literature in a field, but they fail miserably at anything more specific than that.
The way I think about it is: the more frequently the correct answer to a question has been given on the internet, the more reliable an LLM is at giving that correct answer. So it’s pretty reliable on surface-level questions across a vast array of fields. But the more specific and niche you get, and the less explored the topic you’re asking the LLM about is, the more likely it is to just make stuff up.
I always thought about it like Wikipedia. Is it a good overview of the concept?
Usually, yeah.
Is it accurate/reliable enough to quote or use for anything important?
God no
Trust me, it’s like this for every field: geology, programming, history, story writing, philosophy
I have made use of it, and I do use it regularly, but to not acknowledge that it’s fucking shit and should not be put near any serious work without the utmost scrutiny is a joke
And I believe the propagators of AI either lack the skills needed to actually tell how bad it is, or want to believe otherwise because it makes things so much easier for them
I ask it questions about the Godot game engine now and then, and 100% of the time it will make something up that requires me to untangle its response
LLMs are a remarkable improvement on Google’s “I’m feeling lucky” button
I’m coming up with 500 theses every hour and they’re all wrong
Just keep prompting. You’ll get there.
But who’s going to tell me when it’s right? Maybe I’ll have grok check Claude’s work…

The AI Centipede
There is something very funny about sociology research being written by the stolen words of m/billions of people being smashed together. It’s almost avant garde.
Ah yes, why bother learning all that pesky “medical knowledge” when training to become a doctor, when you can just get an AI to do all the work for you! I’m sure this sort of attitude will have no real world repercussions!
Congratulations on spending $200,000 at Harvard and completing your PhD. Unfortunately you learned literally nothing.
At this point, the only way to save higher learning is to go back to exclusively oral teaching.
Turns out Socrates was right all along.
A lot of professors I know are pivoting back to handwritten proctored exams, oral presentations/Q&As, etc., because there’s really no stopping the slop machine. A lot of professors are uncomfortable doing something like reporting tons of students for cheating, since you can’t prove it easily, so that’s their alternative.
Except one CS professor I know who failed 30% of his class on an exam, reported them all to student conduct, and sent the rest of the class a warning lol. He ain’t having it.
The uni I attended is (depressingly) embracing LLMs, and even they didn’t stop in-person exams…
To some extent, you have to embrace it. Students are going to use it anyway and the institution isn’t going to let you fail 50% of your class every semester. There are good ways and bad ways to do it though and some professors are assigning things that try to get people to reflect on their AI usage, like asking multiple LLMs a question and comparing/contrasting their answers to pick them apart. It’s really wreaking havoc on online courses in particular though, which is unfortunate because although I have my criticisms of them, they’re a big boon to working adults who want to further their education or change careers.
Designing tests where the LLM will always get it wrong would be a good lesson about not trusting these things
Is this testing whether I’m a replicant or a lesbian?
perhaps both
I just had a horrible thought. Soon there could be…
SocratesAI • Listen your way to knowledge!™
AI Tools Built for Teaching And Learning. Socrat Helps Teachers and Students Use AI Effectively.

Socratestees
So-crat
So-crates
Please step away from the lathe, I beg you

SocrAItes
Ack!
we already have audiobooks
No prompts.
Looking forward to the coming retraction because it turns out your interview coding was nondeterministic and your results are not reproducible.
…somebody’s out there trying to see if research is reproducible, right?

…papers will get pulled from LLM training sets when they get retracted, right?

…there isn’t a massive number of social sciences papers already published that are basically useless because their results aren’t meaningful outside of a narrow set of subjectively specified predictor variables, right?
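For what it’s worth, the nondeterminism worry above is checkable: if you’re going to use an LLM as an “interview coder,” the boring first step is to run the same prompt over the same transcripts twice and compute an inter-rater agreement statistic between the two runs. A minimal sketch (the labels below are entirely made up for illustration) using Cohen’s kappa:

```python
# Hypothetical sanity check: treat two runs of the same "LLM coder" as two
# raters and measure their agreement with Cohen's kappa.

def cohens_kappa(a, b):
    """Cohen's kappa between two equal-length label sequences."""
    assert len(a) == len(b) and a, "need two equal-length, non-empty runs"
    n = len(a)
    # Observed agreement: fraction of items both runs labelled identically.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement, from each run's marginal label frequencies.
    cats = set(a) | set(b)
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return 1.0 if pe == 1.0 else (po - pe) / (1 - pe)

# Made-up codes for the same 4 interview snippets on two separate runs.
run1 = ["trust", "distrust", "trust", "neutral"]
run2 = ["trust", "neutral", "trust", "neutral"]

print(cohens_kappa(run1, run2))  # 0.6 here; 1.0 would be perfect agreement
```

A kappa well below 1.0 across runs means the “coder” is nondeterministic, and anything built on top of those codes isn’t reproducible.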

Also holy hell, is this what a vibe-coded website looks like? https://www.shrutimishra.co/
Hey Claude, make me a terrible website
Try Claude; it’s like turning on turbo mode! Tragically and mysteriously, only for people who are totally blind to the quality of what Claude produces
i think this is probably the best bit possible when scrolling
EAT DA TEXT
The “overlap sorts itself out eventually” school of web design
«taps the sign» “If the program they’re selling to make tons of money actually made tons of money, they’d be using it to make tons of money instead of selling it.”
“I’m selling the secret of asking a computer to make the thing for me in five minutes for $2000. Yes this is a sustainable business model.”
Love the alignment of this list & the gradual decrescendo of the font size

Claude, make the font a neo-update of Comic Sans.
The font is definitely my favorite thing.
I wonder what the actual prompt was. It would be great if “reverse engineering” AI stuff was possible.
somebody’s out there trying to see if research is reproducible, right?
Claude says it looks reproducible. Claude, write a paper confirming…
I don’t give a shit if it’s qualitative. If it’s data you need directly recorded, please don’t use the hallucination chat service.
Tech bros (and all those who repeat their talking points) are dangerous people and should be treated as such
Sociology students and cheating
Fork found in kitchen
Butlerian Jihad against the thinking machines and their pathetic acolytes.
Agreed except that this implies LLMs can actually think which is ceding too much ground.
I think in Dune’s Butlerian Jihad they considered anything that “thought” on the level of an electronic calculator a thinking machine. An abacus might be alright, but we have Mentats for that.
Dune’s Butlerian Jihad
I’ve seen “Butlerian jihad” used so many times on this site, and never knew it was a Dune reference. I always thought it was some inside joke I didn’t get which referenced feminist theorist Judith Butler, in the sense of “we need the Holy War for feminism”
I bet that was a little confusing in some contexts! But yeah, that’s how Dune manages to be set far in our future and yet computers don’t exist. They apparently used to, and then all of humanity decided that was Very Bad and destroyed them all, in a conflict called the Butlerian Jihad (I don’t think we ever learn where the name comes from). And now Mentats do the work that computers used to
It’s named after Serena Butler, the woman who started it. This is extended Duniverse though, not in Frank’s books
Gotcha! I’ve only ever read the original 5(?) books.
6 original books.
Dune, Messiah, Children, God Emperor, Heretics and Chapterhouse

The 9 prompts are just 9 videos of me loudly farting into a jar.
Sorry.
Honestly less vulgar than what actually happened

Well that’s unfortunate lmao
