Grading is easily solved
There are reasonable solutions to this, as mentioned in the article:
“Professors said they resorted to oral interrogations, handwritten notebooks and class participation for grading purposes. Some require students to submit transparency statements describing their work process. Others have reportedly injected random words like “broccoli” and “Dua Lipa” into assignments to confuse learning models – exposing students who did not even read the prompts before pasting them into AI.”
The grading is quite solvable. Even for essay writing. You can put them in a room with disconnected PCs. Make the room a Faraday cage, if needed. Submitting papers that were supposedly “written” at home is a convenience of the past – something that must be dispensed with, at least as far as grading goes.
Grading could be scrapped altogether, anyway. Why grade? The only good reason for a school to grade someone is so that students know for themselves how they are doing. An A, B, C, etc. is not great for that. Students should get granular feedback on every answer they gave. The student knows for themselves whether or not they cheated.
Grades that are used for external purposes, like showing your GPA to a prospective employer – that’s a misuse of grading. Employers are increasingly foolish to trust grades as an indicator of fitness for the job. Employers should do their own appraisal. That’s their job, not the school’s job. Some profs were very outspoken about this long before AI hit hard. Now their stance has become even more important.
So grading is not a problem for the school. Appraisal is the employer’s problem.
Grants and scholarships need to get creative
The only real problem for grading is where it’s used in a competition for grants and scholarships. The competition can no longer be based on essay submissions from the outside. It has to be done in a controlled environment where applicants appear in person and do something oral or write on a spontaneous topic under observation.
I agree with the core of the text; the problem would exist even if you relied on another person to think for you, because you are not thinking for yourself. That’s kind of obvious.
But there’s a second potential problem the article doesn’t address, and I think it’s considerably more pressing: those large models lower your standards on what’s rational enough to be acceptable.
About half a year ago, I saw a comment on Hacker News highlighting that large models don’t do maths correctly; the commenter asked several models to multiply two large natural numbers, and all the answers were incorrect. A lot of people replied to that comment with variations of, TL;DR, “why are you asking ChatGPT to multiply? lol, grab a calculator, it doesn’t do maths”, missing the point completely. (C’mon, it’s HN, you know.)
I repeated this test and shared the results here in the Fediverse. Here they are:



All models output wrong answers, just like in the HN test. And yet at least one other user defended the models with bullshit like “it’s close to the real result! This shows the model is smart!”.
But wait a minute. Multiplication is a deterministic procedure, right? Start with exact input, follow the steps of the procedure correctly, and you’ll get exact and correct output, every single time, no matter whether the factors have 2 or 200 digits. This means multiplication is also a damn good test for the ability to follow logical reasoning. (Or to output something that humans would interpret as such.)
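To make the determinism point concrete: on a machine, exact big-integer multiplication is trivial to get right and trivial to verify. A minimal sketch in Python, whose built-in int type has arbitrary precision (the specific factors here are just example numbers I picked, not the ones from the HN test):

```python
# Python's built-in int is arbitrary precision, so multiplying two
# large natural numbers is exact -- no rounding, no approximation.
a = 987_654_321_987_654_321   # arbitrary 18-digit example factor
b = 123_456_789_123_456_789   # arbitrary 18-digit example factor

product = a * b

# A correct multiplication must satisfy these checks exactly.
# An answer that is merely "close to the real result" fails them.
assert product % b == 0 and product // b == a
assert product % a == 0 and product // a == b

print(len(str(product)), "digits, exact")
```

Any output that fails these divisibility checks is simply wrong, regardless of how many leading digits it gets right.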
And yet, I saw two instances of people giving that incorrect output a pass. They didn’t defend it on grounds like “those models don’t think” (true); no, they did it because the reasoning in the output is “good enough”. Even though a 10-year-old is expected to show better reasoning than that.
And it isn’t just multiplication. This lack of reasoning is evident in everything you ask of a bot. Or in the fact that they can’t understand a negation (and oopsie, the “agent” suddenly deleted your files). But you’re supposed to give it an OK sign to be an irrational agent. And in the end you give yourself a free pass to be irrational as well.
[Worth noting that those examples are anecdotal, though, so you do need to take my conclusion with a grain of salt. I don’t think the conclusion is incorrect, but if anyone has literature on the topic I’d love to see it.]

Saying “professors scramble to save critical thinking in an age of AI” when Critical Thinking is not, at fucking all, taught in American schools is disconcerting.
“Critical Thinking is not, at fucking all, taught in American schools”
Professors don’t teach at secondary schools.
They are speaking at the higher-education level. But I agree with the sentiment that it should be taught throughout K-12.
Oh there was a plan to introduce it in high school and conservatives threw a fit because it would turn their kids against their shitty beliefs. It was scrapped and that was the last I heard about it.
Critical thinking should be taught in kindergarten and by second grade children should be able to use evidence to generate and defend basic beliefs.
As someone collaborating with faculty to produce higher education courses, even the professors are falling to the temptation of using AI to develop their curriculum and learning materials. And the results are across the spectrum from carefully considered to slop.
“the professors are falling to the temptation of using AI to develop their curriculum and learning materials”
Yes, shit profs are putting slop in course materials, and they should be fired for doing so under basic rules of academic integrity.
AI is a pretty great assistant if you already know how to do the thing it’s helping you with.
But it’s also great at looking like it’s coherent from a quick glance while being full of nonsense.
It feels unfair that they can churn out in a couple of minutes what would take several hours to review…
But then, if you consider how expensive it is to humanity at large, it’s not cheap at all…
I think the punishment for submitting slop should be greater than for plagiarizing something outright.
———————————-this comment was generated by a human
Critical thinking was a problem long before AI. How do you think Trump got elected?
Was anybody doing any of that anyway?