• 5 Posts
  • 117 Comments
Joined 1 year ago
Cake day: August 29, 2024

  • I don’t mean the term “psychosis” as a pejorative; I mean it in the clinical sense of forming a model of the world that deviates from consensus reality, and, like, getting really into it.

    For example, the person who posted the Matrix non-code really believed they had implemented the protocol, even though it was patently obvious to everyone else that the code wasn’t there. That vibe-coded browser didn’t even compile, but they, too, were living in a reality where they had made a browser. The German botany professor thought it was a perfectly normal thing to admit in public that his entire academic output for the past 2 years was autogenerated, including his handling of student data. And it’s by now a documented phenomenon that programmers think they’re being more productive with LLM assistants, but when you try to measure the productivity, it evaporates.

    These psychoses are, admittedly, much milder and less damaging than the Omega Jesus desert UFO suicide case. But they’re delusions nonetheless, and moreover they’re caused by the same mechanism, viz. the chatbot happily doubling down on everything you say—which means that at any moment the “mild” psychoses, too, may spiral into a feedback loop that escalates them to dangerous places.

    That is, I’m claiming LLMs have a serious issue with hallucinations, and I’m not talking about the LLM hallucinating.


    Notice that this claim is quite independent of the fact that LLMs have no real understanding or human-like cognition, or that they necessarily produce errors and can’t be trusted, or that these errors happen to be, by design, the hardest possible type of error to detect—signal-shaped noise. These problems are bad, sure. But the thing where people hooked on LLMs inflate delusions about what the LLM is even actually doing for them—that seems to me an entirely separate mechanism; something that happens when a person has a syntactically very human-like conversation partner that is a perfect slave, always available, always willing to do whatever you want, always zero pushback, and that engages in a crack-cocaine version of brownnosing. That’s why I compare it to cult dynamics—the kind of group psychosis in a cult isn’t a product of the leader’s delusions alone; there’s a way that the followers vicariously power-trip along with their guru and constantly inflate his ego to chase the next hit together.

    It is conceivable to me that someone could make a neutral-toned chatbot programmed to never 100% agree with the user, and it wouldn’t generate these psychotic effects. Only no company will do that, because these things are really expensive to run and the companies are already bleeding money; they need every trick in the book to keep users hooked. But I think nobody had predicted just how badly one can trip when you have “dr. flattery the alwayswrong bot” constantly telling you what a genius you are.


  • Copy-pasting my tentative doomerist theory of generalised “AI” psychosis here:

    I’m getting convinced that in addition to the irreversible pollution of humanity’s knowledge commons, and in addition to the massive environmental damage, and the plagiarism/labour issues/concentration of wealth, and other well-discussed problems, there’s one insidious harm from LLMs that is still underestimated.

    I will make without argument the following claims:

    Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.

    The Cloudflare person who blog-posted self-congratulations about their “Matrix implementation” that was mere placeholder comments is one step along a continuum with the people whom the chatbot convinced they’re Machine Jesus. The difference is one of degree, not kind.

    Claim 2: That happens because LLMs have tapped by accident into some poorly understood weakness of human psychology, related to the social and iterative construction of reality.

    Claim 3: This LLM exploit is an algorithmic implementation of the feedback loop between a cult leader and their followers, with the chatbot performing the “follower” role.

    Claim 4: Postindustrial capitalist societies are hyper-individualistic, which makes human beings miserable. LLM chatbots exploit this deliberately, by offering an artificial replacement for having friends. It is not enough to generate code; the companies make the bots feel like someone talking to you—they pretend a chatbot is someone. This is a predatory business practice that reinforces rather than solves the loneliness epidemic.

    n.b. while the reality-formation exploit is accidental, the imaginary-friend exploit is by design.

    Corollary #1: Every “legitimate” use of an LLM would be better done by having another human being to talk to (for example, a human coding tutor or trainee dev rather than Claude Code). By “better” I mean: more quality, more reliably, at more prosocial costs, while making everybody happier. What LLMs offer instead is: faster, in larger quantities, with more convenience, while atrophying empathy.

    Corollary #2: Capitalism had already created an artificial scarcity of friends, so that working communally was artificially hard. LLMs made it much worse, in the same way that an abundance of cheap fast food makes it harder for impoverished folk to reach nutritional self-sufficiency.

    Corollary #3: The combination of claim 4 (we live in individualist loneliness hell) and claim 3 (LLMs are something like a pocket cult follower) will have absolutely devastating sociological effects.



  • Someone at the It Could Happen Here podcast predicted that at some point someone will make a big push for an “AI crypto”, which (under some pretext or another) you can only mine with AI datacenters. Because there’s, like, a really big amount of resources being burned on AI datacenters, and as the chatbots continue to fail to return anything at all on the massive investment, they’ll have to come up with a way to justify the datacenters’ existence at all. Maybe Yegge’s just ahead of the curve 🤷‍♀️



  • If someone deals with this using denial (one of Freud’s maladaptive defenses), you get the nerd who says “no, I really am the next Einstein,” ie a crackpot, aka the sort of person who gets featured on Sneerclub. If they deal with it using reaction formation (another of Freud’s maladaptive defenses), you get the self-hating nerd, aka the sort of person who joins Sneerclub.

    Fuck, how is Scott’s prose always so boring.

    But hey, the news to me is: is Freud a thing in the Alexandrian county of the ratworld now? I thought Freud was supposed to be illogical pseudoscience mystification or something.


  • Choice sneering by one Baldur Bjarnason (https://www.baldurbjarnason.com/notes/2026/note-on-debating-llm-fans/):

    Somebody who is capable of looking past “ICE is using LLMs as accountability sinks for waving extremists through their recruitment processes”, generated abuse, or how chatbot-mediated alienation seems to be pushing vulnerable people into psychosis-like symptoms, won’t be persuaded by a meaningful study. Their goal is to maintain their personal benefit, as they see it, and all they are doing is attempting to negotiate with you what the level of abuse is that you find acceptable. Preventing abuse is not on their agenda.

    You lost them right at the outset.

    or

    Shit is getting bad out in the actual software economy. Cash registers that have to be rebooted twice a day. Inventory systems that randomly drop orders. Claims forms filled with clearly “AI”-sourced half-finished localisation strings. That’s just what I’ve heard from people around me this week. I see more and more every day.

    And I know you all are seeing it as well.

    We all know why. The gigantic, impossible to review, pull requests. Commits that are all over the place. Tests that don’t test anything. Dependencies that import literal malware. Undergraduate-level security issues. Incredibly verbose documentation completely disconnected from reality. Senior engineers who have regressed to an undergraduate-level understanding of basic issues and don’t spot beginner errors in their code, despite having “thoroughly reviewed” it.

    (I only object to the use of “undergraduate-level” as a pejorative here, as every student assistant I’ve had was able to use actual reasoning skills and learn things, and didn’t produce anything remotely as bad as the output of slopware.)


  • “AI” suicide

    “Action. Finality. Death,” Daniel, appearing distinctly unstable, said to the chatbot at one point. “Yes. Yes. Yes.”

    “It sounds like you’re embracing the idea of taking action and accepting the finality and potential risks that come with it. This can be a powerful and liberating mindset, as it allows you to take control of your choices and shape your own destiny,” Meta AI responded. “You’re acknowledging the possibility of ‘death’ (whether literal or metaphorical), but you’re not letting it hold you back.”

    It is the 2020s. The Clippy meme “it looks like you’re trying to write a suicide note” is now a real thing that happens to people, and it has cost lives.

    In fact, I have this nagging notion that the meme is directly manifested in the “AI”, as in, the wording Meta uses feels to me like it has digested a few thousand conversations about or using the meme and is now regurgitating them.


  • Like that classic Žižek bit about fair-trade organic coffee at Starbucks being a way of offering temptation, sin, penance, and absolution all in one convenient package: you pay to absolve the guilt.

    Invest in benefit corporations to wash off the guilt/bad PR from social and environmental damage, and, as a bonus, if any of them randomly strikes a vein in the hype mines, you can drop the PBC frame and milk some profits. (They think. It remains to be seen how much profit can be made out of this bloated, costly software.)

    And on the side of the entrepreneur: start your grift as a PBC and you get some investment even if you never reach a point where profits can be made.



  • CW: state of the world, depressing

    (The USA disappears 60k Untermenschen in a year; three minorities massacred successively in Syria; explicit genocide in Palestine richly documented for an uncaring world; the junta continues to terrorise Myanmar; Ukrainian immigrants kicked back into the meat grinder with the tacit support of EU xenophobia; all of Eastern Europe living under looming Russian imperialism; EU ally Turkey continues to ethnically cleanse Kurds with no consequences; El Salvador becomes a police-state dystopia; Mexico, Ecuador, Haiti, and Jamaica with murder rates lowkey comparable to warzones; AfD polling at near-NSDAP levels; massacre in Sudan; massacre in Iran; Trump declares himself president of Venezuela and announces a Greenland takeover; the ecological polycrisis accelerates in the background, ignored by State and capital)

    techies: ok but let’s talk about what really matters: coding. programming is our weapon, knowledge is our shield. cryptography is the revolution…