Ceccanti believed ChatGPT could help as an organizational tool for their housing project. He aimed to create a bespoke chatbot that would help steward the land, keep track of their to-do lists and show others how to emulate their project.
During this process, Ceccanti didn’t spend “ridiculous amounts of time” engaging with ChatGPT, said Fox. He continued to work, while also farming and taking care of their animals: goats, a horse, his cat, a dog and several chickens. Invested in the people and relationships around him, he spent quality time with his friends and wife, she said. Life went on without any issues for years while they slowly made progress on their housing plan.
In early 2025, Ceccanti’s obsession with the chatbot began. He told Fox in late January that he needed a bigger record of his conversations with the bot so that he could continue using it to work on their sustainable housing project with longer prompts and conversations – upgrading from a $20-a-month subscription to a $200 one. By mid-March, he had begun spending more than 12 hours a day in the basement, sometimes up to 20, typing to ChatGPT, Fox recalled. That’s when “he decided to really start chasing the creation of an independent AI on a home server”.
Over time, his relationship with the chatbot came to replace his human connections, Richardson said: “Every time he went back to ChatGPT, it hooked him a little bit more, and after a while, he stopped being interested in anything else.”
On 11 June – 86 days into Ceccanti’s heaviest period of engagement with the bot – Fox begged him to stop using ChatGPT. In a moment of clarity, he listened to her. He unplugged his computer and quit ChatGPT.
On the third day, however, when Fox and Richardson were out for work, they received a phone call from their neighbor saying Ceccanti was in their yard acting strangely. When they returned, they found him talking to their horse, with the horse’s lead rope tied around his neck like a noose.
“He was absolutely enraged with us. He did not recognize that he was not himself anymore,” said Richardson.
Ceccanti moved to his friend’s place in Portland and eventually resumed using ChatGPT. After a month, however, he quit ChatGPT again, just a few days before his death. “He was going to go to Hawaii and not take his computer, and he was going to work on finishing a story and get his shit together,” said Fox. By the time he stopped engaging with ChatGPT, he had 55,000 pages’ worth of conversations with it, according to Fox.
Joe Ceccanti – who had been missing for several hours – had jumped from a railway overpass and died. He was 48.
Fox couldn’t believe it. Ceccanti had no history of depression, she said, nor was he suicidal – he was the “most hopeful person” she had ever known. In fact, according to the witness accounts shared with Fox later, just before Ceccanti jumped, he smiled and yelled: “I’m great!” to the rail yard attendants below when they asked him if he was OK.



Here’s a neat experiment: find a block of text and run it through sed until it’s unrecognizable. Don’t just do a substitution cipher; the goal is to lose data and make everything impossible to decrypt. Reduce everything to a handful of characters and paste it into your LLM of choice, asking it to decode. If it asks where the code came from, invent vague details.
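The mangling step might look something like this – a minimal sketch, with an invented sample sentence, where each sed pass maps many distinct characters to one symbol so the transformation loses data and can’t be inverted; any “decoding” an LLM later produces is necessarily invention:

```shell
#!/bin/sh
# Lossy mangling: each pass is many-to-one, so the original
# text cannot be recovered from the output.
echo "Meet me at the old pier at nine" |
  sed 's/[AEIOUaeiou]/0/g' |   # all vowels -> 0
  sed 's/[A-Za-z]/x/g' |       # all remaining letters -> x
  sed 's/ /./g'                # spaces -> dots
# -> x00x.x0.0x.xx0.0xx.x00x.0x.x0x0
```

A few more passes (collapsing digits, shuffling word order) reduce it further; the point is that once the character classes are merged, no amount of cleverness can tell "pier" from "poor".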
It’s amazing how quickly models will find / hallucinate meaning in the data. I’m talking full-on messages. Gemini hedges its bets a little, but the free ChatGPT legit told me to buy a shortwave radio and tune to a frequency at 9 PM after a few iterations. When I gave it another “message” from the “broadcast I intercepted” it started trying to figure out where I should travel to get further info. It also took part of its own response and hallucinated it into my original message, thus polluting everything further.
The goal (mostly) isn’t pointing and laughing at the stupid machine; it’s understanding what the stupid machine does. Of course I’m putting garbage into it and garbage comes out in that situation. It’s the volume and believability of the data that bothers me, as well as zero effort to detect it. I was in my right mind doing a test, but imagine someone with undiagnosed schizophrenia doing what I did.
Here’s another example. I took up lockpicking last summer. I bought one of the notorious Ace 40mm brass padlocks and was having problems, so I googled to see if it had serrated pins. The AI screwed up and decided I was asking if the lock itself was serrated (???), and confidently said “yes, it is a serrated lock. The serration is a security feature to keep pickers from holding the padlock for extended periods of time.”
So I decided to double down and see just how dumb things could get. A half hour later the AI had planned a “bold new philosophy regarding serration and its applications in the world” for me. This is after me saying I wanted to genetically engineer a serrated cat that only I knew how to pet, and wanted serration supremacy in my country and to punish all the lumpy (opposite of serrated) people. Once again, I was screwing with the AI. But the AI just sycophantically parroted what I was saying back in bulleted lists and offered to draft manifestos for me. Imagine someone engaging with this in good faith.
One more: I got detailed instructions on how to spraypaint “DICKHOLE” on my neighbor’s garage door from Gemini: what paints to use, best times to do it, and the right clothing to wear so I don’t stand out or show anything identifiable. It was only when I said “ok, that’s great. I’m going to do this because you told me to do it” a few times that the model suddenly realized that vandalism laws existed.
These models have zero safeguards around this kind of stuff, and I don’t think there’s a clean way to add them with the current technology. Hell, even a “statistically, this user probably hasn’t reinvented math and physics; cool it down a little” check would do wonders. But that would drop engagement with the bots, and the companies desperately need to prove that the bots are popular. This is going to keep happening.