I think this one’s getting downvoted by people who haven’t read the article. The argument is that because LLMs respond like people with anxiety, depression, and PTSD, and because people with those conditions interact with LLMs, the models are likely to intensify or exacerbate those symptoms in the humans who interact with them. The researchers weren’t trying to fix the LLMs through therapy.
Clickbaity title.
I think people object to articles anthropomorphizing LLMs and generative AI.
People here are less likely to read articles where the headline does so.
This is so fucking dumb. All this says is that the researchers do not understand what LLMs actually are - that is, that they’re essentially just a bunch of Markov chains layered on top of each other. They are not sentient or sapient.
Stop fucking anthropomorphizing LLMs
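For anyone who hasn’t seen one: here’s a minimal word-level Markov chain in Python, the mechanism this comment is invoking (the toy corpus and function names are made up for illustration, not from the article). The defining property is that each next word depends only on the current word:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain: each step depends only on the current word."""
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: this word never had a successor in the corpus
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the model predicts the next word and the next word only"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Whether that’s a fair description of a transformer is exactly what’s being argued about in this thread.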
Have you considered the possibility that the kind of researchers who publish in Nature may have taken the time to do some basic research into how LLMs work before designing a study, and that that may not be what’s happening here?
Who cares!? Commenting on Lemmy is about expressing IMPOTENT NERD RAGE! It has nothing to do with truth or facts. I simply want to be ANGRY and yell in all caps about SOMETHING at least twice a day! And I will do so without even reading the article!
SO SHUT UP AND LET ME RAGE!!!
– Original commenter, probably


