

Honest question, since I’m not on linkedin (and kinda looking for a new job): does it really help anyone find a job? It has been my impression from the outside that it’s mostly empty drivel.


I’m confused that anyone thinks that the world needs another linkedin…


Wow. The mental contortion required to come up with that idea is too much for me to think of a sneer.


When all the worst things come together: ransomware probably vibe-coded, discards private key, data never recoverable
During execution, the malware regenerates a new RSA key pair locally, uses the newly generated key material for encryption, and then discards the private key.
Halcyon assesses with moderate confidence that the developers may have used AI-assisted tooling, which could have contributed to this implementation error.
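For the curious, the flaw the report describes boils down to something like this sketch (illustrative Python, names and details are my own assumptions, obviously not the actual malware code): a fresh key pair is generated on the victim's machine, the data gets encrypted with the public half, and the private half is never stored or sent anywhere.

```python
# Illustrative sketch only, based on the report's description; not real malware code.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def broken_encrypt(secret: bytes) -> bytes:
    # A brand-new RSA key pair is generated locally on the victim's machine.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # The public half encrypts the data (in practice this would wrap a
    # symmetric file-encryption key, since RSA-OAEP only fits small payloads).
    ciphertext = public_key.encrypt(
        secret,
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA256()),
            algorithm=hashes.SHA256(),
            label=None,
        ),
    )

    # The private half is never written to disk or sent to the attacker, so
    # once this function returns it is gone, and with it any way to decrypt.
    return ciphertext
```

Working ransomware encrypts with a public key whose private half only the attacker holds; throw the private key away and nobody, extortionist included, can ever decrypt.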


Claim 1: Every regular LLM user is undergoing “AI psychosis”. Every single one of them, no exceptions.
I wouldn’t go as far as using the “AI psychosis” term here; I think the difference is more than just quantitative. One thing is influence, maybe even manipulation, but the other is a serious mental health condition.
I think that regular interaction with a chatbot will influence a person, just like regular interaction with an actual person does. I don’t believe that’s a weakness of human psychology; rather, it’s what allows us to build understanding between people. But LLMs are not people, so whatever this does to the brain long term, I’m sure it’s not good.
Time for me to be a total dork and cite an anime quote on human interaction: “I create them as they create me” – except that with LLMs, it only actually goes in one direction… the other direction is controlled by the makers of the chatbots. And they have a bunch of dials to adjust the output style at any time, which is an unsettling prospect.
while atrophying empathy
This possibility is to me actually the scariest part of your post.


Hope you have a speedy recovery! It sucks that society has collectively given up on trying to mitigate its spread.


Are you trying to say that you are not regularly thinking about the meta level of evidence convergence procedures?


The AI craze might end up killing graphics card makers:
Zotac SK’s message: “(this) current situation threatens the very existence of (add-in-board partners) AIBs and distributors.”
The current situation is so serious that it is worrisome for the future existence of graphics card manufacturers and distributors. They announced that memory supply will not be sufficient and that GPU supply will also be reduced.
Curiously, Zotac Korea has included lowly GeForce RTX 5060 SKUs in its short list of upcoming “staggering” price increases.
I wonder if the AI companies realize how many people will be really pissed off at them when so many tech-related things become expensive or even unavailable, and everyone will know that it’s only because of useless AI data centers?


Bitcoin jesus
I thought you were making a sneer, but then it’s an actual name


If you watch the video, your brain will leak out your ears. But we watched it so you don’t have to.
Thank you for the invaluable service.


A while ago I wanted to make a doctor’s appointment, so I called the practice and was greeted by a voice announcing itself as “Aaron”, an AI assistant, telling me to say what I wanted. Oh, and it mentioned some URL for their privacy policy. I didn’t say a word, hung up, and called a different doctor, where luckily I was greeted by a human.
I’m a bit horrified that this might spread, and that in the future I’d have to tell my medical details to an LLM to get an appointment at all.


please hate this article headline with me
I’m right there with you


Nice rant about this in the OSNews entry… love the phrase “MLMs for unimpressive white males”.


I really enjoy the bingo card. Let’s see when I can find an opportunity to use it…


“K9: scom”, just need to figure out what scom is.
But my favorite part is in the upper left: the brain that just says “Autism”, with “Autism” around it.


Next step: “I’m sick of having to press the ‘new prompt’ button, why doesn’t …”


I was only describing a specific kind of person, certainly wasn’t trying to imply that C/C++ devs generally have no care for security!


Also I wonder what his beef with Rust is? Is Rust woke?
I’ve seen this before. There is a special type of person out there who feels emasculated (yeah, it’s always men, isn’t it) by the idea of a language statically enforcing memory safety. Because, you know, real men write C or C++ with no safety rope and no seatbelt, juggling raw pointers and chainsaws with their bare hands. They think that the only reason C/C++ has produced an endless supply of bugs and security holes is that other programmers just suck. But they are different, they can handle it, they are very clever after all. And they won’t let some wusses take away their powers with all these ideas of memory safety by design.


I keep wondering: is this really a need which many people fundamentally have, or is AI usage doing something to their brains?
Thanks everyone for the replies <3 Guess I should make an account there after all… bleeeh :/