Following the proliferation of nonconsensual sexual deepfakes on X, the platform has detailed changes to the Grok account’s ability to edit images of real people. The changes match those reported on Tuesday by The Telegraph, after Grok’s responses to prompts like “put her in a bikini” began to be censored.
But in tests of the feature on Wednesday, we found that it was still relatively easy to get Grok to generate revealing deepfakes, while X and xAI owner Elon Musk blamed the problems on “user requests” and “times when adversarial hacking of Grok prompts does something unexpected.” As of Wednesday evening, despite the policy’s claims, our reporters were still able to use the Grok app to generate revealing images of a person in a bikini using a free account.
Hey guys! It’s not our fault our AI is making deep fake nudes of real people, it’s our registered paying users asking our AI to make deep fake nudes who are at fault!
Pornography doesn’t pervert perverts, perverts pervert pornography.
How are these people not being prosecuted? Or am I missing something here?
Somehow I think they don’t actually want to solve this problem, so are doing the bare minimum to make it look like they’re trying.
Meanwhile ChatGPT won’t even say Trump is bad.
The question is, who is asking it to do this shit? It’s not like it has free will.
Where did you think the people from 4chan went after it folded?
They were always around and still are.
Terrible people, which is why it should not have the ability to do this no matter what prompts they use.
And Telegraph reporters, apparently.
The Telegraph is extremely right wing, so that would reduce to just terrible people.
Just the types of people who are still on Twitter. You know, absolute degenerates.
Twitter ghouls.