They are keeping your attention off of Epstein.
Keep playing his game, he’ll keep evading consequences
Musk, and his company are doing it. Grok is a machine.
Machines can’t be held responsible, Musk can (If anybody had some balls)
Late on Saturday night, Solomon had an addendum to his condemnation of the sexual violence of the app: “Contrary to media reports, Canada is not considering a ban of X.”
This government is so useless 🤦♂️
I may have to create a Xitter account just to post porn of Solomon, sounds like they’d approve.
I support this initiative. It might even motivate him to do something besides sitting on his hands.
We should have put a ban on X the minute that moron bought it.
What qualifications does Solomon have? Is he not Carney’s secret arts dealer?
The American government is enabling him. Fraternity of male billionaires.
Banning CHILD PORN CREATION TOOLS will be seen as an Act Of WAR against the United States (HOME of Pizzagate!)!
You want the women and children abusers to take action against one of their own to regulate a tool used for abusing women and children?
“Grok” isn’t doing shit, it’s a computer program.
People are using Grok for fucked up shit.
But yes, it needs to be reeled in or banned.
This has gun control issues vibes.
And we have regulations on guns.
We have zero regulations on AI. We need some.
There are plenty of other models out there that prevent you from creating sexually explicit content, idk of any guns that are made not to shoot people. To be clear I also think grok should be regulated.
Hot glue guns.
Canada? Your government can’t do much but block Twitter and its related apps, and maybe that’s what Carney should do. I’ve never heard of him being associated with any of the bad stuff your neighbours down south are in their government.
I love what I’ve seen from Canada in response to the madness in the US, so keep being awesome in that way. Encourage your neighbours to use Bluesky or Mastodon in place of Twitter, and if they have to use an AI chatbot, the one on DuckDuckGo (duck.ai) is supposed to be private, as is the one on Proton (Luna or Luma or something like that). I personally don’t use them, but they are useful in search results with finding stuff from time to time.
Why is it not abusing men?
I have a question… does grok not do deelfakes of men as well? Or is that not an issue for some reason. Haven’t been paying attention, just seen headlines.
No, this whole story is a mix of hyperbole and not understanding AI.
Imagine there was a shitty clothing company that produced a lightweight fabric. Due to piss-poor testing and not actually giving a shit about their customers, it turns out that after a few times in the wash, the clothing essentially disintegrates as you wear it.
The story breaks, but for some reason, all the headlines all say “Company sells clothing which falls off when little kids wear them, leaving them naked!” While technically true, anyone who spends more than 5 seconds looking into it will recognize how much that headline twists the actual situation. But they print it anyways because people already hate the company and are eager to accept anything negative at face value. Plus, accusing a person or group of child exploitation is a time-honored strategy of criticism because not many people will push back against it as they don’t want to be seen as defending child exploitation, even though they’re really just pointing out the truth.
Well, grok is capable of producing csam with a straightforward text prompt right? This would seem to me to be illegal on x’s part but I could be mistaken.
TL;DR at the bottom.
It’s a bit more complex than that, it’s not a straightforward text prompt as they did attempt to have filters to prevent stuff like this. However, this being a Musk company, those filters are shitty and people quickly found ways to bypass it, likely through a series of prompts or very highly tailored prompts.
But thats just the nature of AI. AI generators are never specifically trained using CSAM (at least I really fucking hope not). But neither are they specifically trained to generate giraffes made out of dumplings dancing on the concept of time. However, if you ask it to make the latter, it will dutifully spit out some slop that matches. The point is, AI image generators can make ANYTHING, or at least try to. That’s what they do. You can build filters and put in restrictions to try to prevent users from asking it to make certain things, or prevent those things from getting delivered, but the actual ability for the AI to make those things is still there. And due to the black box nature of machine learning, it can never actually be removed.
Now, there is a VERY big argument to be made against AI as a whole for that reason. If you spend a little while thinking about what it actually means to have something with the ability to create ANYTHING, or at least an approximation of it, you should be scared shitless. The only real safeguards are creating filters on either the input or the output side, but filters can be worked around. You could see it with early versions of things like ChatGPT, where you could create a carefully worded prompt to have it create a duplicate version of itself with the filters removed and return a secondary response from that duplicated instance, leading to it replying to normally off-limit topics (like building explosives or committing suicide) with a generic “I’m sorry Dave, I’m afraid I can’t do that.”, followed by another response that gives the full, unredacted answer. Because it always has the ability to create these things, it’s just company created filters which stop it from showing them.
Anyways, this comment has gotten away from me. The point is, it’s not really about Grok. It’s not really about CSAM. It’s about AI as a whole, but that’s too big and abstract of a concept for the masses to grasp. So instead we get articles and legislation specifically dealing with one particular issue from one particular program because that’s just the first thing people have become outraged at, without seeing the big picture.
TL;DR: No, it’s not as simple as a straightforward prompt, and it’s far from just Grok that is at issue.
I understand that it’s a general purpose machine for producing images given prompt/context. I don’t feel particularly outraged. I just know that, say, openAI has quite a lot of safeguards to prevent generating CSAM. Safeguards may not be perfect but… seems like grok doesn’t have good enough safeguards?
Capitalism: you’re saying a pedo’s money is no good here?
Removed by mod
As we now live in the era of actual fascists, it’s inadvisable to throw around the world as a term of disparagement. The Carney Liberals are conservative but they are not fascist.
Educate yourself fool. Canada voted in Mussolini.
“Fascism should more properly be called corporatism because it is the merger of state and corporate power.” - Benito Mussolini
OK edgelord.
Lol, give your balls a tug there buddy. Get off the internet and talk to some humans.
Removed by mod
Well if it rhymes it must be true! You’re gonna feel so embarrassed looking back at this phase when you’re all grown up.



