I have noticed a lot of posts on here mostly criticizing the data collection used to train A.I, but I don’t think A.I in and of itself is bad, because A.I, like software development, can be implemented in many ways: software can either control the user, or the user can control the software. And, also like software development, some of it might be for negative purposes while other parts may be for better purposes, so saying “Fuck Software” just because some software controls the user feels pretty unfair. I know A.I might be used to replace jobs, but that has happened many times before, and it was mostly a positive move forward, like with the internet. Now, I’m not trying to start a big ass debate on how A.I = Good, because as mentioned before, I believe that A.I is only as good as its uses. All I want to know from this post is why you hate A.I as a general topic. I’m currently writing a research paper on this topic, so I would like some opinions.
AI under capitalism is dangerous, much more under fascism. In a solarpunk future it would be fine, just another cool piece of tech.
I hate all of this Generative AI trash.
AI has been a concept in one way or another for a long time. The idea of AI is fine; this current state of slop machines and chat bots being pushed can suck my ass.
I like using ChatGPT for NPC dialogue in my DnD game. It can really fill out a character. Otherwise I don’t use it.
Yes, I hate what’s commonly being termed in the current era as AI. It’s mediocrity and lies as a service.
First of all, that which is to get fucked is Generative AI in particular. Meaning, LLM text generation / diffusion model image generation, etc. AI which consciously thinks is still sci-fi and may always be. Older ML stuff also called “AI” that finds patterns in large amounts of satellite data or lets a robot figure out how to walk on bumpy ground or whatever is generally fine.
But generative AI is just bad and cannot be made good, for so many reasons. The “hallucination” is not a bug that will be fixed; it’s a fundamental flaw in how it works.
It’s not the worst thing, though. The worst thing is that, whether it’s making images or text, it’s just going to make the most expected thing for any given prompt. Not the same thing every time- but the variation is all going to be random variations of combining the same elements, and the more you make for a single prompt, the more you will see how interchangeably samey the results all are. It’s not the kind of variation you see by giving a class of art students the same assignment, it’s the variation you get by giving Minecraft a different world seed.
So all the samey and expected stuff in the training data (which is all of the writing and art in human history that its creators could get their hands on) gets reinforced and amplified, and all the unique and quirky and surprising stuff gets ironed out and vanishes. That’s how it reinforces biases and stereotypes- not just because it is trained on the internet, but again it’s because of a fundamental flaw in how it works. Even if it was perfected, using the same technology, it would still have this problem.
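The “different world seed, same world generator” point can be caricatured in a few lines of Python. Everything here is invented for illustration — a toy that recombines a fixed pool of stock elements, not a claim about how any real model is implemented:

```python
import random

# Toy "prompt completer": every output is assembled from the same small
# pool of stock elements, so different seeds only produce surface-level
# variation, never anything outside the pool. All names and word pools
# here are invented; this illustrates the argument, not a real model.
ADJECTIVES = ["mysterious", "ancient", "glowing", "forgotten"]
SUBJECTS = ["castle", "forest", "library", "city"]
MOODS = ["at sunset", "in the mist", "under a full moon"]

def complete(prompt: str, seed: int) -> str:
    rng = random.Random(seed)
    return f"{prompt}: a {rng.choice(ADJECTIVES)} {rng.choice(SUBJECTS)} {rng.choice(MOODS)}"

for seed in range(3):
    print(complete("fantasy scene", seed))
```

Run it with a hundred different seeds and you get a hundred “different” scenes — but nothing that wasn’t already in the pools, which is the interchangeable sameness being described above.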
When most people talk about “hating AI” they’re talking about the AI that is this wave before the next winter: (de)generative AI (whether based on LLMs or diffusion or whatever other tripe drives things like GPT, DALL-E, Jukebox, etc.).
And yes, I hate AI in that sense, in that it is a dead end that is currently burning up the planet to produce subpar everything (words, images, music) while threatening the very foundation of cultural knowledge with obliteration.
AI in a broader sense, I don’t hate. Even the earlier over-hyped-before-wintered AI technologies have found niche applications where they’re useful, and once the grifters leave the (de)generative AI field we may find some use cases for AI there as well. (I think LLMs have a future, for example, in the field of translation: I’ve been experimenting with that domain and once the techbrodude know-it-all personality is excised from the LLMs and the phrase “I don’t know” is actually incorporated properly I think it could be very valuable there. You still have to look out for hallucinations, though.)
But (de)generative AI in general is overhyped shit. And it’s overhyped shit that cannot be meaningfully improved (indeed latter-day models turn out to be worse than earlier ones: ChatGPT4’s suite is more prone to hallucination, for example, than ChatGPT3.5). So a whole lot of people are getting pressured, a whole lot of lives are being ruined, a whole lot of misinformation and active disinformation is being spewed by them … but hey, at least we can have shit writing, shit art, and shit music!
I know A.I might be used for replacing jobs, but that has happened many times before, and it is mostly a positive move forward like with the internet.
This is an excuse used many times but it doesn’t stand up to inspection. Let’s go with robots making cars. When the auto industry had massive layoffs in the '80s, the median age of factory workers assembling cars was in the early 30s. What proportion of people in their 30s make any kind of transition to stable, well-paid careers when they’re rendered redundant? (Hint: not very many.) An entire generation of the Rust Belt was, in effect, shoved into poverty by automation — poverty THAT WE STILL SEE TO THIS DAY. And that’s one sector. Automation shit-canned a whole lot of sectors and the reverberations of that have echoed throughout my entire life. (Born in the '60s.)
The only “positive move forward” seen by these traumatically devastating technologies released willy-nilly into society with no mitigation plan is that rich fuckers get richer. Because, you know, Sam Altman needs more cash and not a punch to his oh-so-punchable face.
If we take the forum title here, the “fuck” is directed at the people in charge of so-called “AI” companies. The technology has value. It’s just being force-fed down our throats in ways that remind us of blockchain — and whatever happened to blockchain?!
The tech with the most push behind it is being pushed in infancy, and is damn near useless without datasets of entirely stolen content.
There’s genuinely useful things and impressive tech under the machine learning umbrella. This “AI” boom is just hard pushing garbage.
This past week we saw the most obvious example yet of why they’re pushing LLMs so hard, too: Grok’s unprompted white supremacist ramblings over on Twitter. These tools can easily be injected with biases like that (and much more subtly too) to turn them into a giant propaganda machine.
Some of the tech is genuinely useful and impressive, but the stuff getting the biggest pushes is nothing but garbage, and the companies behind it all are vile.
These tools can easily be injected with biases like [Grok’s unprompted white supremacist ramblings] (and much more subtly too) to turn them into a giant propaganda machine.
It’s fortunate that Kaptain Ketamine had his little binge of his favourite drug and made it SO OBVIOUS. There’s subtle biases all over degenerative AI. Like there was a phase when trying out the “art” creators where I couldn’t get any of them to portray someone writing with their left hand. (I don’t know if they still have a problem with that; I got bored with AI “art” once I saw its limitations.) And if the word “thug” was in the prompt it was about 80% chance of being a black guy. Or if the word “professional” was in the prompt it was about 80% chance of being a white guy. EXCEPT if “marketing” was added (as in “marketing professional”). Then for some reason it was almost always an Asian woman.
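The mechanism behind those “80% chance” anecdotes is mundane: a generator that just reproduces the frequencies in its training data reproduces the skew in that data too. A toy sketch, with completely invented counts standing in for whatever the real training sets contain:

```python
import random
from collections import Counter

# Toy demonstration of how skew in training data becomes skew in output.
# The counts below are invented for illustration; the point is that a
# sampler which mirrors its training frequencies never questions them.
training_counts = {"professional": {"white man": 80, "anyone else": 20}}

def generate(prompt: str, rng: random.Random) -> str:
    options = training_counts[prompt]
    people = list(options)
    weights = [options[p] for p in people]
    # Sample in proportion to the training frequencies -- nothing more.
    return rng.choices(people, weights=weights)[0]

rng = random.Random(0)
sample = Counter(generate("professional", rng) for _ in range(1000))
print(sample)  # roughly mirrors the 80/20 skew in the invented counts
```

No malice required in the sampling step itself — the bias is baked in upstream, which is why “just trained on the internet” and “fundamental flaw” aren’t competing explanations.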
Or we can look at Perplexity, supposedly driven by not only its model, but incorporation of search results into the prompt. Ask it a question about any big techbrodude AI and its first responses will be positive and singing the praises of the AI renaissance. If you push (not even very hard) you can start getting it to confess to the flaws of LLMs, diffusion models, etc. and to the flaws of the corporate manoeuvring around pushing AI into everything, but the FIRST response (and the one people most likely stop reading after) is always pushing the glory of the AI revolution.
(Kind of like Chinese propaganda, really. You can get Party officials to admit to errors of judgment and outright vile acts of the past in conversation, but their first answer is always the glory of the Party!)
Oh, and then let’s look at what’s on the Internet where most of the data gets sucked up from. There’s probably about three orders of magnitude more text about Sonic the Hedgehog in your average LLM’s model than there is about, oh, I don’t know, off the top of my head, Daoism, literally the most influential philosophical school of the world’s most populous country! Hell, there’s probably more information about Mario and Luigi from Nintendo than there is about the Bible, arguably the most widespread and influential book around the world!
I wonder how that skews the bias…?
Well, scammers destroyed its reputation and governments refused to use the tech because it would expose corruption.
Make no mistake: when the next reshuffle happens, it will be the bedrock of all of our systems, esp. government and finance.
People in power are just not interested in that kind of transparency currently.
I don’t hate AI as much as I hate the nonexistent ethics surrounding LLMs and generative AI tools right now (which is what a lot of people refer to as “AI” at present).
I have friends that openly admit they’d rather use AI to generate “art” and then call people who are upset by this luddites: whiny and butt-hurt that AI “does it better” and is more affordable. People use LLMs to formulate opinions and as their therapist, but when they encounter real-life conversations that have ups and downs they don’t know what to do, because they’re so used to the ultra-positive formulated responses from ChatGPT. People use AI to generate work that isn’t their own. I’ve already had someone take my own, genuine written work, copy/paste it into Claude, and then tell me they were just “making it more professional for me”. In front of me, on a screen share. The output didn’t even make structural sense and contained conflicting information. It was a slap in the face, and now I don’t want to work with startups because apparently a lot of them are doing this to contractors.
All of these are examples that many people besides me experience. They’re all examples of the same thing: “AI” as we are calling it is causing disruptions to the human experience because there’s nothing to regulate it. Companies are literally pirating your human experience to feed into LLMs and generative tools, then turning around and advertising the results as some revolutionary thing that will be your best friend, doctor, educator, personal artist and more. Going further, another person mentioned this, but it’s even weaponized: that same technology is being used to manipulate you, surveil you, and separate you from others to keep you in compliance with whatever government you live under, whether it be for good or bad. Not to mention the ecological impact this has (all so someone can ask Gemini to generate a thank-you note). Give the users & the environment more protections and give actual tangible consequences to these companies, and maybe I’ll be more receptive to “AI”.
I have friends that openly admit they’d rather use AI to generate “art” and then call people who are upset by this luddites, whiny and butt-hurt that AI “does it better”
Anybody who thinks AI does art “better” is someone whose opinions in all matters, big or small, can be safely dismissed.
I don’t hate AI, I hate the system that’s using AI for purely profit-driven, capitalism-founded purposes. I hate the marketers, the CEOs, the bought lawmakers, the people with only a shallow understanding of the implications of this whole system and its interactions who become a part of it and defend it. You see the pattern here? We can take AI out of the equation and the problematic system remains. AI should’ve been either the beginning of the end for humanity in a Terminator sort of way, or the beginning of a new era of enlightenment and technological advancement for humanity. Instead we got fast-tracked late-stage capitalism doubling down on dooming us all for text that we don’t have to think about writing, while burning entire ecosystems to achieve it.
I use AI on a near daily basis and find it useful; it’s helped me solve a lot of issues and it’s a splendid rubber ducky for bouncing ideas. I know people will disagree with me here, but there are clear steps towards AGI here which cannot be ignored: we absolutely have systems in our brains which operate in a very similar fashion to LLMs, we just have more systems doing other shit too. Does anyone here actually think about every single word that comes out of their mouth? Has nobody ever experienced a moment where you clearly said something that you immediately had to backtrack on because you were lying for some inexplicable reason, or maybe you skipped too many words, slurred your speech or simply didn’t arrive anywhere with the words you were saying? Dismissing LLMs as advanced autocomplete ignores the fact that we’re doing exactly the same shit ourselves, with some more systems in place to guide our yapping.
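For what it’s worth, the “advanced autocomplete” framing both sides argue over is easy to demo at its crudest. Here’s a bigram model — predict each next word from counts of what followed it in the training text. The corpus and function names are invented; a real LLM is this idea scaled up by many orders of magnitude with learned weights instead of raw counts:

```python
import random
from collections import defaultdict

# Minimal bigram "autocomplete": record which word followed which in a
# tiny training text, then generate by repeatedly sampling a recorded
# follower. A crude stand-in for next-token prediction, nothing more.
corpus = "the cat sat on the mat and the cat ran".split()
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def autocomplete(word: str, n: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = [word]
    for _ in range(n):
        nxt = following.get(out[-1])
        if not nxt:  # no recorded follower: stop generating
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(autocomplete("the", 5))
```

Whether scaling this idea up counts as “steps towards AGI” or “still just autocomplete” is exactly the disagreement in this thread.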
No. Copyright should be consistently enforced, pollution should be taxed, privacy should be protected, sites shouldn’t be DoSed, but other than that I think it’s kinda nifty on its own.
I do not hate AI, because it doesn’t exist. I’m not delusional.
I do resent the bullshit generators that the tech giants are promoting as AI to individual and institutional users, and the ways they have been trained without consent on regular folks’ status updates, as well as the works of authors, academics, programmers, poets, and artists.
I resent the amount of work, energy, environmental damage, and yes, promotional effort that has gone into creating an artificial desire for a product that a) nobody asked for, and b) still doesn’t do what it is claimed to do.
And I resent that both institutions and individuals are blindly embracing a technology that, at every step from its creation to its implementations, denigrates the human work — creative, scholarly, administrative and social — that it intends to supplant.
But Artificial Intelligence? No such thing. I’ll form an opinion if I ever see it.
While I haven’t thought about that before, now that I have, I totally agree. Ty for sharing your pov :)
While I completely agree with most of this, it’s my understanding that what we have is a type of AI, as is AGI. LLMs are classified as Narrow AI.
What he means is that he doesn’t hate A.I because it simply doesn’t exist. There is no intelligence in any of the so-called “A.I”, since all it’s doing is a combination of stolen training data + randomness.
Yeah, I can understand the sentiment. I was just clarifying that true intelligence (AGI) is a subset of what we refer to as AI, alongside other subsets such as Narrow AI/LLMs. I agree it’s an odd usage of the term, but I can’t find a source saying otherwise.
If you’re talking about reading machine-generated text, then I’m too fucking old to swallow corporate propaganda. What’s the difference between AI and TV? You can’t turn off AI without turning off TV these days.
That’s sad. I didn’t hate AI (or LLMs, whatever) at first, but after becoming a teacher I REALLY FUCKING HATE AI.
99% of my students use AI to cheat on any work I give them. They’ll literally paste my assignment into ChatGPT and paste ChatGPT’s response back to me. Yes, I’ve had to change how I calculate grades.
The other super annoying part of AI is that I often have to un-teach the slop that comes from AI. Too often it’s wrong, I have to unteach the wrong parts, and try to get students to remember the right way. OR, if it’s not technically wrong, it’s often wildly over-complicated and convoluted, and again I have to fight the AI to get students to remember the simple, plain way.
The other thing I’ve heard from peers is that parents are also using ChatGPT to try to get things from schools. For example, some student was caught cheating and got in trouble, but the parent was trying to use some lawyer-sounding ChatGPT argument to get the kid out of trouble. (They’d met the parent before and the email seemed wildly out of character.) Or in another instance, a parent sent another lawyer-sounding ChatGPT email to the school asking for unreasonable accommodations, demanding software that doesn’t even make sense for the university major.
They’ll literally paste my assignment into ChatGPT and paste ChatGPT’s response back to me.
I solved a similar problem when teaching EFL (students just pasting assignments written in Chinese into a translator) by making them read select paragraphs out loud to me. You can rapidly spot the people who have no idea what the words they’re reading mean (or in my case are even pronounced on top of that!) and …
Well, cheating gets you 0.
My kids’ teacher had a great teaching moment. He had the kids write an outline, use ChatGPT to write an essay from their outline, then he graded them on their corrections to the generated text.
We used to be too scared to tell our parents if we got in trouble, we’d always get in so much shit for it. (And this was late 90s/early 00s, it’s not like we were getting beatings).
What’s up with parents trying to get their kids out of trouble instead of going ham on them?
Oh yeah i hate “AI”
It’s a lie
It’s not intelligent, it’s not even new tech
LLMs were improved via semantic analysis instead of lexical, and then they could mimic/break down human speech well enough to translate and communicate.
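The lexical-vs-semantic distinction above can be caricatured in a few lines. The “concept buckets” here are hand-invented stand-ins for the learned vector embeddings real systems use — a sketch of the idea, not of any actual implementation:

```python
# Toy contrast between lexical matching (shared surface words) and a
# crude "semantic" matching via hand-made concept buckets. Real systems
# use learned embeddings and similarity scores; the dict below is an
# invented stand-in to show why surface-word overlap isn't enough.
CONCEPTS = {
    "car": "vehicle", "automobile": "vehicle", "truck": "vehicle",
    "happy": "emotion", "glad": "emotion", "joyful": "emotion",
}

def lexical_match(a: str, b: str) -> bool:
    # Match only if the two sentences share a literal word.
    return bool(set(a.split()) & set(b.split()))

def semantic_match(a: str, b: str) -> bool:
    # Match if the sentences share an underlying concept bucket.
    ca = {CONCEPTS.get(w) for w in a.split()} - {None}
    cb = {CONCEPTS.get(w) for w in b.split()} - {None}
    return bool(ca & cb)

print(lexical_match("my car broke", "the automobile stalled"))   # False: no shared words
print(semantic_match("my car broke", "the automobile stalled"))  # True: shared "vehicle" concept
```

That gap — no shared words, but shared meaning — is roughly what moving from lexical to semantic analysis buys you for translation and communication.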
Then marketing teams looked at this autocomplete bot that can mimic a person’s speech and said “Yep, we can trick people into thinking this is Artificial Intelligence”
And this is what our Capitalist overlords are now doubling down on, instead of us having a world where we actually understand what AI is and want real technological advancements to happen that lead to intelligent digital life.
Also it’s incredibly energy inefficient, each datacenter for these things is already sucking up a city’s worth of power just for us to use it to steal other people’s work and ideas.
I hate the way people talk about AI. They’re going crazy at work pushing people to use it for everything, while you have people doing less, and doing it more poorly, claiming they save 50%. Everyone is more efficient and has greater efficacy while nothing gets done.
AI is a great tool for programming - the next step up from autocomplete, search, and templates. It can be useful to speed up some coding tasks but it’s rarely a final result. So I’m the Luddite when we have some non-technical manager trying to guide a bunch of us on Zoom through a coding task with AI. Not only will she not listen, but the only result so far has been clarifying that she needs to define requirements.
And I almost missed lunch today spending almost two hours on a code review from someone who clearly had ai generate it and never tried to build, never wrote tests much less ran them, never actually put any thought into cleaning up the generated stuff to actually make it work. He had a bit of what I swear is Perl in his Java “code”.
I had to feed a write-up through AI to translate it into something that “saved me time” by looking like AI wrote it.