To be honest, I think we’re losing credibility. I don’t know what else to put in the description.
It’s a tool. It has uses. But do I need it in every single thing? No. Especially when 90% of the time the AI features are half-baked and crammed down your throat. Like Windows and Copilot.
My thoughts are that the USA is in a far worse position now to shoulder and recover from the coming bubble pop, crash, and financial crisis that the mass implementation of AI is about to cause than it was in 2008-2009 when the last crash hit.
Could be wrong though. Maybe all the datacenters will get built on time, and be powered by a sudden breakthrough in nuclear fission, and maybe ~44% of the people on Earth will sign up for paid plans with OpenAI so that it can become profitable.
It’s the biggest bubble in history because corporate leadership consistently falls for the reification fallacy.
So-called “AI” (specifically, large language models) are massively multidimensional maps of human language use. They can be used to draw humanlike vectors through the phase space of all possible combinations of symbols, but they aren’t intelligent, because human intelligence doesn’t come from the use of language. Rather, language comes from intelligence.
AI don’t like it.
LLMs: flawed tools with potential but unfortunately vastly overhyped confidence in their abilities.
Audiovisual AI – deepfakes, AI generated art and music, AI facial and whole-body recognition, etc: no. Just no. Nothing good has, will, or can come of this, I’m quite certain.
I liked AI at the start, and as a concept. But now I hate it due to its implementation and usage. So AI shouldn’t be anywhere.
The only decent use of LLMs I have seen is a bot on Mastodon that generates alt-text automatically for posts of users who opt-in, and shows how much power was used to generate the text.
Other than that use case, I hate LLMs. They have never been useful for me, and they just seem like overkill for most things; a search engine would do fine, you just have to use your brain a little bit.
Image generation also sucks. Stock images exist and you can just draw, take a photo, hire someone, etc. AI makes people less willing to learn because they think that AI can just do it better, and it makes them miss out on the joy of art.
AI’s job is to lie about being human. I fundamentally disagree with that and think its existence is unacceptable. If AI-generated output was somehow always tagged as AI-generated, then maybe I would change my mind about this, but this is completely unenforceable because AI copies and mimics human output.
Stop it. Get some help.
It has its uses.
For example, I used it to find a movie based solely on a description of a scene.
It found a fitting one, but not from the year I remembered it from. After I specified that, I found it. Same for generating a script to tweak later on. Will it run efficiently? Nope. Will it run? Probably, and that’s good enough as a PoC.
It’s making hobbyist computing expensive, it’s potentially eliminating some of the few actually enjoyable jobs (art, creative works), it’s making websites and applications less secure with vibe coding, and it’s enabling ever more convincing propaganda and bad-faith actors to manipulate entire populations…
But hey, at least Elon Musk gets to make naked pictures of kids and still be a billionaire. So there’s that.
In most contexts it’s trying to solve problems that are better solved by other tools. Automation scripts are more consistent than AI, for example, and automation scripts are pretty easy to set up now.
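To make that concrete, here’s the kind of small, deterministic script I have in mind. It’s a toy sketch with made-up paths, not taken from any real setup: it files downloads into dated folders and does exactly the same thing every run.

```python
#!/usr/bin/env python3
"""Toy automation script: file downloads into dated folders.

The paths here are made up for illustration; the point is that the
behaviour is fixed and repeatable, with no model in the loop.
"""
from datetime import datetime
from pathlib import Path
import shutil

DOWNLOADS = Path.home() / "Downloads"   # hypothetical source folder
ARCHIVE = Path.home() / "sorted"        # hypothetical destination

for f in DOWNLOADS.iterdir():
    if f.is_file():
        stamp = datetime.fromtimestamp(f.stat().st_mtime).strftime("%Y-%m")
        dest = ARCHIVE / stamp
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(dest / f.name))  # same result every run
```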
In some contexts it’s trying to solve problems that don’t exist. AI-generated memes do nothing for me.
Other contexts just… Make me scratch my head and go why. Why do you need an AI summary of a book? Why are you trying to make a leisure activity more efficient? Same with writing fanfiction. I can at least understand why people want to pump out books to sell, but you literally cannot sell this. Writing fanfiction is a leisure activity, why are you trying to automate it?
Why is it baked into my search engine? It’s wrong on anything but the most common searches, and even then it’s not reliable enough to trust. My job recently baked an AI into the search, and most of the time it spits out absolute nonsense, if not flat-out telling us to break laws, and then citing sources that don’t even say what it’s saying.
Most of the marketing around it is stuff like
- “Generate a meme!” I have literally never once wanted to
- “Summarize a book!” I am doing this for fun, why would I want to?
- “Generate any image!” I get the desire, but I can’t ignore the broader context of how we treat artists. Also the images don’t look that great anyway.
- “Summarize your texts, and write responses automatically!” Why would anyone want to automate their interpersonal relationships?
- “Talk to this chatbot!” Why? I have friends, I don’t need to befriend a robot.
- “Write code without learning it!” I get it. I’ve struggled learning to program for 10 years. But every time I hear a programmer talk about AIGen code, it’s never good, and my job’s software has gotten less stable as AIGen code has been added in.
And I just. Don’t get it. Don’t get me wrong, I have tried. I’ve tried to get it to work for me. I’ve succeeded once, and that was just getting the jq command to work how I wanted it to. Tried a few more times, and it’s just… Not good? It really doesn’t help that every respected computer scientist is saying they likely can’t get much better than they are.
It’s an overhyped hammer that’s doing a bad job at putting soup in my mouth, and on the way it’s ruining a lot of lives, and costing a lot of money for diminishingly better results.
“Write code without learning it!” I get it. I’ve struggled learning to program for 10 years. But every time I hear a programmer talk about AIGen code, it’s never good, and my job’s software has gotten less stable as AIGen code has been added in.
I’m similarly dubious about using LLMs to do code. I’m certainly not opposed to automation — software development has seen massive amounts of automation over the decades. But software is not very tolerant of errors.
If you’re using an LLM to generate text for human consumption, then an error here or there often isn’t a huge deal. We get cued by text; “approximately right” is often pretty good for the way we process language. Same thing with images. It’s why, say, an oil painting works; it’s not a perfect depiction of the world, but it’s enough to cue our brain.
There are situations where “approximately right” might be more-reasonable in software development. There are some where it might even be pretty good — instead of manually-writing commit messages, which are for human consumption, maybe we could have LLMs describe what code changes do, and as LLMs get better, the descriptions improve too.
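As a rough sketch of what that workflow could look like, assuming an OpenAI-compatible chat endpoint running locally (the URL and model name below are placeholders, not any particular product):

```python
#!/usr/bin/env python3
"""Sketch: draft a commit message from the staged diff with a model.

Assumes an OpenAI-compatible chat endpoint; URL and model name are
placeholders. A human still reviews the suggestion before committing.
"""
import json
import subprocess
import urllib.request

diff = subprocess.run(["git", "diff", "--cached"],
                      capture_output=True, text=True).stdout

payload = {
    "model": "local-model",  # placeholder name
    "messages": [
        {"role": "system",
         "content": "Describe this diff as a one-line commit message."},
        {"role": "user", "content": diff[:20000]},  # crude cap on huge diffs
    ],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",  # assumed local endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```

The output is still just a suggestion for a human to edit, which fits the “approximately right is fine for human consumption” point.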
This doesn’t mean that I think that AI and writing code can’t work. I’m sure that it’s possible to build an AGI that does fantastic things. I’m just not very impressed by using a straight LLM, and I think that the limitations are pretty fundamental.
I’m not completely willing to say that it’s impossible. Maybe we could develop, oh, some kind of very-strongly-typed programming language aimed specifically at this job, where LLMs are a good heuristic to come up with solutions, and the typing system is aimed at checking that work. That might not be possible, but right now, we’re trying to work with programming languages designed for humans.
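The general shape of that “model proposes, a checker verifies” idea can be sketched even in plain Python, with a random spot-check standing in for what a real type system or verifier would guarantee; candidate_sort here is just a stand-in for model-generated code.

```python
"""Sketch of 'model proposes, checker verifies', in ordinary Python.

candidate_sort stands in for model-generated code; the loop below is the
part a stronger type system or verifier would do properly, not by sampling.
"""
import random

def candidate_sort(xs):
    # Pretend this implementation came from a model.
    return sorted(xs)

def matches_spec(xs, ys):
    # ys must be xs sorted: same elements, ascending order.
    return ys == sorted(xs)

for _ in range(1000):  # crude spot-check, not a proof
    xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert matches_spec(xs, candidate_sort(xs))
print("candidate passed 1000 random checks")
```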
Maybe LLMs will pave the way to getting systems in place that have computers do software engineering, and then later we can just slip in more-sophisticated AI.
But I don’t think that the current approach will wind up being the solution.
“Summarize a book!” I am doing this for fun, why would I want to?
Summarizing text — probably not primarily books — is one area that I think might be more useful. It is a task that many people do spend time doing. Maybe it’s combining multiple reports from subordinates, say, and then pushing a summary upwards.
“Generate any image!” I get the desire, but I can’t ignore the broader context of how we treat artists. Also the images don’t look that great anyway.
I think that in general, quality issues are not fundamental.
There are some things that we want to do that I don’t think that the current approaches will do well, like producing consistent representations of characters. There are people working on it. Will they work? Maybe. I think that for, say, editorial illustration for a magazine, it can be a pretty decent tool today.
I’ve also been fairly impressed with voice synth done via genAI, though it’s one area that I haven’t dug into deeply.
I think that there’s a solid use case for voice query and response on smartphones. On a desktop, I can generally sit down and browse webpages, even if an LLM might combine information more quickly than I can manually. Someone, say, driving a car or walking somewhere can ask a question and have an LLM spit out an answer.
I think that image tagging can be a pretty useful case. It doesn’t have to be perfect — just a lot cheaper and more universal than it would be to have humans doing it.
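Something in the spirit of this sketch is all I’m imagining; tag_image is a placeholder for whatever vision model you’d actually plug in, and the tags only need to be good enough to make a big photo collection searchable.

```python
"""Sketch: batch-tag photos and store tags next to each file.

tag_image() is a placeholder so the script runs without a model; in
practice it would call whatever vision model you're using.
"""
import json
from pathlib import Path

def tag_image(path: Path) -> list[str]:
    # Placeholder: "tags" the photo by its file extension only.
    return [path.suffix.lstrip(".").lower() or "unknown"]

for img in Path("photos").glob("*.jpg"):   # hypothetical folder
    tags = tag_image(img)
    img.with_name(img.stem + ".tags.json").write_text(json.dumps(tags))
```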
Some of what we’re doing now, both on the part of implementers and on the R&D people working on the core technologies, is understanding what the fundamental roadblocks are, and quantifying strengths and weaknesses. That’s part of the process for anything you do. I can see an argument that more-limited resources should be put on implementation, but a company is going to have to go out and try something and then say “okay, this is what does and doesn’t work for us” in order to know what to require in the next iteration. And that’s not new. Take, oh, the Macintosh. Apple tried to put out the Lisa. It wasn’t a market success. But taking what did work and correcting what didn’t was a lot of what led to the Macintosh, which was a much larger success and closer to what the market wanted. It’s going to be an iterative process.
I also think that some of that is laying the groundwork for more-sophisticated AI systems to be dropped in. Like, if you think of, say, an LLM now as a placeholder for a more-sophisticated system down the line, the interfaces are being built into other software to make use of more-sophisticated systems. You just change out the backend. So some of that is going to be positioning not just for the current crop, but tomorrow’s crop of systems.
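In code terms that’s nothing fancier than hiding the model behind an interface; the names in this sketch are made up, but the shape is the point: swap the backend later and the application code doesn’t change.

```python
"""Minimal sketch of the 'swap the backend later' idea (names made up)."""
from typing import Protocol

class Assistant(Protocol):
    def answer(self, question: str) -> str: ...

class CannedAssistant:
    """Today's stand-in; tomorrow this could wrap a much better system."""
    def answer(self, question: str) -> str:
        return f"Stub answer to: {question}"

def build_help_widget(backend: Assistant) -> None:
    # The application only sees the Assistant interface, so a more
    # capable backend can be dropped in without touching this code.
    print(backend.answer("How do I export my data?"))

build_help_widget(CannedAssistant())
```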
If you remember the Web around the late 1990s, the websites that companies did have were often pretty amateurish-looking. They were often not very useful. The teams that made them didn’t have a lot of resources. The tools to work with websites were still limited, and best practices hadn’t been developed yet.
https://www.webdesignmuseum.org/gallery/year-1997
But what they did was get a website up, start people using them, and start building the infrastructure for what, some years later, was a much-more-important part of the company’s interface and operations.
I think that that’s where we are now regarding use of AI. Some people are doing things that won’t wind up ultimately working (e.g. the way Web portals never really took over, for the Web). Some important things, like widespread encryption, weren’t yet deployed. The languages and toolkits for doing development didn’t really yet exist. Stuff like Web search, which today is a lot more approachable and something that we simply consider pretty fundamental to use of the Web, wasn’t all that great. If you looked at the Web in 1997, it had a lot of deficiencies compared to brick-and-mortar companies. But…that also wasn’t where things stayed.
Today, we’re making dramatic changes to how models work, like the rise of MoEs. I don’t think that there’s much of a consensus on what hardware we’ll wind up using. Training is computationally expensive. Just using models on a computer yourself still involves a fair amount of technical knowledge, much the way the MS-DOS era prevented a lot of people from doing much with personal computers. There are efficiency issues, and basic techniques for doing things like condensing knowledge are still being developed. LLMs people are building today have very little “mutable” memory — you’re taking a snapshot of information at training time and making something that can do very little learning at runtime. But if I had to make a guess, a lot of those things will be worked out.
I am pretty bullish on AI in the long term. I think that we’re going to figure out general intelligence, and make things that can increasingly do human-level things. I don’t think that that’s going to be a hundred years in the future. I think that it’ll be sooner.
But I don’t know whether any one company doing something today is going to be a massive success, especially in the next, say, five years. I don’t know whether we will fundamentally change some of the approaches we used. We worked on self-driving cars for a long time. I remember watching video of early self-driving cars in the mid-1980s. It’s 2026 now. That was a long time. I can get in a robotaxi and be taken down the freeway and around my metro area. It’s still not a complete drop-in replacement for human drivers. But we’re getting pretty close to being able to use the things in most of the same ways that we do human drivers. If you’d asked me in 2000 whether we would make self-driving cars, I would have said basically what I say about advanced AI today — I’m quite bullish on the long-term outcome, but I couldn’t tell you exactly when it’ll happen. And I think that that advanced AI will be extremely impactful.
Summarizing text — probably not primarily books — is one area that I think might be more useful. It is a task that many people do spend time doing. Maybe it’s combining multiple reports from subordinates, say, and then pushing a summary upwards.
The problem I have with summarizing text is that it does often miss key features. Without using books as an example, for my work we have a knowledge base that we reference for things. We work in all 50 states, and the laws vary, and the AI will very frequently quote the wrong state’s laws, or tell us to do something possible in one state, but not in others. Could this get better? Maybe, but I’m not super convinced.
The rest of the comment isn’t exactly disagreeable, I’m just also concerned about the social costs. Not just for things like lost jobs; those always happen when new things come in. It sucks, but we do move on, and entire professions have been forgotten because they were automated long ago. A lot of the opinions I have about AI are a bit reactionary, but at the same time headlines like “AI chatbot talks child into suicide, and it’s really easy to get it to do that” are. Y’know. Not a great thing to read, especially when the tech is steeped in controversy in all directions. Copyright (which isn’t an issue they’ll ever get past without massive changes, and scrapping entire models), bringing smaller sites down with extensive scraping, job loss, environmental concerns (however overblown they may or may not be), increasing utility bills for areas, leading to the RAM shortage… It’s a whole lot of bad stuff, all for something that, largely, people don’t want, and is being forced into every aspect of our daily lives.
All this for something that people largely don’t want. I don’t even remember this many people being this anti-internet/computers. At worst I remember articles talking about how it’d be a passing fad. Granted, I was a kid when the internet was really kicking off, but I was in an area where people were still mad about seatbelts, so I’d imagine at least a handful would’ve hated the internet too if it had been even half as unwanted as AI is anywhere outside of CEO offices.
I’m sure AI will find some use-cases, I just don’t think they’re going to be user-facing at all, mostly due to how much they cost vs how much people will be willing to pay.
Solving a problem we shouldn’t have with a tool that works only some of the time.
Doesn’t even solve a problem.
We’ve passed peak innovation in American tech, so we have to pretend that this is a product we want lest we realize that none of our shit has gotten any better for the last fifteen years.
we have to pretend that this is a product we want
ChatGPT alone has 800 million weekly users.
How many of them pay?
That’s irrelevant. Nobody pays for VLC player either.
Is ChatGPT an open-source project run by a single dude who doesn’t care if it makes money?
First you moved the goalposts by pivoting from “there’s no want for LLMs” to “okay but how many are paying.” You quietly shifted the entire criteria of “want” from voluntary demand to monetization the second evidence of massive adoption showed up.
When I pointed out that VLC has hundreds of millions of users who also don’t pay, you tossed in the irrelevant “it’s open source by one person” line - which is a complete non sequitur. Development model or monetization status has zero logical bearing on whether 800 million weekly ChatGPT users demonstrate real desire for LLMs.
This is classic bad-faith argumentation: throw in red herrings, change the standard whenever your position weakens, and misrepresent what was actually said to avoid engaging with the actual evidence.
AI is a tool, it can be used for good, and it can be used for bad. Right now, the business world is trying to find ways to make it work for the business world - I expect 95 percent of these efforts to die off.
My preference and interest is in the local models - smaller, more specialized models that can be run on normal computers. They can do a lot of what the big ones do, without being a cloud service that harvests data.
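For example, something like this talks to a model running entirely on your own machine. I’m assuming an Ollama server on its default port with a small model already pulled; the model name is a placeholder, and the endpoint shape is from Ollama’s docs as I remember them, so double-check it before relying on it.

```python
"""Sketch: query a locally hosted model instead of a cloud service."""
import json
import urllib.request

payload = {
    "model": "llama3.2",   # placeholder: any small model you've pulled locally
    "prompt": "In one sentence, what is a mixture-of-experts model?",
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",   # Ollama's default local endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])   # nothing leaves the machine
```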
GIGO in its purest form.
As much as people on the Fediverse or Reddit or whatever other social media bubble we might be in like to insist “nobody wants this” or that AI is useless, it actually is useful and a lot of people do want it. I’m already starting to see the hard-line AI hate softening, more people are going “well maybe this application of AI is okay.” This will increase as AI becomes more useful and ubiquitous.
There’s likely a lot of AI companies and products starting up right now that aren’t going to make it. That’s normal when there’s a brand new technology, nobody knows what the “winning” applications are going to be yet so they’re throwing investment at everything to see what sticks. Some stuff will indeed stick, AI isn’t going to go away. Like how the Internet stuck around after the Dot Com bust cleared out the chaff. But I’d be rather careful about what I invest in myself.
I’m not a fan of big centralized services and subscriptions, which unfortunately a lot of the American AI companies are driving for. But fortunately an unlikely champion of AI freedom has arisen in the form of… China? Of all places. They’ve been putting out a lot of really great open-weight models, focusing hard on getting them to train and run well on more modest hardware, and releasing the research behind it all as well. Partly that’s because they’re a lot more compute-starved than Western companies and have no choice but to do it that way, but partly just to stick their thumb in those companies’ eyes and prevent them from establishing dominance. I know it’s self-interest, of course. Everything is self-interest. But I’ll take it because it’s good for my interests too.
As for how far the technology improves? Hard to say. But I’ve been paying attention to the cutting edge models coming out, and general adoption is still way behind what those things are capable of. So even if models abruptly stopped improving tomorrow there’s still years of new developments that’ll roll out just from making full use of what we’ve got now. Interesting times ahead.