

I disagree with their conclusions about the ultimate utility of some of these things, mostly because I think they underestimate the impact of the problem. If you're looking at a ~0.5% chance of a bad outcome, we should be less worried about failing to filter out the evil than about straight-up errors making it not work. There's no accountability, and the whole pitch of automating away, say, radiologists is that you don't have a clinic full of radiologists who can catch those errors. Like, you can't even get a second opinion if the market is dominated by XrayGPT or whatever, because whoever you'd go to is also going to rely on XrayGPT. After a generation or so, where are you even going to find, much less afford, an actual human with the relevant skills? This is the pitch they're making to investors and the world they're trying to build.
Contra Blue Monday, I think we're more likely to see "AI" stick around specifically because of how useful Transformers are as a tool for other things. It might take a little while for the AI rebrand to fully lose the LLM stink, but both the sci-fi concept and some of the underlying tools (not GenAI, though) are too robust to actually go away.