If today were ten years ago, this article would be excellent science fiction. It’s long, and written by someone I’d like to punch in the head, but it’s gotta be read; I couldn’t stop.

If anyone wants to debunk it and tell me it’s all wrong, please do; I’d sure appreciate it, because it reads like the end of everything.

This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.

… Imagine it’s 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?

  • DABDA@lemmy.dbzer0.com · 5 points · 16 hours ago

    Since I’d already started typing my hot-take reaction I’ll just post it as-is, but seeing as Ed Zitron has addressed this, I’d definitely read his take, as it’s surely better sourced and professionally written :)


    I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just… appears. Not a rough draft I need to fix. The finished thing. I tell the AI what I want, walk away from my computer for four hours, and come back to find the work done. Done well, done better than I would have done it myself, with no corrections needed. A couple of months ago, I was going back and forth with the AI, guiding it, making edits. Now I just describe the outcome and leave.

    Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.

    I’m not exaggerating. That is what my Monday looked like this week.
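
    Mechanically, what the author describes is a generate-run-inspect loop: the model writes code, a harness runs the tests, failures get fed back in, and the cycle repeats until the checks pass. Here is a minimal sketch of that pattern in Python, for orientation only; the `generate` callable and the pytest command are stand-ins, since the author never names his actual tooling.

        import subprocess
        from typing import Callable

        def agent_loop(spec: str, generate: Callable[[str], str],
                       max_rounds: int = 5) -> bool:
            """Generate code from a spec, test it, feed failures back, repeat."""
            feedback = ""
            for _ in range(max_rounds):
                # Ask the model for a fresh attempt, including the last failure output.
                code = generate(f"Spec:\n{spec}\n\nLast test output:\n{feedback}")
                with open("app.py", "w") as f:
                    f.write(code)
                # Run the test suite; a zero return code means every check passed.
                result = subprocess.run(["pytest", "--quiet"],
                                        capture_output=True, text=True)
                if result.returncode == 0:
                    return True  # the agent's "it's ready for you to test"
                feedback = result.stdout + result.stderr
            return False

    The products the author uses presumably layer browser control and visual checks on top, but the skeleton is this same loop.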

    I made it to about this point and then decided I couldn’t suspend disbelief enough to continue. The way this author describes it, we’re now at the point where you can just prompt, “I want a native Linux version of Adobe Photoshop/Solidworks/whatever-the-fuck-MS-Office-is-now-called,” and it should be able to make that happen.

    Now that I think more about it, why should anybody even care whether AI can write code? You wouldn’t tell it to make accounting software for your business; you’d tell it to do your business accounting and not care how it got done.

    In 2022, AI couldn’t do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.

    By 2023, it could pass the bar exam.

    By 2024, it could write working software and explain graduate-level science.

    By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

    1. Prior to 2022 our calculators could reliably do arithmetic, but they would also display an error when asked to perform nonsensical operations. Are the amazing models the author describes now consistently able to resist efforts to force a response where a correct one is impossible? I suspect that with the right prodding they’ll still declare a winner in the unstoppable force vs. immovable object matchup, etc.

    2. So why shouldn’t I assume that a model was trained on the bar exam and then asked to complete a test built from the same material? I wouldn’t be impressed if you told me a child passed the bar exam by using the answer key, either.

    3. I read “working software” as “capable of ‘Hello World!’”, and “explain graduate-level science” as “can quote relevant blocks of text sourced from wikis and scholarly docs.” Could it plausibly explain anything scientifically novel, or only repeat things that are already published and understood?

    4. If the best engineers in the world have already handed over most of their coding work, then why are any engineers still employed anywhere? And how is just replacing customer service reps with AI agents working out so far?

  • Laser@feddit.org · 5 up / 1 down · 17 hours ago

    First, I don’t like the implication there that AI is “thinking.” What these models do doesn’t equate to thinking.

    Second, while the models might be getting better, they’re not as good as the author claims, and his evidence is anecdotal at best. I can give anecdotal evidence as well: we just let go of a trial hire because their code quality was so bad (it turned out they were using AI). Another of my coworkers sometimes uses AI to write Nix for our work projects, and it’s always needlessly verbose; sometimes it even produces useless code for things already implemented better in a module we use. And when asked whether my modules are correct, the AI says they’re not, when they actually are (they’re just not structured like most modules you’ll find). In most cases, using AI has increased the workload for others. And these are examples from 2026, mind you.

    I already mentally skip AI summaries because chances are they’re wrong. Obviously anything can be wrong, but with AI it’s been painfully ridiculous. An example from my life: I used Google to search for “is train X on time,” and one of the first results was a third-party site that tracks all delays for that line and correctly showed it was delayed by an hour. Meanwhile, the AI summary assured me it was on time. Now you might say “of course, it can’t reflect such recent events” (which wouldn’t be very agentic, but whatever), but then why present me with factually wrong information in the first place?

    We’ll see how all this turns out.

    • Doug Holland@lemmy.world (OP, mod) · 1 point · 17 hours ago

      Whoops, my original comments above were written for my blog, where everyone knows I’m anti-AI. So let me state clearly: I don’t buy into the author’s subtext. It’s definitely more gee-whiz, “the AI is thinking” stuff than I’d endorse. To me the article is a report from behind enemy lines, telling me that even the enemy is starting to worry.