I work in a software house where everyone uses AI. Some of them can’t even write a single line of code, let alone analyze it. I was shocked when I saw that their workflow is CTRL-A, CTRL-C, CTRL-V into Claude, and then CTRL-V into the IDE without a single neuron being activated. They even ask it to summarize and generate a response for 3 lines of text in a Slack message! (Partially because they don’t know what they’re doing, and partially because they’re too lazy to think.)

Well, everyone talks enthusiastically about AI, and some have unrealistic expectations (thinking that it’s actually intelligent, when it is not), but what bothers me is that they’re indeed faster than me, so sometimes I think “why am I even resisting?”. Well, the answer is that I love keeping my brain active and staying in control of what I’m doing. Does anyone else feel kinda similar? Am I in the wrong?

P.S. I also just want to point out that I’ve seen with my own eyes the deterioration of cognitive function in people who heavily rely on AI. I’m not talking about just “forgetting how to code”: I see them losing spatial awareness (invading personal space, sitting like a liquid on the chair), self-awareness (loudly burping, hoarding half-drunk bottles of water on the desk), and focus, and they’re easily irritable. Multiple people are behaving like this, and they weren’t like this before. AI is a drug.

  • whotookkarl@lemmy.dbzer0.com
    link
    fedilink
    arrow-up
    5
    ·
    16 hours ago

    It’s important to know what your boundaries are and why they are there before someone else chooses them for you. Anthropomorphizing programs, especially LLMs that appear more human, is dangerous and has plagued AI research and tech since its beginning. An LLM doesn’t make decisions or think; it predicts token sequences. It’s a guessing machine. It can’t be responsible, legally or morally, for what it produces. People have already tricked themselves into giving agents full access, and those agents have destroyed data, systems, and lives, because people misunderstand what these tools are and how they work.

    And that’s not including the ethical implications: more data centers hurting communities and nature, RAM and disk shortages, more economic and political instability driven by the wealth inequality these systems are monetized under, etc. When it’s your job on the line and you have people depending on you, it’s hard to hold fast; and if you can’t find tech jobs without being forced to use unethical tech, should you, or even can you, change careers?

  • IndigoGolem@lemmy.world
    link
    fedilink
    arrow-up
    5
    ·
    17 hours ago

    At least your workplace isn’t yet forcing you to use AI instead of your own brain. Some people are at the point of having to fake AI contributions, like Dan Q here.

  • jimothysupreme@lemmy.org
    link
    fedilink
    arrow-up
    2
    ·
    19 hours ago

    I still feel like AI is a good tool for brainstorming (so long as you keep a critical eye on it) and the occasional menial task (e.g., a spreadsheet where you already have the data and just have to input it with the right formulas), but that’s pretty much it. People are abusing it like crazy.

  • Pommes_für_dein_Balg@feddit.org
    link
    fedilink
    arrow-up
    16
    ·
    edit-2
    2 days ago

    If that is accepted work culture, I’d update my resume. One day everything will break and no one will know how to fix it.
    Then they’ll turn to you and expect you to debug the entire slopbase under extreme time pressure, and if you can’t do it, you’ll be blamed for the outage.

    • BradleyUffner@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      2 days ago

      If that is accepted work culture, I’d update my resume.

      I’ve got bad news for you about the current job market.

  • mimic_dev@lemmy.world
    link
    fedilink
    arrow-up
    11
    ·
    2 days ago

    I recently got a programming job, and everyone I work with hates having to use AI, but it keeps getting forced on us. We’re supposed to allot 2-4 hours a week of AI use at a minimum. I got this job because I love programming and solving problems. What’s the point if I’m expected to use a stupid robot?

  • WhyIHateTheInternet@lemmy.world
    link
    fedilink
    arrow-up
    65
    ·
    2 days ago

    Drugs are awesome. Until they’re not. I feel bad for these people, honestly. I don’t code or anything, but my sister-in-law is one of these AI addicts, and she’s not even working anywhere. It’s insane how quickly she became completely worthless. She used to be creative and artistic; now she’s a braindead lump who thinks she’s way smarter than everyone else because she can prompt.

    • NateNate60@lemmy.world
      link
      fedilink
      arrow-up
      29
      ·
      2 days ago

      The first day you use an LLM for your work will be the single most intelligent, productive day in your entire working career. That’s it. That will be the peak.

      Your productivity and intelligence will only decline from there. Until it reaches functionally zero.

    • bridgeenjoyer@sh.itjust.works
      link
      fedilink
      arrow-up
      6
      ·
      2 days ago

      That’s the thing. Do they think they’re smart for typing a prompt? It takes less than zero skill.

      I have a friend like this who is clueless with tech but thinks he’s advanced because he asked Grok something. Baffling to me.

      I guess the people drinking water from lead pipes thought they were smart too… but they really didn’t know. If you aren’t seeing the red flags about LLMs, you’re dumber than the people with lead pipes.

  • StopTech@lemmy.today
    link
    fedilink
    arrow-up
    10
    ·
    2 days ago

    That’s one reason. Another reason is that your code will be better and you will understand it. Yet another reason is you’ll have more privacy. And probably the most important reason is that you’re resisting AI development that threatens human existence.

  • theparadox@lemmy.world
    link
    fedilink
    English
    arrow-up
    29
    ·
    2 days ago

    You are not alone. I feel like the world is quickly giving in to LLMs and I’m one of the very rare holdouts. My nephews, my coworkers, my bosses… all of them use a mix of ChatGPT, Gemini, and/or Claude regularly. Hell, even my therapist tells me his wife uses ChatGPT for everything. I remember being worried when kids would immediately answer questions with some obnoxious response akin to “just Google it”. I wondered if abandoning the need to remember anything would impact development. Now they instantly go to a chatbot.

    I’ve tried it a few times with difficult problems and always found hallucinations. If I’m looking for something that doesn’t exist, the LLM has always made up a convincing answer. It’s frightening how so many people trust it blindly.

    I work in US Public Education and the adoption of AI in this space scares the living fuck out of me. I understand the argument: “Kids are using it, or are going to use it. We need to get out in front of it.” That’s fine. Find a provider that “promises” not to use student data. Protect PII. Great.

    But some district admins are enthusiastically using it. I literally mentioned in conversation that I needed to check when something was due for the state, and they immediately asked Gemini and assumed the answer was correct. Another one 100% uses it for letters summarizing student performance and freaks out when ChatGPT is down. I can only imagine how horrific it must be in the private sector, where the goal is efficiency and profit over everything.

    This shit needs to pop, and fast.

  • Hiro8811@lemmy.world
    link
    fedilink
    arrow-up
    5
    ·
    2 days ago

    I think this is caused by short-form video raising the dopamine baseline so high that solving a problem gives them no joy anymore, so they just use AI to do the minimum required.

    Also, electronic cigarettes don’t really have anything to do with this, but since they don’t produce as much smoke, idiots are using them everywhere: buses, offices, waiting rooms. They’re worse than regular smokers.

  • Maki@lemmy.blahaj.zone
    link
    fedilink
    arrow-up
    35
    ·
    2 days ago

    Yes, they might be faster. They’re also sloppier. I’m just waiting for a multinational to get hit with the biggest crash they don’t know how to recover from, caused by AI slop. The bubble needs to pop, and it will be messy and painful.

    • Hiro8811@lemmy.world
      link
      fedilink
      arrow-up
      3
      ·
      2 days ago

      With the “security” that AI code provides, I’m wondering how many services have already been hacked. Have I Been Pwned will have a lot of work soon.

  • Jared White ✌️ [HWC]@humansare.social
    link
    fedilink
    English
    arrow-up
    35
    ·
    2 days ago

    The important thing to remember is that you’re not alone in feeling this way, and you’re not imagining things. Cognitive decline is now measurable in multiple studies and will affect different people to a small or large degree. In the worst cases, antisocial behavior creeps in and can lead to dangerous psychosis.

    You are not in the wrong! To the extent you can maintain your sanity and your skillset, you will become the most valuable member of your team (even if they’re too “stoned” to recognize it at present).

    • bridgeenjoyer@sh.itjust.works
      link
      fedilink
      arrow-up
      3
      ·
      2 days ago

      I just wish I had one IRL person capable of critical thought. They’re all addicted to slop and won’t listen if you explain anything about it. It’s saddening.

  • Artwork@lemmy.world
    link
    fedilink
    English
    arrow-up
    29
    ·
    edit-2
    2 days ago

    Dear @tired_n_bored@lemmy.world, please accept a warm hug and a great high-five! I believe you are a true, accountable, and actual developer who believes in purpose and self-confidence. That is what matters: developing not only some software but your own mind and yourself as a human individual, with a unique and ineffably magnificent mind.

    Just as, if you want nonsensical articles with invented facts, then article writing is a practical use case. But as I’ve pointed out already, no reputable editorial outlet is now using LLMs to write their articles…
    Source: https://lemmy.world/comment/22132775

    ---

    The point is, there has always been a trade-off between the speed of development and quality of engineering…
    Source: https://lemmy.world/comment/22351660

    ---

    “I’m Feeling Lucky” intelligence is optimized for arrival, not for becoming. You get the answer but nothing else (keep in mind we are assuming that it’s a good answer).

    You don’t learn how ideas fight, mutate, or die. You don’t develop a sense for epistemic smell or the ability to feel when something is off before you can formally prove it…
    Source: https://lemmy.world/comment/22522382

    It’s a freaking mess, indeed. I am a web and network security developer myself, since ~2000. And I do feel a slight panic myself, trying to learn about it when time is available, with very little use of AI at work, almost none. And I do eliminate anything AI from my personal life, including art and hobbies, and I always will.
    Here, even if someone pushes you toward it, please keep standing your ground respectfully, prioritizing confidence and self-awareness. In the end, we must always stay human, I believe. Please don’t lose yourself in the trends and unstable technology. What is important is for a human to stay a responsible, neat, respectful, and loving human, for the sake of a magnificent future, your family, and your love for people, purpose, and art.

    I believe it’ll be alright, but we must always be careful, as developers, and people.
    And, sure, I wish you peace, stability, organization, and success!

    Related: https://youtu.be/dbMXi9q78Tk (I almost quit YouTube… In this video, you’ll hear my honest take on AI burnout, the viral Matt Schumer article, OpenClaw, tech layoffs in 2026, and why the anxiety is the real problem… )

    • Today@lemmy.world
      link
      fedilink
      arrow-up
      8
      ·
      2 days ago

      Just had this conversation with my honey. People think it’s intelligent, but it’s really just data gathering, which is only as good as the data, and it doesn’t discern facts from crap. Search engines have become so horrible. It now takes three times as long to wade through the BS to find an actual answer.

  • thisbenzingring@lemmy.today
    link
    fedilink
    English
    arrow-up
    16
    ·
    2 days ago

    Today I had a problem with one of my coworkers’ anxiety about a task; they were fixated on an upgrade they did in the past. They’re always using AI to format their emails and such. It’s obvious from the annotations in their emails versus their Teams chats. So they were giving me grief about this task, and I kept telling them they weren’t upgrading, they were exporting and importing. It became such a big deal that we had a supervisor and manager meeting about it. This person is so wrapped up in AI sometimes that I don’t think they can do the job at a basic level.

  • ivanvector@piefed.ca
    link
    fedilink
    English
    arrow-up
    14
    ·
    2 days ago

    Definitely not a waste to keep your skills sharp. You will be needed when the inevitable crash comes and nobody knows how to even troubleshoot, let alone code a fix.