Related:

This is in a PR where Shougo, another long-time contributor, communicates entirely in walls of unparseable AI slop text: https://github.com/vim/vim/pull/19413

Thank you for the detailed feedback! I’ve addressed all the issues:

Thank you for the feedback! I agree that following the Vim 8+ naming convention makes sense.

Thank you for the feedback on naming!

Thanks for the suggestion! After thinking about this more, I believe repeat_set() / repeat_get() is the right choice:

Thank you for the feedback. A brief clarification.

https://hachyderm.io/@AndrewRadev/116176001750596207

@AndrewRadev@hachyderm.io

  • hperrin@lemmy.ca · 2 days ago

    I spent literally all day yesterday working on this:

    https://sciactive.com/human-contribution-policy/

    I’ve started to add it to my projects. Eventually, it will be on all of my projects. I made it so that any project could adopt it, or modify it to their needs. It’s got a thorough and clear definition of what is banned, too, so it should help any argument over pull requests.

    Hopefully more projects will outright ban AI generated code (and other AI generated material).

    • Bibip@programming.dev · 4 hours ago

      hi, i have strong feelings about the use of genai but i come at it from a very different direction (story writing). it’s possible for someone to throw together a 300 page story book in an afternoon - in the style of lovecraft if they want, or brandon sanderson, or dan brown (dan brown always sounds the same and so we might not even notice). now, the assumption that i have about said 300 pager is that it will be dogshit, but art is subjective and someone out there has been beside themselves pining for it.

      but this has always been true. there have always been people churning out trash hoping to turn a buck. the fact that they can do it faster now doesn’t change that they’re still in the trash market.

      so: i keep writing. i know that my projects will be plagiarized by tech companies. i tell myself that my work is “better” than ai slop.

      for you, things are different. writing code is a goal-oriented creative endeavor, but the bar for literature is enjoyment, and the bar for code is functionality. with that in mind, i have some questions:

      if someone used genai to generate code snippets and they were able to verify the output, what’s the problem? they used an ersatz gnome to save them some typing. if generated code is indistinguishable from human code, how does this policy work?

      for code that’s been flagged as ai generated- and let’s assume it’s obvious, they left a bunch of GPT comments all over the place- is the code bad because it’s genai or is it bad because it doesn’t work?

      i’m interested to hear your thoughts

      • hperrin@lemmy.ca · edited · 1 hour ago

        That’s a very good question, and I appreciate it.

        I put a lot of this in the reasoning section of the policy, but basically there are legal, quality, security, and community reasons. Even if the quality and security reasons are solved (as you’re proposing with the “indistinguishable from human code” aspect), there are still legal and community reasons.

        Legal

        AI generated material is not copyrightable, and therefore licensing restrictions on it cannot be enforced. It’s considered public domain, so putting that code into your code base makes your license much less enforceable.

        AI generated material might be too similar to its copyrighted training data, making it actually copyrighted by the original author. We’ve seen OpenAI and Midjourney get sued for regurgitating their training data. It’s not farfetched to think a copyright owner could go after a project for distributing their copyrighted material after an AI regurgitated it.

        Community

        People have an implicit trust that the maintainers of a project understand the code. When AI generated code is included, that may not be the case, and that implicit trust is broken.

        Admittedly, I’ve never seen AI generated code that I couldn’t understand, but it’s reasonable to think that as AI models get bigger and more capable of producing abstract code, their code could become too obscure or abstracted to be sufficiently understood by a project maintainer.

      • hperrin@lemmy.ca · 1 day ago

        Ok, yeah, I’ll make a post for it.

        Feel free to share it anywhere. :)

      • hperrin@lemmy.ca · 2 days ago

        Basically the best you can do is continue as normal, and if someone submits something that says it is or obviously is AI, point to this policy and reject it. Just having the policy should be a decent deterrent.

      • Jankatarch@lemmy.world · edited · 1 day ago

        Same mindset as “You don’t need a perfect lock to protect your house from thieves, you just need one better than what your neighbors have.”

        If a vibecoder sees this they will not bother with obfuscation and simply move onto the next project.

      • Retail4068@lemmy.world · 1 day ago

        No, it’s a prejudiced hot take that’s completely and utterly unenforceable which will be seen as some Luddite behavior in 10 years when everyone is using the tooling.

          • Retail4068@lemmy.world · 1 day ago

            I did. And you’re worried about clankers being able to comprehend as well as a human 🤣, good Lord the bar is low.

            • Scubus@sh.itjust.works · 21 hours ago

              Ok that’s really funny and I do agree with you, but I think you might be coming at this a little… unhinged. The issue with this is that it is unenforceable and honestly somewhat pointless. If AI tools are not up to scratch, then that will always be reflected in the quality of the code. Bad code is bad code, it doesn’t matter what made it. A lot of people seem to think AI is synonymous with bad code, and if that is the case, simply ban bad code.

              The issue they are going to run into is twofold:

              Firstly, what qualifies as “using AI”? Admittedly I haven’t actually read their licensing, but I’m just going to take a guess and say that it bans all forms of AI used anywhere in production. Almost every editor I use these days has auto predict. It’s rarely useful, but if it does happen to guess the rest of the code I was already going to type, and I accept that, did I use AI to assist my coding? Back in the day before it was an LLM, the auto predict was actually decent, so not all of them use AI. How would you even know whether yours is AI or not?

              The second issue is an issue of foresight. When the AI tools do become up to scratch, that will be reflected in the quality of their code. Suddenly AI generated code is faster, more efficient, and easier to understand all simultaneously. Anyone using this license is effectively admitting that theirs is the inferior option.

              It’s always hilarious to me when people ask whether something is AI slop. I dunno man, has your ability to detect whether something is good been reduced to AI slop? If it’s good, it’s good. If it’s not, it’s not. Either you like it or you don’t. Feels very similar to transphobes saying they can always tell. If that’s true, and AI really is always going to be worse, you should never have to ask whether something is AI slop, you should just be able to tell. Otherwise it’s just slop, no AI necessary.

          • Retail4068@lemmy.world · edited · 10 hours ago

            Yes it does. Folks who just want to screech went crazy. Like, two of you actually engaged and brought valid concerns. Y’all are a CRAZY prejudiced bunch and hate being called out just as much as the next shit flinging monkey tribe.

            You actually think Lemmy is better behaved 🤣🤣🤣🤣

    • thethunderwolf@lemmy.dbzer0.com · 1 day ago

      “AI generated” means that the subject material is in whole, or in meaningful part, the output of a generative AI model or models, such as a Large Language Model. This does not include code that is the result of non-generative tools, such as standard compilers, linters, or basic IDE auto-completions. This does, however, include code that is the result of code block generators and automatic refactoring tools that make use of generative AI models.

      As “artificial intelligence” is not that well defined, you could clarify what the policy defines “AI” as by specifying that “AI” involves machine learning.

      • hperrin@lemmy.ca · 1 day ago

        “Generative AI model” is a pretty well defined term, so this prohibits all of those things like ChatGPT, Gemini, Claude Code, Stable Diffusion, Midjourney, etc.

        Machine learning is a much more broad category, so banning all outputs of machine learning may have unintended consequences.

  • chonglibloodsport@lemmy.world · 1 day ago

    Shougo is Japanese. I’m guessing he communicates like that because he uses translation rather than trying to communicate in broken English.

  • maegul (he/they)@lemmy.ml · 2 days ago

    Couldn’t help but notice the casual gendering of Claude to “he” as well.

    Someone somewhere made the important observation not long ago that computer assistants tended to be gendered female when more like a secretary (Siri and Alexa) but now that AIs are “intelligent” and powerful … Claude now has to be a male.

    Especially weird (and telling?) when it is objectively gender neutral as it’s not human.

    • TheTechnician27@lemmy.world · edited · 2 days ago

      Couldn’t help but notice the casual gendering of Claude to “he” as well.

      “Claude” is a male given name. If you think it’s actually a problem, blame Anthropic for giving their LLM a gendered name. I’ve never gendered AI assistants, but I’m not going to begrudge people who do when it’s in the name (or in the case of old Siri, the voice, which would later be the default rather than only option).

      Women named “Claude” exist, but they’re staggeringly outnumbered by men to a point where most people don’t even know of women named “Claude” – let alone would immediately associate it as masculine.

      • Natal@lemmy.world · 10 hours ago

        Claude is neutral and can be given to women too, though it lost popularity compared to the male version. There is even a fruit called “la reine Claude”, which back-translates to “Queen Claude”. But yeah, Claude was male in my head too, so I’m definitely guilty of that despite knowing better and actively trying not to anthropomorphize large lying models.

      • amino@lemmy.blahaj.zone · 2 days ago

        it’s extremely telling however the shift in marketing. i don’t believe giving the coding plagiarism bot a male name is coincidental. most feminists would probably agree. we’ve known for decades that chatbots were given female names because they’re trying to reenact some tradwife fetish and attract a male audience

        • TheTechnician27@lemmy.world · edited · 2 days ago

          it’s extremely telling however the shift in marketing

          And your hypothesis doesn’t fall apart now why, exactly? AI assistants are more secretary-like than they’ve ever been. “Write me an email.” “Proofread my work.” Beyond that, people are using LLMs as substitutes for significant others.

          And yet now, Microsoft migrated “Cortana” to “Copilot”, Siri is more gender-neutral than ever, Alexa still exists off massive brand recognition, and other major AI services are called e.g. “ChatGPT”, “Claude”, “DeepSeek”, and “Grok”. Collectively, that’s gender-neutral.

          At most, the hypothesis used to be true but isn’t anymore, because you can literally make an LLM act like a tradwife now if you’re so ~~debased~~ inclined, yet the names are broadly neutral. The MIT Press has a good, lengthy article about the history of gender in speech synthesis, as an aside.

      • maegul (he/they)@lemmy.ml · 1 day ago

        Not blaming anyone, this is social commentary.

        But like the neutral “it” is right there.

        In a world that’s both charged around gender and pronoun usage, and focused on the nature and value of LLMs … I think it’s weird that there isn’t more commonly pushback enforcing the non-human neutral for the simple reason that it’s an objective fact amidst a swampy pool of (mis-)information synthesis.

        A little like the bechdel test, I feel like it’s the casualness and indifference around this gender bias (at least at the moment) that’s interesting and telling.

      • CXORA@aussie.zone · 2 days ago

        Yes… they chose to give the tool a male name. Did this need to be said ?

        • TheTechnician27@lemmy.world · edited · 2 days ago

          Yes, because the person I was replying to said:

          Couldn’t help but notice the casual gendering of Claude to “he” as well.

          “Casual gendering” is implying the Vim author calling Claude “he” was totally out of the blue. It’s not “casual”; it’s something Anthropic baked in by giving it a male name.

          • CXORA@aussie.zone · 2 days ago

            Casual doesn’t mean “out of the blue”; it means reflexive or without effort.

            • TheTechnician27@lemmy.world · edited · 2 days ago

              Sure, I know what “casual” means and that out of the meanings, a more apt one I should’ve chosen would’ve been “incidental”. That doesn’t change my overall point that they’re putting the entire onus of the gendering on the author as though it isn’t the same as someone calling Alexa “she”.

              Replace this entire scenario with someone calling Alexa “she”: the accusation of “casual gendering” would obviously be ridiculous, because Alexa has a popular female given name.

    • GrindingGears@lemmy.ca · 1 day ago

      Let’s not lose focus on the more immediate concern here: that this person is using a human pronoun to describe a computer.

    • unknownuserunknownlocation@kbin.earth · 2 days ago

      Let’s not over interpret things here. Siri and Alexa are both mainly voice assistants, or at least started out as such. Studies have been conducted that show people trust female voices more than male voices. So the choice of female voices was obvious, and having female names is nothing surprising.

      Also, Siri, Alexa and Cortana were seen as “intelligent” at the time, as well (or were supposed to be seen, depending on who you ask).

      • maegul (he/they)@lemmy.ml · 1 day ago

        Also, Siri, Alexa and Cortana were seen as “intelligent” at the time, as well (or were supposed to be seen, depending on who you ask).

        Intelligent for the time, sure, but were they ever pitched as doing more than a secretary that never encroaches on or gets involved with your actual job and cognitive skills? Because that’s the divide being enforced: women for the menial, dumb tasks and men for the serious, difficult, and actually valuable and important stuff.

    • Retail4068@lemmy.world · 1 day ago

      Or maybe, just maybe, it has a guys name.

      Good Lord, y’all make up some crazy shit to whine about.

    • xep@discuss.online · 1 day ago

      Of all the problems with these things we’re taking issue with the naming?

      • Ether@aussie.zone · 1 day ago

        Oh no! Another issue! I’m a jellyfish and can only respond to a limited number of stimuli at a time because I have no centralised nervous system capable of organising my critiques into diverse and disparate arguments! I can only talk about vanishingly simple problems that are one-dimensional enough for me to tunnel vision on repeating the same talking points, preferably no longer than a dozen syllables total to accommodate not having a long-term memory centre due to my aforementioned lack of a brain 🪼🥺

        I am very tired and have gone absolutely overboard on this comment, to the person I’m responding to pls don’t take this personally, more rational, less sleepy me doesn’t want to be a troll. But SERIOUSLY? Your argument isn’t even “this isn’t a problem”, it’s “I can’t see the value in doing a full deconstruction of this novel ethical scenario and just want to be a sheep saying it’s bad for the reason my favourite shepherd says so, not because of healthy discussion of ALL the pros and cons.” Reminds me of those cringe posts from a couple months ago where people were saying “the epstein files are a distraction! don’t forget about my favourite political issue {insert valid issue}”. I’m going to be a hypocrite for a second bc this long arse comment is 1,000,000x worse than yours, but consider why you’re commenting before you hit post next time.

        • xep@discuss.online · 1 day ago

          That’s a lot of words you’re putting in my mouth, and a lot of names you’re calling a stranger on the internet. But you seem like an alright person, so I hope your day gets better.

  • hayvan@piefed.world · 2 days ago

    The devs do have my sympathy; they dedicate their time and energy to these projects and start burning out.
    The solution obviously shouldn’t be drowning them in slop. They should just be slowing down. Vim has been an excellent and functional tool for many years now; it doesn’t need more speed.
    There are better ways to use LLMs as a productivity tool.

    • unexposedhazard@discuss.tchncs.de · edited · 1 day ago

      I see this excuse of burnout every time it comes to LLM use, but I honestly do not buy it. You can’t tell me every other dev out there just burnt out at the same time, in sync with the release of LLM coding assistants. If you use LLMs like this you simply don’t care about the project anymore and should move on with your life. It’s better for everyone if it gets abandoned by the original dev and forked by ones that care. Sometimes you just gotta let go.

    • Pommes_für_dein_Balg@feddit.org · 1 day ago

      What I’m wondering is, why does Vim need new features in the core repo at all?
      It’s finished software at this point.
      The dev should just do security upgrades and let extensions developed by other people handle additional functionality.

  • peanuts4life@lemmy.blahaj.zone · 1 day ago

    I would like to mirror another commenter and mention that Shougo is Japanese and is probably using Claude to communicate.

  • fdnomad@programming.dev · 2 days ago

    It’s such a monumental waste of LLMs to include these slop phrases.

    Employee 1 enters a prompt to send a slop mail that is so garbage it is unbearable to read using a brain.

    So employee 2 either summarizes the slop mail using an LLM too or skips obtaining the information entirely and just goes straight to answering by prompting the next slop mail.

    I wonder if that’s by design - to make interacting with slop so painful that human-to-human communication will not happen without a LLM in between anymore.

    • Mothra@mander.xyz · 2 days ago

      I originally meant to leave a much shorter comment; apologies.

      I can’t code to save my life. However I find your observation interesting. The way I see it, AI, no matter where, is eroding human to human interactions. It becomes the middleman for everything.

      It’s really obvious with personal research. A couple years ago, if you wanted to start, say, growing tomatoes in your backyard, you would have searched people’s comments on a variety of media platforms and read a few books or blogs. You would have asked questions of a bunch of people with some experience, left a like or upvote on people posting photos of their tomatoes, and used your own judgement to discern what was good-quality advice and what wasn’t.

      It would have taken you days. But all that interaction is very rewarding especially for those authoring comments, blogs, books, and photos of their experiences. Because nobody makes something just to be ignored.

      Now LLM does all that process for you. In a matter of seconds. And giving no feedback or interaction to anyone whose information was used. It’s depressing, but I’m intrigued to see how it plays out.

      • fdnomad@programming.dev · 2 days ago

        I agree. Specifically for your example, I think the transformation has been going on for a while with the aggressive monetization of internet content / the ad industry and the general downfall of Google search. LLMs could be the final nail in the coffin for niche expertise on the broader internet.

        I too am curious to see how AI companies will try to overcome the lack of human generated content to train their models on.

      • tristan@tarte.nuage-libre.fr · 1 day ago

        I had this reflection 3 years ago, and I think that’s where we’re headed.

        The internet is already unusable for search without prompting an LLM to gather the info you need for you, and it’s getting worse every month.

  • hexagonwin@lemmy.today · 2 days ago

    wtf. i really like vim. is everyone really using neovim instead and there’s no good dev maintaining vim now?

  • Brummbaer@pawb.social · 2 days ago

    I wonder what Bram’s stance would have been on AI.

    Anyway, looks like it’s time to learn emacs.

    • [object Object]@lemmy.world · 1 day ago

      Use Doom Emacs, then it’s usual Vim bindings + the space bar for fancy commands. The difficult part would be Emacs Lisp for customization, but then again it’s way better than Vimscript.

    • jeffep@lemmy.world · 1 day ago

      If you have a few days and feel like staying inside for a bit, check out the System Crafters “Emacs From Scratch” videos on YouTube (perhaps also elsewhere). They are awesome and get you started better than just downloading Spacemacs or so, but they take some time.

    • ea6927d8@lemmy.ml · 1 day ago

      The learning curve is a bit steep, but if you already figured out – and felt comfortable in – Vim, it shouldn’t be that hard.

      Some people suggest Doom Emacs or evil, but I enjoy learning ‘vanilla’ first, then going for some framework or customization layer afterwards, if I do it at all.

  • AVengefulAxolotl@lemmy.world · 2 days ago

    Having an AI understand your codebase and potentially answer an issue (which might turn out not to be an issue) is great, I think.

    The problem I see here is that you have no idea a bot is answering. Why isn’t there a ‘shougo-bot’ / ‘vim-helper-bot’ / whatever named bot user for it?

    “Talking” to an AI should always be disclosed, everyone feels betrayed whenever they find out that a clanker is on the other side of the channel.

    • riccardo@lemmy.ml · edited · 2 days ago

      I don’t think those comments are generated and posted automatically by a bot plugged into their GitHub repo. I think they are generated by the author using an LLM and copy-pasted there — or, if the account is plugged into some LLM, they are at least manually reviewed. The answers to the replied-to comments are posted from 10 minutes to some hours later. I don’t think they lost their mind to the point of giving unvetted access to their reputable account to an AI that simply posts for them. That said, they could at least strip the obvious parts that give off very LLM vibes, specifically those quoted in the OP.

  • badbytes@lemmy.world · 1 day ago

    IMHO, the logo shouldn’t have the anti-AI symbol. I like the quill. Maybe a more positive DNA symbol.

  • mrmaplebar@fedia.io · 2 days ago

    I’m probably more surprised than I should be that so many programmers are so pathetically lonely and delusional.

  • HuntressHimbo@lemmy.zip · edited · 2 days ago

    Well, that’s a first. First time I’ve ever recognized a GitHub name I’ve pulled from before in a drama article. Used Dein in my Vim config a while back. RIP

    Edit: rearranged, added RIP