• 1 Post
  • 3.95K Comments
Joined 3 years ago
Cake day: July 7th, 2023








  • Voroxpete@sh.itjust.works to 196@lemmy.blahaj.zone · MTG Mangione rule · edited 12 hours ago

    Yeah, I mean, obviously a board wipe that evades your own stuff is better, but if all you’ve done for the first four turns is set up this combo, you don’t even have a board state to protect. And a wrath can’t be shocked off the board before it can fire.

    This is exactly why it’s just not actually all that busted. There are simply better ways to get the effect it gives you.


  • As I’ve said elsewhere here, I really don’t have a problem with people holding a moral stance against the use of genAI. It’s fine to just say “However useful this might be, I don’t want to see it used because I think it has too many ethical costs/consequences.” But blanket-accusing all work that involved genAI in any capacity of being “slop” isn’t holding a moral stance, it’s demanding that reality conform to your beliefs: “I hate this, therefore it must be terrible in every respect.”

    If you truly hold a well-founded ethical stance against the use of genAI, that stance shouldn’t be threatened by people doing good and effective work with genAI, because its effectiveness should have nothing to do with your objections.


  • Yes and no. It’s a two card combo, and the mana costs are all sequential, so in theory you can do it perfectly on curve. On the other hand, it’s not really worth doing in singleton formats given that it relies on two very specific cards for a payoff that’s… fine. Like, pretty good, but not worth two cards for something that needs a perfect draw to work.




  • It’s really not. As a 1/1 with no other abilities it does nothing to improve your board state. Its only value is being repeatable removal, and it can be pinged off the board for nothing. You also have to wait a turn to fire it, which gives your opponent(s) a whole turn to deal with it.

    It’s a good card, arguably a seriously underappreciated one, but it’s telling that it’s a rare that goes for $2 and has an inclusion rate of just 0.32% on EDHREC.

    It does combo hilariously with Thornbite Staff though.



  • The thing is, you’re conflating ethical and practical concerns here. The commenter you’re responding to is clearly talking about the practical aspects of using AI tools.

    If you have a fundamental moral issue with AI that is entirely independent of how efficacious it is, that’s fine. That’s a completely reasonable position to hold. But don’t fall into the trap of wanting every use of genAI to be impractical because it aligns with your morality to feel that way.

    If this is an ethical stance that you truly hold, you should be willing to believe that using these tools is bad even when they’re effective. But a lot of people instead have to insist that every use of AI is impractical, in the face of any evidence to the contrary, because they’ve talked themselves into believing that on some fundamental level. Like “If AI is ever useful, that means I’m wrong about it being immoral.”


  • But that kind of proves their point, right?

    Yes, a lot of projects have had issues with contributors who push unreviewed AI slop that they don’t understand, ultimately creating more work for the project. Or with avalanches of AI code review bug reports that do nothing to help. But that’s not what’s happening here.

    In this case, the main developer of the project is choosing to use AI, on their own terms, because they find it helpful, and people are giving them shit for it. It’s their project and they feel this technology is beneficial. Isn’t that their call to make? Why are people treating the former and the latter as completely interchangeable scenarios when they’re clearly not? It kind of does suggest that people are coming at this from a more ideological rather than rational perspective.


  • Nothing is being hidden from review. The code is open source. They removed the specific attribution that indicates which parts of the code were created using Claude. That changes absolutely nothing about the ability to review the code, because a code review should not distinguish between human written code and machine written code; all of it should be checked thoroughly. In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to focus only on reviewing the machine-written code.


  • Cory Doctorow explores this in his most recent column, describing the concept of “Centaurs” and “Reverse Centaurs”.

    A centaur is a human piloting a machine body. They’re faster and stronger because of the machine, but the human is in charge. A reverse centaur is a machine piloting a human body; a brainless head on a largely inferior body.

    In the context of generative AI, centaurs are people who use AI tools carefully, intentionally, and of their own volition. They have the knowledge necessary to assess when the output of the machine tools is good or bad, and the machine simply becomes, like any other tool, a way of leveraging their abilities more efficiently.

    A reverse centaur is when you have a “human in the loop.” An intern told to write, in only a few days, a stack of columns that would take ten experienced writers a week, because don’t worry, you can just use ChatGPT, it’ll be so fast. That person really only exists for two purposes: to push the buttons that make the machine go, and, far more importantly, to eat the blame when the machine fucks up. They were the “human in the loop,” so they were supposed to catch the bad output, but they were never given the time or the expertise to do so, and they were placed in a scenario where using genAI was the only possible choice to get the outcome that was demanded of them.

    I don’t see the use of AI tools, especially in areas that they are well suited to like coding, as automatically meriting the “AI slop” descriptor. Gen AI can be extremely effective as a coding assistant, when used with care, and by someone with enough knowledge to read the output and understand it completely. As you say, a huge amount of normal everyday coding has, for decades, been copy and pasting code blocks, because why the fuck would you repeat work that someone else has already done??? And for decades bad coders have screwed themselves over by copy-pasting code they don’t understand or didn’t bother to properly read and parse. That’s nothing new.

    Now, it’s also completely reasonable for people to hold ethical objections to genAI that are entirely separate from any practical concerns. If someone’s position is “I do not care how good the output is, because I believe it comes from a fundamentally immoral technology”, I think that’s a completely cogent moral stance. I have no argument against that. I’d just ask that you not use the term “AI slop” when describing that objection, because I think it really muddies the waters and makes it extremely unclear what you’re actually objecting to. If your problem is one of ethics, say that. Don’t just re-use a term you heard elsewhere that’s tangentially related.



  • No, that stat is nothing outlandish. A little outdated - I think it might be from a study in 1982 - but not jarringly at odds with other data I’ve seen. The term “illiterate” means different things in different contexts… and unfortunately often means different things even in the same context. Generally though, for statistical data, “illiterate” tends to mean “functionally illiterate.” People who are functionally illiterate are perfectly capable of reading and writing in that language at a children’s level, but they struggle with complex terminology and long (especially technical) passages of text. Think about the sort of person who could read a warning sign, but not a manual. The kind of person Ikea assembly instructions were made for. If anything, 13% starts to sound low when you realise that’s the usual standard.