• 10 Posts
  • 2.58K Comments
Joined 2 years ago
Cake day: July 14th, 2023


  • Meh. They’re dealing with two orthogonal problems here:

    1. There’s a right amount of documentation. It’s not zero, but it’s also not “every thought you’ve ever had”. The more your documentation can be generated or validated automatically, the more you can reasonably sustain.
    2. It’s expensive to be wrong. You can deal with that by either doing pre-work in order to reduce the chance you’ll be wrong (which increases the cost of later finding out you were wrong), or sprinting towards a deliverable in order to minimize the cost of being wrong (which increases the chance that deliverable will be wrong).

    You’re probably better off looking at each of those problems independently first and deciding where on the spectrum your team would thrive. RFCs might hit the sweet spot for both. But if you don’t ask the deeper question, you might just make things worse.


  • I mean, the whole history of banking is: “Bankers think of a way to do reckless things that are wildly profitable (in the short term) and catastrophic (in the long term). They offer bribes and other corrupt incentives to their watchdogs to let them violate the rules, which leads to utter disaster.” From the 19th century “panics” to the crash of '29 to the S&L collapse to the 2008 Great Financial Crisis and beyond, this just keeps happening.

    Much of the time, the bankers involved have some tissue-thin explanation for why what they’re doing isn’t really a violation of the rules. Think of the lenders who, in the run-up to the Great Financial Crisis, insisted that they weren’t engaged in risky lending because they had a fancy equation that proved the mortgage-backed securities they were issuing were all sound, and that it was literally impossible for them all to default at once.

    The fact that regulators were bamboozled by this is enraging. In hindsight (and for many of us at least, at the time), it’s obvious that the bankers went to their watchdogs and said, “We’d like to break the law,” and the watchdogs said, “Sure, but would you mind coming up with some excuse that I can repeat later when someone asks me why I let you do this crime?”

    https://pluralistic.net/2025/12/13/uncle-sucker/






  • You can run Linux on ARM. I do. And let’s not act like x86 wasn’t full of Microsoft-led efforts to undermine Linux. Anyone who’s had to disembowel their BIOS settings to the tune of “Your PC will be unsafe! Are you sure you want to run a LEGACY OS???” is familiar.

    I’m not a huge fan of the idea of buying CPU+GPU+RAM+mobo all as one unit. But like… that’s what tends to happen. Audio cards, SATA drives, network cards, these things all used to be separated until motherboards offered features to streamline things.

    The real problem is not form factor, but lack of competition. If there were 10-15 Qualcomms out there, offering different combos and a la carte options, there’d be no problem. It’s only because there are a tiny number of dominant players in the space that technical consolidation automatically translates to abusing consumers.


  • Well… modularity is kinda coming to an end anyway, regardless of supply chain moves. Apple’s M series has shown that op decoders and unified memory are the low-hanging fruit for overall system performance improvements, and that means less modularity.

    I think Valve sees the writing on the wall and is trying to get ahead of the game via FEX and the Steam Frame. Intel and AMD are pretty much stuck playing Nvidia’s game at this point, and Qualcomm has an incredible opportunity here. I’m still rooting for RISC-V, and I think it may end up being the long-term winner in 10-15 years’ time.

    But either way, x86-style modularity is not long for this world. From a purely technical standpoint, I think that’s good. Adding the political and economic situation into the mix… well… fuck, we’re mega-fucked. About the only thing we have going for us as consumers is the fact that this is already headed towards a reset. So if we do gain some leverage, we can make a big change all at once. If we don’t though… things will get much worse.



  • Yeah, we need to be careful about distinguishing policy objectives from policy language.

    “Hold megacorps responsible for harmful algorithms” is a good policy objective.

    How we hold them responsible is an open question. Legal recourse is just one option. And it’s an option that risks collateral damage.

    But why are they able to profit from harmful products in the first place? Lack of meaningful competition.

    It really all comes back to the enshittification thesis. Unless we force these firms to open themselves up to competition, they have no reason to stop abusing their customers.

    “We’ll get sued” gives them a reason. “They’ll switch to a competitor’s service” also gives them a reason, and one they’re more likely to respect — if they see it as a real possibility.




  • It’s pretty apt, honestly. It’s just the next step of the climate-denial and cancer-denial playbooks.

    We know that the tech bosses are aware of how harmful their stuff is. We know that they hire experts specifically to make their stuff as addictive as possible. We know they bribe the hell out of politicians to avoid getting regulated. We know they cook their books and launder money like crazy. We know their financial models are predicated on getting everyone to use an ever-increasing dose of their stuff. We know that people suffer horrific conditions to help build their devices and moderate their content cesspools.

    It may seem crass to compare tech bosses to narco kingpins. But that’s because their methods are crass. They want to seem sophisticated and unique. But they’re not.