• 35 Posts
  • 2.94K Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • This is so stupid.

    To me, “AI” in a car would be like highlighting pedestrians in a HUD, or alerting you if an unknown person messes with the car, or maybe adjusting mood lighting based on context. Or safety features.

    …Not a chatbot.

    I’m more “pro” (locally hostable, task specific) machine learning than like 99% of Lemmy, but I find the corporate obsession with cloud instruct textbots bizarre. It would be like every food corp living and breathing succulents. Cacti are neat, but they don’t need to be strapped to every chip bag, every takeout, every pack of forks.


  • I feel like there’s a “bell curve” for Linux gaming enjoyment.

    If you’re even a little techy (as in, not someone who uses their PC begrudgingly and mostly lives in iOS or whatever), the switch will feel like a relief. But many PC users aren’t; they aren’t interested in what an OS or a file system is, they just want League or Sims to pop up and that’s it.

    …And then there’s me. I use Linux for hours every day, and I’m pretty familiar with the graphics stacks and such… But I need the performance of the stripped, neutered Windows install I dual boot for the weird, modded sim games I sometimes play. And frankly, it’s more convenient for many titles I need to get up and running quickly for coop or whatever. There are also tools like SpecialK that don’t work on Linux and help immensely with certain games/displays.


  • Not everyone’s a big kb/mouse fan. My sister refuses to use one on the HTPC.

    Hence I think that was its not-insignificant niche: couch usage. Portable keyboards are really awkward and clunky on laps, and the Steam Controller is way better and more ergonomic than an integrated trackpad.

    Personally I think it was a smart business decision, because of this:

    “It doesn’t have 2 joysticks so I just buy an Xbox one instead.”

    No one’s going to buy a Steam-branded Xbox controller, but making it different gave people a reason to. And I think what killed it is that it wasn’t plug-and-play enough, e.g. it didn’t work out of the box with many games.


  • Hmm, an aging population and the job/company market are going to exacerbate this even more. How do we fix it?

    • Democrats: Let’s keep doing exactly what we’ve been doing since the 60s! With leaders from the 60s! Strangles Mamdani in the corner.

    • Republicans: Let’s cut immigration (the only thing making our population skew young), make healthcare even more private, tax/inflame all our manufacturing suppliers, and make life even more miserable for child bearing women! Oh, and blow up the national debt so none of this can be fixed later. Now buy our crypto, YOLO!

    What about, like, enticing immigrants to balloon manufacturing with education grants and entrepreneur seeding, and, you know, making it not a nightmare to naturalize, leveraging a huge advantage countries like China don’t have… Or taxing the snot out of the rich and tightening government spending, modernizing the military toward cheaper drone-based warfare, and regulating healthcare to be less for-profit, all over reasonable timeframes?

    …Nah. Let’s specifically not do that.



  • A lot, but less than you’d think! Basically an RTX 3090/Threadripper system with a lot of RAM (192GB?)

    With this framework, specifically: https://github.com/ikawrakow/ik_llama.cpp?tab=readme-ov-file

    The “dense” part of the model can stay on the GPU while the experts can be offloaded to the CPU, and the whole thing can be quantized to ~3 bits average, instead of 8 bits like the full model.
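    For a rough sense of scale, here’s a back-of-the-envelope estimate of weight storage at those two quantization levels. These are ballpark figures only; real quantized file sizes vary with the quant mix, per-layer bit widths, and format overhead:

    ```python
    # Rough memory estimate for DeepSeek 671B weights at different quantization
    # levels. Ignores KV cache, activations, and file-format overhead.

    def model_size_gb(params_billion: float, bits_per_weight: float) -> float:
        """Approximate weight storage in gigabytes."""
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    full = model_size_gb(671, 8)   # full 8-bit model
    quant = model_size_gb(671, 3)  # ~3 bits/weight average

    print(f"8-bit: {full:.0f} GB, 3-bit: {quant:.0f} GB")
    # → 8-bit: 671 GB, 3-bit: 252 GB
    ```

    So at ~3 bits average, the weights just about fit in 192GB of system RAM plus 24GB of VRAM, which is why that hack works at all.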


    That’s just a hack for personal use, though. The intended way to run it is on a couple of H100 boxes, serving many, many, many users at once. LLMs run more efficiently when they serve requests in parallel, e.g. generating tokens for 4 users isn’t much slower than generating them for 2, and DeepSeek explicitly architected it to be really fast at scale. It is “lightweight” in that sense.


    …But if you have a “sane” system, it’s indeed a bit large. The best I can run on my 24GB vram system are 32B - 49B dense models (like Qwen 3 or nemotron), or 70B mixture of experts (like the new Hunyuan 70B).


  • DeepSeek, now that is a filtered LLM.

    The web version has a strict filter that cuts it off. Not sure about API access, but raw DeepSeek 671B is actually pretty open, especially with the right prompting.

    There are also finetunes that specifically remove China-specific refusals. Note that Microsoft actually added safety training to “improve its risk profile”:

    https://huggingface.co/microsoft/MAI-DS-R1

    https://huggingface.co/perplexity-ai/r1-1776

    That’s the virtue of an open-weights LLM: over-filtering is not a permanent problem, since one can tweak it to do whatever they want.


    “Grok losing the guardrails means it will be distilled internet speech deprived of decency and empathy.”

    Instruct LLMs aren’t trained on raw data.

    It wouldn’t be talking like this if it were just trained on randomized, augmented conversations, or even mostly Twitter data. They cherry-picked “anti-woke” data to do this real quick, and the result effectively drove the model crazy. It has all the signatures of a bad finetune: specific overused phrases, common obsessions, going off-topic, and so on.