Join fediverse they said, it’ll be fun, what’s the worst that could happen
Have you heard Hermanos Gutiérrez?
You put a few GPTs in a trenchcoat and they’re obviously AI. I can’t speak about OpenAI’s offerings since I won’t use them as a cloud service, but the local DeepSeek I’ve tried is certainly AI. People keep moving the goalposts, with what looks to me like a determination to avoid seeing the future that’s already here. Download deepseek-coder-v2 16b if you have 16 GB of RAM and 10 GB of storage and see for yourselves; the requirements are ridiculously low for what it can do, it uses 50% of four CPU cores for about 15 seconds to solve a problem with detailed reasoning steps.
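If you want to try it, something like this should do it with ollama (assuming the tag on their registry is deepseek-coder-v2:16b; check the model page for the exact name):

# grab the ~10 GB quantized model
ollama pull deepseek-coder-v2:16b
# ask a one-shot question straight from the shell
ollama run deepseek-coder-v2:16b "estimate how many piano tuners there are in Chicago, show your reasoning"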
OK, it just spits out predicted tokens, but it does so in answer to what you asked, sensitive to the context you provided, and its predictions are arranged such that when you decode them into language they present evidence or arguments used in thinking or argumentation. It also forms conclusions, draws inferences and produces results to problems, if you allow me to recycle a dictionary definition of “reasoning”. It’s not perfect, obviously you can’t cram a huge amount into a 16b distillation, and it certainly can get things wrong, but you have to squint to not see reasoning when you ask it to guesstimate something or solve a mathematical problem. It is an LLM, but there’s reasoning coming out?
I use a 16b reduction of deepseek-r1 on my PC at home and it’s definitely not total bullshit. It’s 10 GB of local model that can solve mathematics and physics problems for you, or program in Python or bash. It doesn’t hallucinate (or I haven’t been able to elicit it) and it’s aware of the extents of its knowledge. It works incredibly fast on an old Ryzen 1600 with a 6600 XT. Having an open-source reasoning AI that takes 10 GB of SSD and about 13 GB of RAM is so weird that the only thing weirder is seeing smart people dismiss it as bullshit out of hand.
That’s what he meant by “we’ll use sticks on the other side”
Alba: A Wildlife Adventure
It needs fresh dick hashes for up-to-date dick recognition
Imagine pasting this LLM bullshit unabashedly, as if it’s something people should sagely nod through while recognizing how necessary it is to turn this poor man’s reddit into an NSA internal messaging forum. “Better opsec around instance owners”: did you even read that before pasting? Who are you writing that for, the instance owners’ handlers?
I’ve never been on 4chan, but I’ve heard stories of who 4chan users are and what their posts are like.
If Margaret Mead at her age smoked grass
I made the mistake of reading the books first and now I’m really bothered by how stupid all the changes are. I barely managed to get through season 2 (in my defense, the really stupid stuff comes at the end), and I’m clicking no magnets for the next one.
The story’s pivot to AI is the stupidest shit ever. One of those things that no one needed to spell out is that there are no secret tunnels or doors that haven’t been explored by idle teenagers, yet there’s a magic tunnel at the bottom, accessible by a ladder, with higher-grade whirring doors and more pleasant lighting, where magical computers know people by name?
Even in the first season, but especially after Solo and the kids in Silo 17 are found, it’s INCREDIBLY GRATING how clean and well groomed everyone is. Solo has a combed and perfectly trimmed beard and is never unshampooed. The fucking vault, where the book explicitly notes decades of shit piled in mounds, looks like some fucking Dr. Who living room. The water at the bottom is crystal clear. It all looks like a TV set from the 70s.
The Quinn code is stupid, and it would be much better if we saw less of how they’re solving it. The additional layer of eavesdropping (but it’s listening by AI, whoa) that prevents Judge Meadows from disclosing it to anyone is stupid. The character of Camille is stupid, her motivation horribly half-baked.
I’d recommend reading Howey’s trilogy, it’s good SF, and skipping the series altogether. The first two seasons certainly won’t tempt me to watch the third one.
You’re hilarious, if you weren’t shitting on people being bombed I’d never block you.
I use the 14b and it’s certainly great for my modest high school physics and Python needs (to help the kids), but for party games and such it’s a drag that its pop culture stops at mid-2023
It’s actually hilarious when you zoom in on their test images and try to see how many more clothes the emperor has with them newfangled AI clothes.
Run it using ollama in a terminal (something like ollama run model_name) and ask it a question.
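Something like this works, assuming the 14b deepseek-r1 distillation under its default ollama tag (deepseek-r1:14b):

# downloads the model on first run, then drops you into an interactive prompt
ollama run deepseek-r1:14b
# or ask one-shot from the shell
ollama run deepseek-r1:14b "a ball is thrown straight up at 12 m/s, how high does it go?"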
Jeez, just copy ollama’s directory (something like .ollama) from the user’s home dir to wherever. You can look inside and find the files there. I find the published 14b really useful; it’s ten GB that think and reason in English.
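To copy it, roughly this (assuming a default per-user Linux install; the system service and other OSes keep models elsewhere, and the backup path here is just a placeholder):

# the downloaded weights are just files under ollama's data dir
ls ~/.ollama/models
# copy the whole thing wherever you want to keep it
cp -r ~/.ollama /path/to/backup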
Two was horrible, the end boss skeleton is the stupidest shit. I liked the first, endured the second to the end, and never touched the third or Andromeda.
Now do Vučić
Summit, but it’s good enough that I haven’t tried many others.