That’s the second time in three days that I’ve seen an article where “AI” (machine learning) was actually useful. It’s a hype machine and it’s overvalued, but it’s nice to see it being useful. I still can’t wait for OpenAI to fail. I run the Llama model locally because to hell with giving corps more of my data. Anyway…
Out of curiosity, what’s your use case and spec of the machine running it?
I only eff around with it occasionally. I run it on a MacBook Pro M1 Max. It’s solid for performance. I don’t have a job where I can employ it regularly, so after initial testing, I barely use it.
Fair. I’m kinda wondering about having a general local household AI; I’ve got no good reason for it other than general tinkering. I’m somewhat waiting for the crossover between decent AI and affordable hardware to occur.
I’ve been running Gemma 3 4B locally on Ollama and it’s useful. I’m thinking about applications where a multimodal model could receive video or sensor feeds (like a security cam, say).
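For anyone curious what that would look like in practice: here’s a minimal sketch of feeding a single camera frame to a local multimodal model through Ollama’s `/api/chat` endpoint. The payload shape (base64-encoded strings in an `images` list on the user message) follows Ollama’s documented REST API; the model name, function name, and prompt are just illustrative, and this only builds the request rather than polling a real camera.

```python
import base64
import json

# Default Ollama endpoint; adjust if your server runs elsewhere.
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def build_vision_request(frame_bytes: bytes, prompt: str,
                         model: str = "gemma3:4b") -> dict:
    """Build an /api/chat payload attaching one camera frame as a base64 image."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                # Ollama expects raw base64 strings (no data: URI prefix).
                "images": [base64.b64encode(frame_bytes).decode("ascii")],
            }
        ],
    }

if __name__ == "__main__":
    # Stand-in bytes for a JPEG frame grabbed from a security cam.
    payload = build_vision_request(b"\xff\xd8fake-jpeg", "Is anyone at the door?")
    print(json.dumps(payload)[:80])
```

From there you’d POST the payload to `OLLAMA_CHAT_URL` (e.g. with `urllib.request` or `requests`) in a loop over frames; a 4B model on modest hardware keeps the latency per frame tolerable for “alert me if something changes” use rather than true real-time video.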