Just the scale of hardware needed to run LLMs is really prohibitive for FOSS. Even something like DeepSeek is not really in the realm of realistic self-hosting (rough numbers below). We need some hardware advancement, or for the bubble to pop and hardware prices to come down, before that changes.
I do agree with the general premise. Total AI rejection is luddite bullshit.
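For a sense of scale, here's a back-of-envelope sketch of why full-size DeepSeek is out of reach for self-hosters. The ~671B parameter count is the published figure for DeepSeek-V3/R1; the bytes-per-parameter values are just assumed quantization levels:

```python
# Rough memory needed just to hold model weights in RAM/VRAM.
# Ignores KV cache, activations, and runtime overhead, so real
# requirements are higher.
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 = GB

for bytes_per_param in (2.0, 1.0, 0.5):  # fp16, int8, 4-bit
    gb = weight_memory_gb(671, bytes_per_param)
    print(f"{bytes_per_param} bytes/param -> ~{gb:.0f} GB")

# ~1342 GB at fp16, ~671 GB at int8, ~336 GB even at 4-bit --
# versus the 8-24 GB of VRAM on a typical consumer GPU.
```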
Smaller models have been getting a lot more capable of late. A 32-billion-parameter model can do quite a bit today, and there's still a lot left to optimize going forward. If you look at the capability improvements, it's absolutely stunning: something you can run on a laptop today would've needed a whole data centre just a couple of years ago.
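For the curious, this is roughly what self-hosting one of those ~30B models looks like today. A minimal sketch assuming llama-cpp-python and a 4-bit quantized GGUF file (which fits in roughly 20 GB of memory); the filename here is a made-up placeholder, substitute whatever model you've downloaded:

```python
from llama_cpp import Llama

# Load a quantized model from disk; path/filename is hypothetical.
llm = Llama(
    model_path="./models/some-32b-model.Q4_K_M.gguf",
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm("Explain what a quantized model is, in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```

The same GGUF file also works with plain llama.cpp or frontends like Ollama, so there's nothing Python-specific about the setup.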


