I’d like to contribute to the Lemmy community. I’ve been running my own private Linux servers for more than 25 years for things like email (years ago, before all the spam), file serving, backups, etc. It’s an old, not very powerful computer running Ubuntu Server, sitting in a corner of my house. Is it worth running a Lemmy instance on such a machine? I suppose there’d also be issues of how much data is going in and out, and how that would impact my internet cable usage. Thoughts?
If you want to use it just for yourself (thus removing the load of your own usage from the other public instances), sure.
If you intend to open it up for many users, you need to consider whether you’re capable of managing that kind of load. Lemmy is relatively lightweight though.
On my post here, you can find some information in the comments from other people running an instance.
Excellent post, thanks. It looks like one of the bigger issues is with images, since they’re significantly larger than text. I wonder if it’s possible to disable image uploads, and just require links.
Wouldn’t the federation syncing process use much more resources across the network than 1 user browsing sometimes?
Maybe I’m wrong, but you would be fetching the content from all the other instances. The other instances would only need to fetch your content if someone searches for the communities you created. So, if you mostly post and comment on existing communities, the other instances wouldn’t have to do any extra work.
Lemmy is very lightweight, partly due to being written in Rust. I think you won’t have as many problems with the CPU as with the networking and bandwidth. It’s hard to host a reliable site from your home; it usually requires dynamic DNS, port forwarding, and very strong firewalling. The bandwidth usage is probably minimal if you decide to host images on a separate service like imgur, but it could still be significant.
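If you do try hosting from home, a quick sanity check on the dynamic-DNS/port-forwarding side is just probing whether the ports answer from outside. A minimal sketch (the ports 80/443 and the localhost target are only placeholders for illustration — you’d normally point this at your public IP or DNS name from outside your network):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

# localhost is only a stand-in here; use your public address in practice.
for p in (80, 443):
    print(f"127.0.0.1:{p} {'open' if port_open('127.0.0.1', p) else 'closed'}")
```

Running it from a machine outside your LAN tells you whether your router is actually forwarding the ports your reverse proxy needs.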
Maybe I’d limit the number of people to around five to see what the usage was like. It’s sounding like a fun weekend project now.
The problem with scale is going to be the database rather than the language / framework it’s written in.
I’m running my instance on 2 cores, 2GB of RAM. Of course I’m the only one on there at the moment, but it’s running great, and I think it might even be fine with a single core.
As others have said, if you are planning to use it as your own “user’s home instance,” that should be fine. I’ve read that a few people are running their instances on Raspberry Pis, which is pretty neat. While I have one I could use, I opted to set up a new droplet on DigitalOcean instead (I also run my own servers, like you). A 2-core / 2GB RAM / 50GB SSD droplet on DigitalOcean is about $18 (USD) a month, while a single-core droplet is about $12 (USD) per month.
If you plan to run an instance for others to use, be aware the federation is going to be chatty on your home network, and could impact other devices on your network. Probably not ideal, which is why I opted for a droplet in DigitalOcean instead.
It did cross my mind to have one of my Raspberry Pis run it. Actually, if it’s possible, I’d do it on an Odroid N2+. Hmm…
Any recommended guides? I consider myself pretty tech-savvy as a software engineer, but I’d really like some sort of Docker image to just spin up on my Unraid server. I’m pretty lazy about playing the whole sysadmin role…
How much headroom do you have left on that? I’m considering starting up a public instance and would love to get an estimate for per-user workload on a federated instance.
With just me on the system, CPU is barely ever over 2-3%. Load average looks good, and memory usage looks fine. You know what? Let me post some graphs for the past 24 hours, during which I’ve pretty much been on here nonstop. Again, I’m the only user on my instance, and this is all running in Docker containers.
I’ve mentioned this in a few other threads, but I’m tempted to fire up jmeter and push some load through my instance just to see how it behaves if I slam the system via the API. I just don’t feel like learning the internal API endpoints and all that right at this time though.
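Even before touching jmeter, a dumb sequential probe gives a first feel for how the instance behaves under repeated hits. A sketch — `lemmy.example` is a placeholder, and I’m assuming unauthenticated `GET /api/v3/site` (the site-metadata endpoint) as a cheap target:

```python
import time
import urllib.request

def time_get(url: str, n: int = 5) -> list[float]:
    """Issue n sequential GETs and return each request's latency in seconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()  # drain the body so the timing covers the full response
        latencies.append(time.perf_counter() - start)
    return latencies

# Against a real instance it would look something like (URL is a placeholder):
# lats = time_get("https://lemmy.example/api/v3/site", n=20)
# print(f"avg={sum(lats)/len(lats):.3f}s  max={max(lats):.3f}s")
```

Sequential requests won’t show concurrency problems the way jmeter would, but they’re enough to spot a slow database query or an undersized box.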
Super cool, thanks
Awesome, this is super helpful! I’d be using a very similar setup. It might be best to start small, invite a couple people on, and see how that memory scales. I’ll be avoiding any auto-scaling unless it becomes a much bigger project.
Well, ideally each service would have its own dedicated resources to begin with. But given that all of the Lemmy services plus Postgres are running on 2 cores with 2GB of RAM, that’s pretty impressive.
Anyway, autoscaling doesn’t necessarily solve scaling issues without a lot of thought and planning. It’s not always as simple as throwing more hardware at the problem, as I’m sure you already know.
I think if you want to bring value, start contributing in the tech support communities too. There are lots of technical questions, especially about email and about hosting Lemmy in setups more complicated than giving it its own host.
Your skills would be very valuable there. :)
I’m running my own instance for myself and those interested in AI research. IMO, if you’re community building, a small instance is the way to make sure your day-to-day activity doesn’t spam the network unnecessarily. With Ansible it didn’t take long to fire up.
just a handful of users
I’ve understood Lemmy is pretty light on the system, at least until the number of users starts to ramp up - from that end, you’re good to go! I personally went with a virtual server on Hetzner because I didn’t want to let this kind of data into my private network.