I have an early-2000s PC (pre-SATA) with 512MB RAM (I’d love to tell you about the CPU, but it’s under a cooler that isn’t going anywhere) that’s been sitting in closets for about 15 years. Assuming I’m willing to buy into it, can something like that reasonably host the following simultaneously on a 40GB boot drive:

Nextcloud, Actual, Photoprism, KitchenOwl, SearXNG, Kavita, Paperless-ngx

Or should I just get new hardware? Regardless, I’d like to do something with this trusty ol’ business server.

Edit: In your opinion, which makes the more cost-effective, reliable self-hosting server, Lenovo or Dell?

  • @Shdwdrgn@mander.xyz
    -1 · 1 year ago

    It’s a great machine to learn on. Build yourself a web server or something like that. You don’t know what it can do until you push it, and you’re not out anything by taking it to its limits. If it has something like a Core 2 Duo you could even run KVM and launch a virtual machine to learn about that process. Old hardware is meant to be run into the ground and you’ll learn a lot in the process, including getting a feel for how much hardware you really need to perform the tasks you want. (I literally just retired a rack server this year with a Core 2 Duo and 8GB of memory, which was used to run five VM servers providing internet services.)
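
    If you want to poke at that VM idea programmatically instead of through virt-manager, here’s a minimal sketch using libvirt’s Python bindings (pip install libvirt-python). The VM name, disk image path, and 256MB sizing are placeholders I made up, not anything specific to the machine above:

    ```python
    # Minimal sketch: define and boot a small KVM guest through libvirt.
    # Assumes KVM is working and a qcow2 disk image already exists at the
    # (hypothetical) path below.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>testvm</name>
      <memory unit='MiB'>256</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/testvm.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='default'/>
        </interface>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
    dom = conn.defineXML(DOMAIN_XML)       # register the guest (persistent)
    dom.create()                           # boot it
    print(dom.name(), "running:", dom.isActive() == 1)
    conn.close()
    ```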

    • @LazerDickMcCheese@sh.itjust.works (OP)
      0 · 1 year ago

      Can you give me some use case examples for VMs like that? My VM knowledge stops at emulating OSes for software compatibility and running old Windows versions for gaming.

      • @Shdwdrgn@mander.xyz
        2 · 1 year ago

        What I’ve always done is create a VM for each service I run – one each for DNS, Apache, Postfix, Dovecot, and even one to handle SSH and FTP logins. I’ll also set up a VM when I want to test a new service, so I don’t trash a physical machine. This makes it easy to spin up extra copies if I want to run redundant systems, or to move them to a different physical server. I suppose it’s something like what Docker does, except these are entirely self-contained systems that don’t even need to run the same OS, and if someone happens to hack into one of them, it doesn’t give them access to all the others. I also have a physical machine set up as a firewall to direct the appropriate ports to each VM and handle load balancing, but for your experiments you could do that task on the physical desktop and point everything at the VMs running inside it.
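
        As a rough picture of that layout, here’s a short Python sketch that prints the kind of iptables DNAT rules the firewall box would use to steer each public port to its service VM. Every name, address, and port here is hypothetical:

        ```python
        # Sketch of a "one VM per service" port map: print the iptables DNAT
        # rules a firewall host would apply. Addresses and ports are made up,
        # and real DNS would need a matching UDP rule as well.
        SERVICES = {
            "dns":     ("192.168.122.10", 53),
            "apache":  ("192.168.122.11", 80),
            "postfix": ("192.168.122.12", 25),
            "dovecot": ("192.168.122.13", 143),
            "ssh":     ("192.168.122.14", 22),
        }

        for name, (vm_ip, port) in SERVICES.items():
            # Forward traffic arriving at the firewall to the VM that owns
            # this service.
            print(f"# {name}")
            print(f"iptables -t nat -A PREROUTING -p tcp --dport {port} "
                  f"-j DNAT --to-destination {vm_ip}:{port}")
        ```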

        One nice thing about KVM is that you can overcommit memory. Say you only have 512MB available and you set up three VMs with 256MB each: the actual free space is shared among them, because a system doesn’t usually use ALL of its memory (although on Linux you might need to limit how much cache RAM each system tries to use). In practice, a VM that runs a heavy task or gets a burst of traffic will pull free physical memory from the other VMs as needed, then give it back when the task is done. You won’t really want to run web-facing servers in such a tight space, though, unless you’re the only person actually using them, but hopefully this gives you some ideas for playing around with what you have in that machine.
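
        If you want to actually watch memory move around under overcommit, libvirt’s Python bindings expose per-guest balloon stats. A quick sketch (assumes libvirt-python; which keys show up, like 'actual' and 'rss', depends on the guest’s balloon driver):

        ```python
        # Dump memory stats for every running guest; values are in KiB.
        # 'actual' is what the balloon currently grants the guest, 'rss' is
        # what the host is really spending on it.
        import libvirt

        conn = libvirt.open("qemu:///system")
        for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
            stats = dom.memoryStats()
            print(f"{dom.name():12} actual={stats.get('actual', '?')} KiB "
                  f"rss={stats.get('rss', '?')} KiB")
        conn.close()
        ```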