

Barring any quirks, for Arch, RHEL, Rocky, Alma, CentOS, Debian, Fedora, Mandriva, openSUSE, Ubuntu, and Void it’s as simple as installing nvidia-open. For most other distros it’s the same, but the package name varies from repository to repository.
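For example, on Arch (the exact package name may differ in your distro’s repos):
$ sudo pacman -S nvidia-open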
I’m torn between this being fucking genius and a terrible idea all at once.
EDIT: Requires ngx_http_auth_request_module. *Caddy4lyfe.*
There are two routes: VPN and VPS.
VPN: set up WireGuard and offer services to your WireGuard network.
VPS: set up a VPS to act as a reverse proxy for your Jellyfin instance (rough sketch below).
Each has its own perks. Each has its own caveats.
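If you go the VPS route, a minimal sketch of the proxy half could look like this, assuming Caddy on the VPS, that the VPS can reach your Jellyfin box (say over a WireGuard tunnel at 10.0.0.2), and jellyfin.example.com as a placeholder domain:
$ cat /etc/caddy/Caddyfile
jellyfin.example.com {
        reverse_proxy 10.0.0.2:8096
}
Caddy handles the TLS cert for you; 8096 is Jellyfin’s default HTTP port.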
Over WiFi? Pass around physical media. Nothing ruins a LAN party like someone saturating 90% of the connection to transfer ISOs.
If it’s a dedicated file server, with its own network, then the obvious choice is Samba.
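A bare-bones share sketch, assuming Debian-ish package names, a media group, and /srv/media as the share path (tighten it up for your own setup):
$ sudo apt install samba
$ cat /etc/samba/smb.conf
[media]
   path = /srv/media
   read only = yes
   valid users = @media
$ sudo smbpasswd -a youruser
$ sudo systemctl restart smbd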
It doesn’t really matter how long your media is; what matters is the specific conditions you’re changing. Encoding takes time, and it’s outrageously stressful on a CPU. It’s still going to take a long time versus using a GPU.
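To put that in concrete terms, a rough CPU-versus-GPU comparison with ffmpeg, assuming it’s built with NVENC support and input.mkv is just a placeholder:
$ ffmpeg -i input.mkv -c:v libx264 -c:a copy cpu.mkv
$ ffmpeg -i input.mkv -c:v h264_nvenc -c:a copy gpu.mkv
Same source, same settings otherwise; the first pegs every CPU core you have, the second offloads the video encode to the GPU’s encoder.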
Debian wins
Testify, brother.
Slackware 3.1 late 1996. Great fuckin’ year that was.
Which is literally why I shit on them, and then you defended them. So which is it? You defending them or shitting on them? Because I’ve never not had an issue with DHL.
These people seem…pretty stupid tbh. Maybe they don’t understand what fail2ban is, or what it does, but you should absolutely use fail2ban. Security is objectively better with it enabled than without, for any service, not just Jellyfin.
Excellent setup. It’s the one I use as well.
I wouldn’t set up fail2ban in a container. Install it on the host system.
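A minimal host-side jail sketch for Jellyfin, assuming a Debian-style layout, the default log path, and a failregex that has to match whatever your Jellyfin version actually logs for denied logins (check your logs and the Jellyfin docs before trusting it):
$ cat /etc/fail2ban/jail.d/jellyfin.local
[jellyfin]
enabled  = true
port     = 8096
filter   = jellyfin
logpath  = /var/log/jellyfin/*.log
maxretry = 5
bantime  = 1h
$ cat /etc/fail2ban/filter.d/jellyfin.conf
[Definition]
failregex = ^.*Authentication request for .* has been denied \(IP: "<ADDR>"\)\.
$ sudo systemctl restart fail2ban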
I would like the transcoding to be done on the server side
Unless your server has access to a GPU, and uses WebGL to be able to utilize that GPU via web tech, I don’t recommend doing this at all. Gonna take a dozen hours to encode via CPU…
You got it. Seems like a few people disagree with what I said, but for the vast majority of cases what I’ve said is objectively true. I’m sure you can find an instance or two where it’s not, so take it with a grain of salt.
That’s capitalism, baby! /s
Def agree.
Brain dead comment.
It really doesn’t matter. It tells you the difference between the CDNs right on the usage page.
I’ve never had an issue with anything SteelSeries and Linux.
I guess it depends on how you got caddy to begin with. If you used xcaddy, you have to update caddy the same way (recompile via xcaddy); otherwise you’ll get the default binary, which has no misc modules, which kinda sounds like what’s happened, but who knows for sure.
If you’re feeling daring, you can try to compile caddy yourself with xcaddy; it’s super easy.
Save your Caddyfiles (ultra important), and uninstall caddy. Install xcaddy (apt install xcaddy [or go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest]). Then use xcaddy to compile caddy with the modules you need:
$ cd /tmp
$ xcaddy build --with github.com/caddy-dns/porkbun --with github.com/caddy-dns/cloudflare --with github.com/some-user/whatever-module
Caddy will build and be spit out in /tmp/caddy. Move it to /home/username/.local/bin or something, and make sure that directory is in your path. Don’t forget to chmod +x caddy.
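Something like this, assuming ~/.local/bin as the destination and a bash-style shell:
$ mv /tmp/caddy ~/.local/bin/caddy
$ chmod +x ~/.local/bin/caddy
$ echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
Open a new shell (or source ~/.bashrc) so the PATH change takes effect.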
Run caddy like normal and see if this fixes your issue. If not, you’ll likely have to try an older version of caddy (uninstall and specifically install the previous version, or if you can’t, use xcaddy with CADDY_VERSION to build a specific version with your modules), or wait until they push a fix for whatever they broke.
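For example (the version here is just a placeholder; pin whatever was last working for you):
$ CADDY_VERSION=v2.8.4 xcaddy build --with github.com/caddy-dns/porkbun --with github.com/caddy-dns/cloudflare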
This is just outrageously poor advice.