  • 0 Posts
  • 29 Comments
Joined 2 years ago
Cake day: July 27th, 2023

  • LittleBobbyTables to Selfhosted@lemmy.world · What is Docker? · 8 points · 14 days ago

    I’m not sure how familiar you are with computers in general, but I think the best way to explain Docker is to explain the problem it’s looking to solve. I’ll try and keep it simple.

    Imagine you have a computer program. It could be any program; the details aren’t important. What is important, though, is that the program runs perfectly fine on your computer, but constantly errors or crashes on your friend’s computer.

    Reproducibility is really important in computing, especially if you’re the one actually programming the software. You have to be certain that your software is stable enough for other people to run without issues.

    Docker massively simplifies this dilemma by running the program inside a ‘container’, which is basically a way to run the exact same program, with the exact same operating system and ‘system components’ installed (if you’re more tech savvy, this would be packages, libraries, dependencies, etc.), so that your program will run on as many different computers as possible (best-case scenario). You wouldn’t have to worry about whether your friend forgot to install some specific system component to get the program running, because Docker handles it for you. There is nuance here of course, like CPU architecture, but for the most part, Docker solves this ‘reproducibility’ problem.
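
    As a rough illustration (the image name here is just an example, not something from the post): pulling the same image gives you the same environment on any machine that has Docker installed.

        # the python:3.12-slim image bundles the OS layer, the interpreter and its libraries,
        # so this prints the same thing on any computer that can run Docker
        docker pull python:3.12-slim
        docker run --rm python:3.12-slim python -c 'print("same environment everywhere")'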

    Docker is also nice when it comes to simply compiling the software in addition to running it. You might have a program that requires 30 different steps to compile, and messing up even one step means that the program won’t compile. And then you’d run into the same exact problem where it compiles on your machine, but not your friend’s. Docker can also help solve this problem. Not only can it dumb down a 30-step process into 1 or 2 commands for your friend to run, but it makes compiling the code much less prone to failure. This is usually what the Dockerfile accomplishes, if you ever happen to see those out in the wild in all sorts of software.
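
    For instance (with a made-up image name), those 30 manual steps get written into a Dockerfile once, and after that your friend only needs a couple of commands:

        # builds the image by following the Dockerfile's steps, then runs the result
        docker build -t myapp .
        docker run --rm myapp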

    Also, since Docker puts things in ‘containers’, it limits what resources the program can access on your machine, which can be very useful. You can set it so that all the files it creates are saved inside the container and don’t affect your ‘host’ computer. Or maybe you only want to give it permission to a few very specific files. Maybe you want to do something like share your computer’s timezone with a Docker container, or prevent your Docker containers from being directly exposed to the internet.
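
    A sketch of what that looks like in practice (‘myapp’ is again a made-up image; the flags themselves are standard docker run options):

        docker run --rm \
          --volume myapp-data:/data \
          --volume /etc/localtime:/etc/localtime:ro \
          --publish 127.0.0.1:8080:80 \
          myapp
        # myapp-data:     anything the app writes to /data lands in a Docker-managed volume
        # /etc/localtime: shares the host's timezone with the container, read-only
        # 127.0.0.1:8080: the container's web port is only reachable from this machine, not the internet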

    There are plenty of other things that make Docker useful, but I’d say those are the most important ones: reproducibility, ease of setup, containerization, and configurable permissions.

    One last thing: Docker is comparable to something like a virtual machine, but the reason you’d pick Docker over a virtual machine is the much lower resource overhead. A VM might require you to allocate gigabytes of memory, multiple CPU cores, even a GPU, but Docker is designed to be much more lightweight in comparison.
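
    To illustrate the difference (made-up image name, arbitrary values): a container doesn’t need anything reserved up front, and limits are just optional flags.

        # limits are opt-in; without these flags the container simply shares the host's resources
        docker run --rm --memory=512m --cpus=1 myapp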


  • You say you’ve already read Librewolf’s FAQ, so I can skip over what they’ve provided in their response.

    The only possible downside I could see would be that your encrypted data is stored on Mozilla’s servers, which isn’t a very major downside since it’s properly end-to-end encrypted. This is mentioned both by Mozilla themselves and in the Librewolf docs. It’s the only downside I can see right now, but for the paranoid, it might be worth looking toward the future; who knows, maybe some day Mozilla will randomly decide to disable E2EE for Firefox Sync. That could be a potential downside down the road, but I find it pretty unrealistic… I honestly can’t see a lot of ways for Mozilla to screw this up.

    If the prospect of relying on Mozilla servers still makes you uncomfortable, then you can self-host a sync server, but it’s not exactly a quick setup. They do provide a Docker method of installation, at least. The sync server code is found here, along with installation instructions for self-hosting and how to connect it to Firefox/Librewolf/other derivatives: https://github.com/mozilla-services/syncstorage-rs
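
    Very roughly, the self-hosted route looks like this; the repo’s README is the authoritative source for the actual build, run, and configuration steps, so treat this as a sketch:

        git clone https://github.com/mozilla-services/syncstorage-rs
        cd syncstorage-rs
        # build and run it via the Docker setup described in the repo, then point the browser
        # at your server (the identity.sync.tokenserver.uri pref in about:config, if I remember correctly)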


  • “The downside is that Waterfox is based on Firefox ESR (Extended Support Release) builds, rather than the main Firefox branch.

    ESR builds are actually less secure than regular Firefox because they receive security updates more slowly.”

    How accurate is this, exactly? I was under the impression that Firefox ESR is akin to something like the LTS Linux kernel. That is to say, sure, it doesn’t receive fancy new features as soon as they release, but surely it still receives important security updates in a timely manner.



    • ALWAYS avoid partial upgrades, lest you end up bricking your system: https://wiki.archlinux.org/title/System_maintenance#Partial_upgrades_are_unsupported
    • The Arch Wiki is your best friend. You can also use it offline; take a look at wikiman: https://github.com/filiparag/wikiman
    • It doesn’t hurt to have the LTS kernel installed as a backup option (assuming you use the standard kernel as your chosen default) in case you update to a newer kernel version and a driver here or there breaks. It’s happened to me on Arch a few times: one update completely borked my internet connection, and another would freeze any game I played via WINE/Proton because I didn’t have resize BAR enabled in the BIOS. Sometimes switching to the LTS kernel can get around these temporary hiccups, at least until the maintainers fix those issues in the next kernel version.
    • The AUR is not vetted as much as the main package repositories, as it’s mostly community-made packages. Don’t install AUR packages you don’t 100% trust, and always check the PKGBUILD if you’re paranoid. (The commands for these tips are sketched below.)
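
    A rough sketch of the commands behind those tips (‘some-package’ is just a placeholder; normally you’d let yay or paru handle the AUR part for you):

        # full system upgrade -- never run 'pacman -Sy <pkg>' on its own, that's a partial upgrade
        sudo pacman -Syu

        # install the LTS kernel alongside the default one as a fallback boot option
        sudo pacman -S linux-lts linux-lts-headers

        # manual AUR install: read the PKGBUILD before building
        git clone https://aur.archlinux.org/some-package.git
        cd some-package && less PKGBUILD && makepkg -si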

  • This is excellent news. This is one of the biggest features that I’ve wanted out of Firefox for years, and one of the reasons I’ve kept Chromium as a secondary browser all this time.

    I do remember seeing a community-made GitHub project that added a profile switcher to Firefox, which looked pretty good, but it also required installing an executable somewhere on the system, which I’m not exactly keen on.

    I think Zen Browser has a built-in profile switcher, but it also changes a bunch of core UI elements… I just want Firefox with a profile switcher, lol.


  • Regarding email aliasing services, addy.io is the only one I know of other than SimpleLogin, which is owned by Proton AG; so if you want to get away from Proton, SimpleLogin isn’t an option. Both of these services are recommended on privacyguides.org.

    Some email services allow you to use a domain you own, which theoretically gives you unlimited aliases to work with, but this may not be as privacy-friendly, since each address is only as anonymous as your registered domain.

    Personally, I prefer the ‘pseudonymous’ aliases that addy.io and Proton Pass give (it’s usually something like random.words123@passmail.net in the case of Proton).

    If anyone has good experiences with other aliasing services that provide this option, please let us know.


  • Yep, been self-hosting it locally for a while now. To put it simply, I archive anything within my personal realm of interest that I believe has a chance of being deleted and is important to keep a copy of. It could be troubleshooting tips for specific tech issues, things that may be under threat of takedown, or maybe just an article I like and want a local copy of. It’s a wonderful tool.


  • I get 8.44 bits (1 in 347.34 browsers). I use Firefox with Arkenfox user.js applied on top, with some of my own custom overrides.

    However, I think the biggest factor is that I have uBlock Origin set to medium-hard mode (blocking 1st-party scripts, 3rd-party scripts and 3rd-party iframes by default on all websites), so the lack of JavaScript heavily limits what non-whitelisted websites can track. I did whitelist 1st-party scripts on the main domain for this test (coveryourtracks.eff.org), but all the ‘tracker’ site redirects stayed off the whitelist.

    I actually had to temporarily allow the tracker sites in uBlock Origin for the test to properly finish; otherwise it gave me a big warning that I was about to visit a domain on the filter list.


  • LittleBobbyTables to Technology@lemmy.world · *Permanently Deleted* · 14 points · 6 months ago

    A friendly reminder to everyone to check out ArchiveBox if you’re looking for a self-hosted archiving solution. I’ve been using it for a while now and it works great; it can be a little rough around the edges at times, but I think it’s a wonderful tool. It’s allowed me to continue saving pages during the Internet Archive’s outage.
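
    For anyone curious, getting started looks roughly like this (going from memory, so check the project’s README for the current quickstart):

        pip install archivebox
        mkdir ~/archive && cd ~/archive
        archivebox init                          # sets up the collection folder
        archivebox add 'https://example.com'     # snapshot a page
        archivebox server 0.0.0.0:8000           # browse everything through a web UI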


  • “At this time, we feel our case for a defamation suit would be very strong; however, our deepest wish is to simply put all of this behind us.”

    The passive-aggressive bragging… this comes off as unprofessional to me, like “we could sue the pants off this person if we wanted to”. Why does the public even need to hear this part? It sounds like something that should be communicated privately to the alleged defamer, not broadcast to the public. It’s a little odd in my opinion…

    Regardless, I am interested in seeing the full report and I’ll keep a close eye on this.


  • I would try what the other commenter here said first. If that doesn’t fix your issue, try the Forge version of the WebUI (a fork of that WebUI with various memory optimizations, native extensions and other features): https://github.com/lllyasviel/stable-diffusion-webui-forge. This is what I personally use.

    I use a 6000-series GPU instead of a 7000-series one, so the setup may be slightly different for you, but I’ll walk you through what I did for my Arch setup.

    Personally, I skipped that Wiki section on AMD GPUs entirely, and the WebUI still seems to respect and utilize my GPU just fine. Simply running the webui.sh file will do most of the heavy lifting for you (you can see in webui.sh that it uses specific configurations and ROCm versions for different AMD GPU series like Navi 2 and 3). I’ll also collect all the commands into one block after the steps below.

    1. Clone the repo: git clone https://github.com/lllyasviel/stable-diffusion-webui-forge stable-diffusion-webui (the stable-diffusion-webui directory name is important; the webui.sh script seems to reference that directory name specifically)
    2. From my experience, webui.sh and webui-user.sh seem to be in the wrong spot, so make links to them at the same level as the stable-diffusion-webui directory you created: ln stable-diffusion-webui/webui.sh webui.sh (ditto for webui-user.sh)
    3. Edit the webui-user.sh file. You don’t really have to change much in here, but I would recommend export COMMANDLINE_ARGS="--theme dark" if you want to save your eyes from burning.
    4. Here’s where things get a bit tricky: you will have to install Python 3.10, as there are warnings that newer versions of Python will not work. I tried running the script with Python 3.12 and it failed trying to grab specific pip dependencies. I use the AUR for this: yay -S python310, paru -S python310, or whatever method you use to install packages from the AUR. Once you do that, edit webui-user.sh so that python_cmd looks like this: python_cmd="python3.10"
    5. Run the webui.sh file: chmod u+x webui.sh, then ./webui.sh
    6. Setup will take a while, since it has to download and install all dependencies (including a model checkpoint, which is multiple gigabytes in size). If it errors out at some point, try deleting the entire venv directory from within the stable-diffusion-webui directory and running the script again. That actually worked in my case; I’m not really sure what went wrong…
    7. After a while, the WebUI will launch. If it doesn’t automatically open your browser, check the console for the URL; it’s usually http://127.0.0.1:7860. Select the proper checkpoint in the top left, type in a test prompt, and hopefully it should be pretty speedy, considering your GPU.
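
    Putting it all together (the same commands from the steps above, just in one place; yay is simply my AUR helper of choice):

        git clone https://github.com/lllyasviel/stable-diffusion-webui-forge stable-diffusion-webui
        ln stable-diffusion-webui/webui.sh webui.sh
        ln stable-diffusion-webui/webui-user.sh webui-user.sh
        yay -S python310
        # then edit webui-user.sh:
        #   export COMMANDLINE_ARGS="--theme dark"
        #   python_cmd="python3.10"
        chmod u+x webui.sh
        ./webui.sh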