Title says it all. Just wondering if I did something bad/terrible with it. The link is in the title; check the image tag at its repo to see how it was built. And before someone asks… the Docker Lemmy community is really dead, so I had to resort to you guys. Sorry, I guess.
And thanks in advance.
From a comment of yours:
Eh…just trying to learn some new things regarding common “dockerization”-related things, and improving its security.
If the end goal is not learning but having as secure a container as possible, then consider Wolfi; this is a good read. If you’re interested in knowing its current vulnerabilities, so that you can work on resolving them, then consider Trivy, as it is (to my knowledge) the industry standard for this specific use case.
If the end goal is not learning but having as secure a container as possible
It’s actually both – there is always something new to learn, after all. And thanks for these tips, I’ll read em right now.
Am I understanding correctly that you are building the image by copying in key elements from the host machine’s functioning nginx installation?
This is a creative but uncommon approach to Docker.
Normally software is installed following the officially documented procedure (imagine installing using apt or a shell script via RUN). Sometimes software documentation has specific recommendations to follow for containerized installs.
It’s common to have the version defined as a variable, where a change in value invalidates the Docker layer cache. To me it’s unclear how caching would work with your Dockerfile in the event of an upgrade, for example. You could also see how a breaking change (such as one in the paths you are copying) could run into issues with your hardcoded approach.
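A minimal sketch of what I mean, not your exact setup (the package version is just a placeholder; check apt-cache madison nginx inside the base image for the versions that actually exist):

FROM debian:bookworm-slim

# Bumping this value invalidates the cache for the RUN layer below,
# so an upgrade rebuilds the install step instead of reusing a stale layer.
ARG NGINX_VERSION=1.22.1-9

RUN apt-get update \
 && apt-get install -y --no-install-recommends nginx=${NGINX_VERSION} \
 && rm -rf /var/lib/apt/lists/*

CMD ["nginx", "-g", "daemon off;"]

A newer version then only needs a docker build --build-arg NGINX_VERSION=<new version>, without touching the rest of the file.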
In the case of software like nginx, I would use the official image, mount config/cert files instead of copying, and extend in my own dockerfile if needed.
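For nginx specifically, something in this direction is usually enough (the file names here are just an example):

FROM nginx:mainline-alpine

# Bake in only the config that rarely changes; certs and anything
# environment-specific are better bind-mounted at run time.
COPY nginx.conf /etc/nginx/nginx.conf
COPY conf.d/ /etc/nginx/conf.d/

Then mount the certificates when you start the container (a -v bind mount or a compose volume), so rotating them doesn’t force a rebuild.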
copying in key elements from the host machine
Not from the host machine, but from the official nginx image (nginx:mainline-alpine3.18-slim). What it (basically) does is separate the essential commands/files into a scratch image and give every command a custom username tag.
Still, I appreciate your input.
A bit late, but you might want to have a look at the Docker multi-stage build documentation, which describes exactly what you did (start from a base image, then copy stuff from it into your own image), something like this:
FROM someimage:sometag AS build
[do stuff]

FROM minimalimage:someothertag
COPY --from=build /some/file /some/other/file
[and so on]
USER somebody
CMD ["/path/somecommand"]
This will make it easier to rebuild your image against newer tags of the “build” image.
Btw, you were quite creative on this one! You also might want to have a look at distroless images, the goal being to have only the bare minimum needed to run your application in the image: your executable and its runtime dependencies.
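For a statically linked binary it boils down to something like this; the Go toolchain and paths are just an example, adapt it to whatever you are actually building:

# Build stage: compile a static binary with whatever toolchain applies
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: the distroless "static" base image; it ships little more than
# CA certificates, tzdata and a non-root user, with no shell or package manager
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
ENTRYPOINT ["/app"]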
Now you’ve confused me a little bit – is there any difference between a scratch and a distroless image? Aren’t they (technically) the same thing?
That aside, thank you for your input and compliment.
You’re welcome! scratch and distroless are indeed very close: scratch is the ‘official’, completely empty Docker base image, while distroless comes from Google and adds a few essentials on top (CA certificates, tzdata, a non-root user) but still no shell or package manager. As I’m more a Kubernetes user (at home and at work) than a Docker user, I tend to think of distroless first :) My apologies if my comment was a bit confusing on this matter.
By the way, have fun experimenting with docker (or podman), it’s interesting, widely used both in selfhosting and professional environments, and it’s a great learning experience - and a good way to pass time during these long winter evenings :)
Oh, I see. Thanks for clarifying. And I’ve got to admit that “dockerizing” everything is a fun process indeed. :P
Eh…just trying to learn some new things regarding common “dockerization”-related things, and improving its security.
I’d never run an image full of random binaries like this. If the container doesn’t have a link to a Containerfile that I can build myself, then I don’t use it.