A Safer Container Ecosystem with Docker: Free Docker Hardened Images
Posted by anttiharju 16 hours ago
Comments
Comment by ShakataGaNai 3 hours ago
Where? Let's take a random example: https://hub.docker.com/hardened-images/catalog/dhi/traefik
Ok, where is the source? Open source means I can build it myself, maybe because I'm working in an offline/airgapped/high compliance environment.
I found a "catalogue" https://github.com/docker-hardened-images/catalog/blob/main/... but this isn't a build file, it's some... specialized DHI tool to build? Nothing at https://github.com/docker-hardened-images shows me docs for building it myself, or any sort of "dhi" tool.
Comment by TheDong 14 minutes ago
Meanwhile, nix has already packaged more software than any other distro, and the vast majority of its software can be put into a container image with no additional dependencies (i.e. "hardened" in the same way as these are) with exactly zero extra work specific to each package.
The nixpkgs repository already contains the instructions to build and isolate outputs, there's already a massive cache infrastructure set up, and builds are largely reproducible. Docker will have to build all of that for their own tool to reach parity... and without a community behind it like nix has.
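As a sketch of what that zero-extra-work packaging looks like, nixpkgs ships a dockerTools helper that turns any package into a minimal image. A hypothetical example (the image name and tag are my own placeholders):

```nix
# sketch: build a minimal image containing only traefik and its
# runtime closure -- no shell, no package manager in the image
{ pkgs ? import <nixpkgs> { } }:

pkgs.dockerTools.buildLayeredImage {
  name = "traefik-minimal";   # hypothetical image name
  tag = "latest";
  contents = [ pkgs.traefik ];
  config.Entrypoint = [ "${pkgs.traefik}/bin/traefik" ];
}
```

Build it with `nix-build` and load the result with `docker load < result`; the same expression works for essentially any nixpkgs package.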
Comment by SomaticPirate 14 hours ago
Chainguard came to this first (arguably by accident, since they had several other offerings before they realized that people would pay (?!!) for an image that reported zero CVEs).
In a previous role, I found that the value of this for startups is immense. Large enterprise deals can quickly be killed by a security team that replies with "scanner says no". Chainguard offered images that report 0 CVEs, which basically removed this barrier.
For example, a common CVE I encountered was a glibc High CVE. We could pretty convincingly show that our app did not use this library in a way that made us vulnerable, but it didn't matter. A high CVE is a full stop for most security teams. We migrated to a Wolfi image and the scanner reported 0. Cool.
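The migration itself is often just a base-image swap; a hypothetical sketch (the image names and file paths here are placeholders, not from the original post):

```dockerfile
# before: Debian-based image whose glibc CVE trips the scanner
# FROM python:3.12-slim

# after: Wolfi-based distroless image from Chainguard
FROM cgr.dev/chainguard/python:latest
COPY app.py /app/app.py
ENTRYPOINT ["python", "/app/app.py"]
```

The distroless image has no shell or package manager, which is both why the scanner goes quiet and why anything that relied on apt or shell scripts in the image needs rework.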
But with other orgs like Minimus (from the founders of Twistlock) coming into this space, it looks like it's about to get crowded.
There is even a govt project called Iron Bank that offers something like this to the DoD.
Net positive for the ecosystem but I don't know if there is enough meat on the bone to support this many vendors.
Comment by fossa1 13 hours ago
Comment by xyzzy123 2 hours ago
Comment by johnnypangs 1 hour ago
https://docs.docker.com/dhi/features/#dhi-enterprise-subscri...
Comment by ExoticPearTree 13 hours ago
Paying for something “secure” comes with the benefit of risk mitigation - we paid X to give us a secure version of Y, hence it's not our fault “bad thing” happened.
Comment by MrDarcy 12 hours ago
Comment by staticassertion 6 hours ago
Comment by raesene9 14 hours ago
The question I'd be interested in is, outside of markets where there's a lot of compliance requirements, how much demand is there for this as a paid service...
People like lower-CVE images, but are they willing to pay for them? I guess that's an advantage for Docker's offering: if it's free, there is less friction to trying it out compared to a commercial offering.
Comment by staticassertion 6 hours ago
Comment by idiotsecant 12 hours ago
Comment by raesene9 10 hours ago
Comment by bigstrat2003 11 hours ago
Note that you don't have to be DoD to use Iron Bank images. They are available to other organizations too, though you do have to sign up for an account.
Comment by firesteelrain 6 hours ago
Some images like Vault are pretty bare (e.g. no shell).
Comment by nonameiguess 7 hours ago
My company makes its own competing product that is basically the same thing, and we (and I specifically) were pretty heavily involved in early Platform One. We sell it, but it's basically just a free add-on to existing software subscriptions, an additional inducement to make a purchase; it costs nothing extra on its own.
In any case, I applaud Docker. This can be a surprisingly frustrating thing to do, because you can't always just rebase onto your pre-hardened base image and still have everything work without taking some care to understand the application you're delivering, which is not your application. It was always my biggest complaint with Ironbank and why I would not recommend anyone actually use it. They break containers constantly, because hardening to them just means copying binaries out of the upstream image into a UBI container they patch daily to ensure it never has any CVEs. Sometimes this works, but sometimes it doesn't, and it's fairly predictable: every time Fedora takes a new glibc version that RHEL doesn't have yet, everything that links against it starts segfaulting when you try to copy from one to the other. I've told them this many times, but they still don't seem to get it and keep doing it. Plus, they break tags with the daily patching of the same application version, and you can't pin to a sha because Harbor only holds onto three orphaned shas that are no longer associated with a tag.
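The breakage pattern described above (copying dynamically linked binaries between distros) corresponds roughly to this kind of hypothetical Dockerfile; the image names and binary path are illustrative, not Iron Bank's actual build files:

```dockerfile
# sketch of the "copy binaries into a UBI base" hardening pattern
FROM somevendor/app:1.2.3 AS upstream   # Fedora-based, newer glibc

FROM registry.access.redhat.com/ubi9/ubi-minimal
# the binary still links against the upstream glibc; if UBI's glibc
# is older, newer symbol versions are missing, so the binary fails
# to load (or crashes) at runtime even though the scanner is happy
COPY --from=upstream /usr/bin/app /usr/bin/app
ENTRYPOINT ["/usr/bin/app"]
```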
So, the long and short of it: I don't know about meat on the bone, but there is real demand and it's growing, at least in any kind of government or otherwise regulated business, because the government itself is mandating better supply chain provenance. I don't think it entirely makes sense, frankly. The end customers don't seem to understand that, sure, we're signing the container image because we "built" it in the sense that we put together the series of tarballs described by a json file, but we're also delivering an application we didn't develop, on a base image full of upstream GNU/Linux packages we also didn't develop, and though we can assure you all of our employees are US citizens living in CONUS, we're delivering open source software. It's been contributed to by thousands of people from every continent on the planet, stretching decades into the past.
Unfortunately, a lot of customers and sales people alike don't really understand how the open source ecosystem works and expect and promise things that are fundamentally impossible. Nonetheless, we can at least deliver the value inherent in patching the non-application components of an image more frequently than whoever creates the application and puts the original image into a public repo. I don't think that's a ton of value, personally, but it's value, and I've seen it done very wrong with Ironbank, so there's value in doing it right.
I suspect it probably has to be a free add-on to some other kind of subscription in most cases, though. It's hard for me to believe it can really be a viable business on its own. I guess Chainguard is getting by somehow, but it also kind of feels like they're an investor darling getting by on the reputations of its founders based on their past work more than the current product. It's the container ecosystem equivalent of selling an enterprise Linux distro, and I guess at least Redhat, SUSE, and Canonical have all managed to do that, but not by just selling the Linux distro. They need other products plus support and professional services.
I think it's a no-brainer for anyone already selling a Linux distro to do this on top of it, though. You've already got the build infrastructure and organizational processes and systems in place.
Comment by khana 6 hours ago
Comment by tj_591 12 hours ago
A big part of this for us is transparency. That’s why every image ships with VEX statements, extensive attestations, and all the metadata you need to actually understand what you’re running. We want this to be a trustworthy foundation, not just a thinner base image.
We’re also extending this philosophy beyond base images into other content like MCP servers and related components, because the more of the stack that is verifiable and hardened by default, the better it is for the ecosystem.
A few people in the thread asked how this is sustainable. The short answer is that we do offer an enterprise tier for companies that need things like contractual continuous patching SLAs, regulated-industry variants (FIPS, etc.), and secure customizations with full provenance and attestations. Those things carry very real ongoing costs, so keeping them in Enterprise allows us to make the entire hardened catalog free for the community.
Glad to see the conversation happening here. We hope this helps teams ship software with a stronger security posture and a bit more confidence.
Comment by inChargeOfIT 12 hours ago
Comment by sschueller 12 hours ago
Comment by m463 2 hours ago
To the point that redhat created podman that can do what you want.
Comment by BSVogler 14 hours ago
With Bitnami discontinuing their offering, we recently switched to other providers. For some software we use a Helm chart, and this new offering provides some Helm charts, but for other software just the image. I would be interested in giving this a try, but e.g. the Python image only has various '(dev)' variants, while the guide mentions the non-dev images. So this requires some planning.
EDIT: Digging deeper, I notice it requires a PAT, and a PAT is bound to a personal account. I guess you need the enterprise offering for organisation support. I am not going to waste my time contacting them about an enterprise offer for a small start-up. What is the use case for CVE-hardened images that you cannot properly run in CI/CD, only on your dev machine? Are there companies that need to follow compliance rules or need this security guarantee but don't have CI/CD in place?
Comment by parasubvert 13 hours ago
The enterprise hardened images license seems to be a different offering for offline mirroring or more strict compliance…
The main reason for CVE-hardened images is that it's hard to trust individuals to do it right at scale, even with CI/CD. You're having to wire together your own scan-and-update process. In practice teams will use pinned versions, delay fixes, turn off scanning, etc. This is easy mode.
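The "wire it together yourself" version looks roughly like this, using trivy as one example scanner (the image tag is a placeholder, and the exact gating policy is up to each team):

```shell
# hypothetical CI step: build, then fail the pipeline on high/critical CVEs
docker build -t myapp:ci .
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:ci
```

Keeping a step like this green across every image, every day, is exactly the ongoing toil a hardened-image vendor is selling relief from.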
Comment by 0_gravitas 12 hours ago
1. 'generous' initial offering to establish a userbase/ecosystem/network-effect
2. "oh teehee we're actually gonna have to start charging for that sorry we know that you've potentially built a lot of your infrastructure around this thing"
3. $$$
Comment by nine_k 15 hours ago
Offering image hardening for custom images looks like a reasonable way for Docker to have a source of sustained income. Regulated industries like banks, insurers, or governmental agencies are likely to be interested.
Comment by scottydelta 15 hours ago
Bait and switch once the adoption happens has become way too common in the industry.
Comment by politelemon 14 hours ago
Comment by acdha 10 hours ago
Comment by cedws 14 hours ago
Comment by simlevesque 14 hours ago
Comment by verdverm 14 hours ago
It's what the people who created OG Docker are building now
Comment by scoodah 13 hours ago
Comment by cedws 11 hours ago
Comment by nickstinemates 3 hours ago
Comment by seemaze 13 hours ago
Comment by pploug 15 hours ago
Comment by skyline879 15 hours ago
Comment by imglorp 14 hours ago
Not a problem for casual users, but even a small team like mine - a dozen people with around a dozen public images - can hit the pull limit deploying a dozen landscapes a day. We just cache all the public images ourselves and avoid it.
https://www.docker.com/blog/revisiting-docker-hub-policies-p...
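A minimal version of that caching setup, assuming the stock registry:2 image's pull-through proxy mode (port and host names are placeholders):

```shell
# run a local pull-through cache that mirrors Docker Hub
docker run -d --name hub-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# then point each Docker daemon at it, e.g. in /etc/docker/daemon.json:
#   { "registry-mirrors": ["http://localhost:5000"] }
```

After that, repeated pulls of the same tag hit the local cache instead of counting against Hub's rate limit.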
Comment by nunez 2 hours ago
Comment by simlevesque 15 hours ago
> Is Docker sunsetting the Free Team plan?
> No. Docker communicated its intent to sunset the Docker Free Team plan on March 14, 2023, but this decision was reversed on March 24, 2023.
Comment by pploug 14 hours ago
Comment by yjftsjthsd-h 14 hours ago
There's an excellent reason: They're login gated, which is at best unnecessary friction. Took me straight from "oh, let me try it" to "nope, not gonna bother".
Comment by dudeWithAMood 14 hours ago
Comment by darkwater 14 hours ago
Comment by wolfi1 13 hours ago
Comment by divmain 12 hours ago
Comment by kevinb2222 13 hours ago
https://www.docker.com/blog/security-that-moves-fast-dockers...
Note: I work at Docker
Comment by wolfi1 13 hours ago
Comment by shepherdjerred 12 hours ago
This would be like expecting AWS to protect your EC2 instance from a postinstall script
Comment by acdha 10 hours ago
Update the analogy to “like EC2 but we handle the base OS patching and container runtime” and you have Fargate.
Comment by kevinb2222 13 hours ago
Comment by tecleandor 15 hours ago
Comment by nunez 2 hours ago
Comment by jacques_chester 13 hours ago
Comment by lrvick 7 hours ago
None of the alternatives come anywhere close to what we needed to satisfy a threat model that trusts no single maintainer or computer, so we started over from actually zero.
Comment by politelemon 14 hours ago
Comment by jiehong 14 hours ago
But, we pay for support already.
Nice from docker!
Comment by nunez 2 hours ago
Comment by jitl 15 hours ago
There's a "Make a request" button, but it links to this 404-ing GitHub URL: https://github.com/docker-hardened-images/discussion/issues
oh well. Hope it's good stuff otherwise.
Comment by pploug 15 hours ago
Comment by kamrannetic 15 hours ago
Comment by progbits 15 hours ago
Chainguard still has better CVE response time and can better guarantee you zero active exploits found by your prod scanners.
(No affiliation with either, but we use chainguard at work, and used to use bitnami too before I ripped it all out)
Comment by mmbleh 15 hours ago
Comment by dlor 14 hours ago
I work at Chainguard. We don't guarantee zero active exploits, but we do have a contractual SLA we offer around CVE scan results (those aren't quite the same thing unfortunately).
We do issue an advisory feed in a few versions that scanners integrate with. The traditional format we used (which is what most scanners supported at the time) didn't have a way to include pending information so we couldn't include it there.
The basic flow was: scanner finds CVE and alerts, we issue statement showing when and where we fixed it, the scanner understands that and doesn't show it in versions after that.
so there wasn't really a spot to put "this is present" - that was the scanner's job. Not all scanners work that way, though, and some just rely on our feed and don't do their own homework, so it's hit or miss.
We do have another feed now that uses the newer OSV format, in that feed we have all the info around when we detect it, when we patch it, etc.
All this info is available publicly and shown in our console, many of them you can see here: https://github.com/wolfi-dev/advisories
You can take this example: https://github.com/wolfi-dev/advisories/blob/main/amass.advi... and see the timestamps for when we detected CVEs, in what version, and how long it took us to patch.
Comment by digi59404 15 hours ago
Do with that knowledge what you may.
Comment by chrisweekly 13 hours ago
Comment by movedx 13 hours ago
Comment by mertleee 11 hours ago
Comment by twelvechess 15 hours ago
Comment by mrbluecoat 9 hours ago
Comment by fire2dev 14 hours ago
Comment by cgfjtynzdrfht 12 hours ago
What about a safer container ecosystem without Docker?
Podman solved rootless containers and everything else under the sun by now.
All docker is doing is playing catch-up.
But guess what? They are obsolete. It's just time until they go the way of HashiCorp's Vagrant.
Docker is only making money off enterprise whales by now, and eventually that profit will dry up, too.
If you are still relying on docker, it is time to migrate.
Comment by nickjj 9 hours ago
I did some work for a client recently where they were using Podman Desktop and the developers were using MacBooks (Mx series).
They tried to run an amd64 image on their machines. When building a certain Docker image they had, it was segfaulting with a really generic error, and it wasn't specific to any one RUN command: if you kept commenting out the failing one, the crash would just appear on the next one. The stack trace pointed into Podman Compose's code base.
Turns out it's a verified bug with Podman with an open issue on GitHub that's affecting a lot of people.
I've been using Docker for 10 years with Docker Engine, Compose, Desktop, Toolbox, etc. and never once have I seen a single segfault, not even once.
You know what's interesting? It worked perfectly with Docker Desktop. Literally install Docker Desktop, build it and run it. Zero issues and up and running in 10 minutes.
That company got to pay me for a few hours of debugging, and now they are happily paying customers of Docker Desktop, because the cost of the team license is so low that having things "just work" for everyone is a lot cheaper than constant struggles and paying people to identify problems.
Docker Desktop is really robust, it's a dependable tool and absolutely worth using. It's also free until you're mega successful and are generating 8 figures of revenue.
Comment by figmert 34 minutes ago
Shouldn't be using podman compose. It's flimsy and doesn't work very well, and I'm pretty sure it doesn't have Red Hat's direct support.
Instead, activate Podman's Docker API compatibility socket, and simply set your `DOCKER_HOST` env var to that, and use your general docker client commands such as `docker`, `docker compose` and anything else that uses the Docker API. There are very few things that don't work with this, and the few things that don't are advanced setups.
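Concretely, on a typical rootless Linux setup that looks something like the following (on macOS, the podman machine VM exposes an equivalent socket whose path `podman machine inspect` reports):

```shell
# enable Podman's Docker-compatible API socket (rootless)
systemctl --user enable --now podman.socket

# point Docker CLI tooling at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# plain docker / docker compose now talk to Podman
docker info
```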
Comment by k8sToGo 1 hour ago