A Safer Container Ecosystem with Docker: Free Docker Hardened Images

Posted by anttiharju 16 hours ago

317 points | 71 comments

Comments

Comment by ShakataGaNai 3 hours ago

> Open Source

Where? Let's take a random example: https://hub.docker.com/hardened-images/catalog/dhi/traefik

Ok, where is the source? Open source means I can build it myself, maybe because I'm working in an offline/airgapped/high compliance environment.

I found a "catalogue" https://github.com/docker-hardened-images/catalog/blob/main/... but this isn't a build file; it's input for some... specialized DHI tool to build? Nothing at https://github.com/docker-hardened-images shows me docs for building it myself, or any sort of "dhi" tool.

Comment by TheDong 14 minutes ago

Docker has to maintain relatively complicated looking build instructions like this to make these images: https://github.com/docker-hardened-images/catalog/blob/b5c7a...

Meanwhile, nix has already packaged more software than any other distro, and the vast majority of its software can be put into a container image with no additional dependencies (i.e. "hardened" in the same way these are) with exactly zero extra work specific to each package.

The nixpkgs repository already contains the instructions to build and isolate outputs, there's already massive cache infrastructure set up, builds are largely reproducible, and Docker will have to build all of that for their own tool to reach parity... and without a community behind it like nix has.
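
For a rough sense of what that zero-extra-work path looks like, here is a minimal sketch using nixpkgs' dockerTools; the flake output name and the packaged software are just illustrative assumptions, not anything from the post:

    # flake output assumed to be defined roughly as:
    #   packages.x86_64-linux.image = pkgs.dockerTools.buildLayeredImage {
    #     name = "redis";
    #     contents = [ pkgs.redis ];
    #   };
    nix build .#image        # produces ./result, an image tarball containing only the redis closure
    docker load < result     # load it into the local Docker daemon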

Comment by SomaticPirate 14 hours ago

Wow, the "hardened image" market is getting saturated. I saw at least 3 companies offering this at KubeCon.

Chainguard came to this first (arguably by accident, since they had several other offerings before they realized that people would pay (?!!) for an image that reported zero CVEs).

In a previous role, I found that the value of this for startups is immense. Large enterprise deals can quickly be killed by a security team that replies with "scanner says no". Chainguard offered images that report 0 CVEs and would basically remove this barrier.

For example, a common CVE that I encountered was a glibc High CVE. We could pretty convincingly show that our app did not use this library in a way that made it vulnerable, but it didn't matter. A High CVE is a full stop for most security teams. We migrated to a Wolfi image and the scanner reported 0. Cool.

But with other orgs like Minimus (from the founders of Twistlock) coming into this, it looks like it's about to be crowded.

There is even a govt project called Ironbank to offer something like this to the DoD.

Net positive for the ecosystem but I don't know if there is enough meat on the bone to support this many vendors.

Comment by fossa1 13 hours ago

The real question isn't whether the market is saturated, it's whether it still exists once Docker gives away the core value prop for free.

Comment by xyzzy123 2 hours ago

Given Docker's track record it won't be free indefinitely; this is a move to gauge demand and generate leads.

Comment by ExoticPearTree 13 hours ago

Most likely yes. There are a lot of enterprises out there that only trust paid subscriptions.

Paying for something “secure” comes with the benefit of risk mitigation: we paid X to give us a secure version of Y, hence it's not our fault “bad thing” happened.

Comment by MrDarcy 12 hours ago

Counterpoint: most likely no, it really is about all the downstream impacts of critical and high findings in scanners. The risk of failing a SOC2 audit, for example. Once that risk is removed, the value prop is also removed.

Comment by staticassertion 6 hours ago

I don't think this is the case here. The reason you want to lower your CVEs is to say "we're compliant" or "it's not our fault a bad thing happened, we use hardened images". Paying doesn't really change that - your SOC2 doesn't ask how much you spent, it asks what your patching policy is. This makes that checkbox free.

Comment by raesene9 14 hours ago

Yep, differentiation is tricky here. Chainguard are expanding out to VM images and programming-language repos, but the core hardened-container-image space has a lot of options.

The question I'd be interested in is, outside of markets where there are a lot of compliance requirements, how much demand is there for this as a paid service...

People like lower-CVE images, but are they willing to pay for them? I guess that's an advantage for Docker's offering: if it's free, there's less friction in trying it out compared to a commercial offering.

Comment by staticassertion 6 hours ago

If you distribute images to your customers, it is a huge benefit not to have them come back with CVEs that really don't matter but are still going to make them freak out.

Comment by idiotsecant 12 hours ago

Depends what type of shop. If you're in a big dinosaur org and you 'roll your own' and it ends up having a vulnerability, you get fired. If you pay someone else and it ends up having a vulnerability, you get to blame it on the vendor.

Comment by raesene9 10 hours ago

Perhaps in theory, but I’d be willing to wager that most dinosaur orgs have so many unpatched vulns that they would need to fire everyone in their IT org to cover just the criticals.

Comment by bigstrat2003 11 hours ago

> There is even a govt project called Ironbank to offer something like this to the DoD.

Note that you don't have to be DoD to use Iron Bank images. They are available to other organizations too, though you do have to sign up for an account.

Comment by firesteelrain 6 hours ago

Many IronBank images have CVEs because many are based on ubi8/9, and while some have ubi8/9-micro bases, there are still CVEs. IronBank will disposition the criticals and highs. You can access their Vulnerability Tracking Tool and get a free report.

Some images, like Vault, are pretty bare (e.g. no shell).

Comment by nonameiguess 7 hours ago

Ironbank was actually doing this before Chainguard existed, and as another commenter mentioned, it's not restricted to the DoD and is also free for anyone to use, though you do need an account.

My company makes its own competing product that is basically the same thing, and we (and I specifically) were pretty heavily involved in early Platform One. We sell it, but it's basically just a free add-on to existing software subscriptions, an additional inducement to make a purchase; it costs nothing extra on its own.

In any case, I applaud Docker. This can be a surprisingly frustrating thing to do, because you can't always just rebase onto your pre-hardened base image and still have everything work without taking some care to understand the application you're delivering, which is not your application. It was always my biggest complaint with Ironbank and why I would not recommend anyone actually use it. They break containers constantly because hardening to them just means copying binaries out of the upstream image into a UBI container they patch daily to ensure it never has any CVEs. Sometimes this works, but sometimes it doesn't, and it's fairly predictable: every time Fedora takes a new glibc version that RHEL doesn't have yet, everything that links against it starts segfaulting when you try to copy from one to the other. I've told them this many times, but they still don't seem to get it and keep doing it. Plus, they break tags with the daily patching of the same application version, and you can't pin to a sha because Harbor only holds onto three orphaned shas that are no longer associated with a tag.

So, the short and long of it: I don't know about meat on the bone, but there is real demand and it's getting greater, at least in any kind of government or otherwise regulated business, because the government itself is mandating better supply chain provenance. I don't think it entirely makes sense, frankly. The end customers don't seem to understand that, sure, we're signing the container image because we "built" it in the sense that we put together the series of tarballs described by a json file, but we're also delivering an application we didn't develop, on a base image full of upstream GNU/Linux packages we also didn't develop, and though we can assure you all of our employees are US citizens living in CONUS, we're delivering open source software. It's been contributed to by thousands of people from every continent on the planet stretching decades into the past.

Unfortunately, a lot of customers and sales people alike don't really understand how the open source ecosystem works and expect and promise things that are fundamentally impossible. Nonetheless, we can at least deliver the value inherent in patching the non-application components of an image more frequently than whoever creates the application and puts the original image into a public repo. I don't think that's a ton of value, personally, but it's value, and I've seen it done very wrong with Ironbank, so there's value in doing it right.

I suspect it probably has to be a free add-on to some other kind of subscription in most cases, though. It's hard for me to believe it can really be a viable business on its own. I guess Chainguard is getting by somehow, but it also kind of feels like they're an investor darling getting by on its founders' reputations from their past work more than on the current product. It's the container-ecosystem equivalent of selling an enterprise Linux distro, and I guess at least Red Hat, SUSE, and Canonical have all managed to do that, but not by just selling the Linux distro. They need other products plus support and professional services.

I think it's a no-brainer for anyone already selling a Linux distro to do this on top of it, though. You've already got the build infrastructure and organizational processes and systems in place.

Comment by khana 6 hours ago

[dead]

Comment by tj_591 12 hours ago

Hi, I work at Docker. Really appreciate the thoughtful discussion here. We’re excited to make Hardened Images free and open because we believe secure-by-default should be the starting point for every developer, not something you bolt on later.

A big part of this for us is transparency. That’s why every image ships with VEX statements, extensive attestations, and all the metadata you need to actually understand what you’re running. We want this to be a trustworthy foundation, not just a thinner base image.

We’re also extending this philosophy beyond base images into other content like MCP servers and related components, because the more of the stack that is verifiable and hardened by default, the better it is for the ecosystem.

A few people in the thread asked how this is sustainable. The short answer is that we do offer an enterprise tier for companies that need things like contractual continuous patching SLAs, regulated-industry variants (FIPS, etc.), and secure customizations with full provenance and attestations. Those things carry very real ongoing costs, so keeping them in Enterprise allows us to make the entire hardened catalog free for the community.

Glad to see the conversation happening here. We hope this helps teams ship software with a stronger security posture and a bit more confidence.

Comment by inChargeOfIT 12 hours ago

It's free for now, just like registries were "free" and Docker Desktop was free... until they weren't. I am not against Docker capitalizing and charging for their services (as they should); however, the pattern of offering a service for free and then reneging after it's widely adopted makes me hesitant to adopt any of their offerings.

Comment by sschueller 12 hours ago

Let's hope cases like this will make companies think twice before doing a switcheroo the next time: https://topclassactions.com/lawsuit-settlements/open-lawsuit...

Comment by m463 2 hours ago

More annoying is that they prevented the software from being configured to use a registry other than theirs.

To the point that Red Hat created Podman, which can do what you want.

Comment by BSVogler 14 hours ago

A first look shows me that this is not an easy drop-in replacement. The first thing is that it requires a log-in, which makes me wonder why. Perhaps some upselling is coming.

With Bitnami discontinuing their offer, we recently switched to other providers. For some we are using a Helm chart, and this new offer provides some Helm charts, but for some software just the image. I would be interested to give this a try, but e.g. the Python image only has various '(dev)' images while the guide mentions the non-dev images. So this requires some planning.

EDIT: Digging deeper, I notice it requires a PAT, and a PAT is bound to a personal account. I guess you need the enterprise offering for organisation support. I am not going to waste my time contacting them for an enterprise offer for a small start-up. What is the use case for CVE-hardened images that you cannot properly use in CI/CD, only on your dev machine? Are there companies that need to follow compliance rules or need this security guarantee but don't have CI/CD in place?
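
For reference, the mechanics being complained about would look roughly like this in CI; the secret names and the image reference below are hypothetical, the point is just that the token belongs to a person rather than the org:

    # CI step: authenticate with a personal access token stored as a CI secret
    echo "$DOCKER_HUB_PAT" | docker login -u "$DOCKER_HUB_USER" --password-stdin
    docker pull dhi/python:3.13-dev    # hypothetical hardened-image reference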

Comment by parasubvert 13 hours ago

I think Docker for Teams is $15/month per seat. https://www.docker.com/pricing/

The enterprise hardened images license seems to be a different offering for offline mirroring or more strict compliance…

The main reason for CVE-hardened images is that it's hard to trust individuals to do it right at scale, even with CI/CD. You're having to wire together your own scan & update process. In practice teams will pin versions, delay fixes, turn off scanning, etc. This is easy mode.
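
As a sketch of the "wire it together yourself" alternative, this is roughly the scheduled CI job teams end up maintaining; the image name is a placeholder and Trivy stands in for whichever scanner you use:

    # rebuild on a schedule so upstream base-image patches actually land
    docker build --pull -t myorg/app:nightly .
    # gate the pipeline on unpatched high/critical findings
    trivy image --severity HIGH,CRITICAL --exit-code 1 myorg/app:nightly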

Comment by 0_gravitas 12 hours ago

The proximity of this and Bitnami pulling their 'free hardened images' is amusing, and I'm just as concerned about another (eventual, but imminent) rug-pull down the line. Docker Inc historically seems comfortable with the typical VC/"growth"-fueled strat of:

1. 'generous' initial offering to establish a userbase/ecosystem/network-effect

2. "oh teehee we're actually gonna have to start charging for that sorry we know that you've potentially built a lot of your infrastructure around this thing"

3. $$$

Comment by nine_k 15 hours ago

The news: Docker Hardened Images (DHI) are now free to use for everyone. No reason not to use them.

Offering hardening for custom images looks like a reasonable way for Docker to have a source of sustained income. Regulated industries like banks, insurers, or governmental agencies are likely interested.

Comment by scottydelta 15 hours ago

After their last rug pull, when they started charging projects for the registry after parading it as a fully free service for almost a decade, it has become hard to trust anything free.

Bait and switch once the adoption happens has become way too common in the industry.

Comment by politelemon 14 hours ago

Given the wealth and productivity creation that they're responsible for enabling across the industry, they deserve to be paid for it. There is no way for them to have achieved this with zero friction.

Comment by acdha 10 hours ago

I totally support companies charging for things which cost money to make, but the strategy of saying something is free and later reneging is very risky. You’ll get some license sales after cold-calling people’s bosses or breaking builds, but they won’t thank you for it.

Comment by cedws 14 hours ago

Docker is a company I just can’t hate on. They’ve completely transformed how software is deployed. Containers gained so much momentum it kind of outgrew them and they lost a lot of potential business. I would hardly call beginning to charge after a decade of free service a rug pull, especially now that dependence on Docker’s registry is shrinking all the time.

Comment by simlevesque 14 hours ago

I don't hate them. But I don't want to depend on them for any product I manage.

Comment by verdverm 14 hours ago

Have you checked out Dagger?

It's what the people who created OG Docker are building now

Comment by scoodah 13 hours ago

Dagger is one of those things I want to like, but find incredibly painful to use in practice.

Comment by cedws 11 hours ago

I have tried it but wasn't a fan. I tried to convert one of our Actions workflows and that proved to be a PITA that I gave up on. It seems now the project is pivoting into AI stuff.

Comment by nickstinemates 3 hours ago

Well, one of them.

Comment by seemaze 13 hours ago

Feels like they're trying to put the cat back in the bag and recoup a fraction of the exodus from the registry thing.

Comment by pploug 15 hours ago

Projects are not charged for hub usage

Comment by skyline879 15 hours ago

When was this?

Comment by imglorp 14 hours ago

> 100 pulls per 6 hours for unauthenticated users and 200 pulls per 6 hours for Docker Personal users

Not a problem for casual users but even a small team like mine, a dozen people with around a dozen public images, can hit the pull limit deploying a dozen landscapes a day. We just cache all the public images ourselves and avoid it.

https://www.docker.com/blog/revisiting-docker-hub-policies-p...
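
For anyone wanting to set up the same kind of cache, a minimal sketch using the stock registry image as a pull-through mirror; the port and URLs are illustrative:

    # run a local mirror that proxies Docker Hub
    docker run -d -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2
    # then point each daemon at it in /etc/docker/daemon.json:
    #   { "registry-mirrors": ["http://localhost:5000"] }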

Comment by nunez 2 hours ago

It becomes a problem if you're testing something in local Kubernetes clusters that are ephemeral

Comment by simlevesque 15 hours ago

https://www.docker.com/developers/free-team-faq/

> Is Docker sunsetting the Free Team plan?

> No. Docker communicated its intent to sunset the Docker Free Team plan on March 14, 2023, but this decision was reversed on March 24, 2023.

Comment by pploug 14 hours ago

For OSS projects with heavy pulls, the (free) DSOS programme removes all rate limits on their public images. The intention was never to impact projects, but rather mega-corporations using Hub as free hosting:

https://www.docker.com/community/open-source/application/

Comment by yjftsjthsd-h 14 hours ago

> No reason not to use them.

There's an excellent reason: they're login-gated, which is at best unnecessary friction. It took me straight from "oh, let me try it" to "nope, not gonna bother".

Comment by dudeWithAMood 14 hours ago

I am a little confused because I got a 401 when I tried to pull an image from there. Do I need a login or something? For a free image it sure doesn't feel that way.

Comment by darkwater 14 hours ago

This smells like it's LLM-generated.

Comment by wolfi1 13 hours ago

Hardened images are cool, definitely, but I'm not sure what it actually means. Just systems with the latest patches, or stricter config rules as well? For example: would any of these images have mitigated or even prevented Shai-Hulud [12]?

Comment by divmain 12 hours ago

Docker Hardened Images integrate Socket Firewall, which provides protection from threats like Shai-Hulud during build steps. You can read our partnership announcement over here: https://socket.dev/blog/socket-firewall-now-available-in-doc...

Comment by kevinb2222 13 hours ago

Docker Hardened Images are built from scratch with the minimal packages needed to run the image. The hardened images didn't contain any of the packages compromised by Shai-Hulud.

https://www.docker.com/blog/security-that-moves-fast-dockers...

Note: I work at Docker

Comment by wolfi1 13 hours ago

Yeah, but if you had installed your software with npm, would the postinstall script have been executed?

Comment by shepherdjerred 12 hours ago

Of course? They are only concerned with the base image. What you do with it is your responsibility

This would be like expecting AWS to protect your EC2 instance from a postinstall script

Comment by acdha 10 hours ago

The difference is that they’re charging extra for it, so people want to see benefits they could take to their management to justify the extra cost. The NPM stuff has a lot of people’s attention right now, so it’s natural to ask whether something would have blocked what your CISO is probably asking about, since you have an unlimited number of possible security purchase options. One of the Docker employees mentioned one relevant feature: https://socket.dev/blog/socket-firewall-now-available-in-doc...

Update the analogy to “like EC2 but we handle the base OS patching and container runtime” and you have Fargate.

Comment by kevinb2222 13 hours ago

Hardened base images don't restrict what you add on top of them. That's where scanners like Docker Scout, Trivy, Grype, and more come in to review the complete image that you have built.
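
Concretely, that means pointing the scanner at the final image you ship rather than the base; any of the scanners named above would do, and the image name here is just a placeholder:

    docker scout cves myorg/app:1.2.3   # Docker's own scanner
    trivy image myorg/app:1.2.3
    grype myorg/app:1.2.3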

Comment by tecleandor 15 hours ago

Is this the response to the Bitnami/VMWare/Broadcom Helm charts thing?

Comment by nunez 2 hours ago

Yes IMO

Comment by jacques_chester 13 hours ago

My guess is that it's a response to "Chainguard are growing so fast that VCs have fought each other to give them hundreds of millions in 3 years despite having no AI play".

Comment by lrvick 7 hours ago

For anyone that wants dead-simple, LFS-style, full-source-bootstrapped, deterministic, multi-party compiled/signed, container-native images with hash pinning for your entire dependency graph, that will be free forever, check out stagex.

None of the alternatives come anywhere close to what we needed to satisfy a threat model that trusts no single maintainer or computer, so we started over from actually zero.

https://stagex.tools

Comment by politelemon 14 hours ago

I appreciate what they're doing here, which is something I haven't seen other vendors doing.

Comment by jiehong 14 hours ago

At $work, we switched everything to Red Hat’s UBI images (micro and minimal) for that.

But, we pay for support already.

Nice from docker!

Comment by nunez 2 hours ago

Red Hat launched an equivalent effort in October: https://www.redhat.com/en/technologies/linux-platforms/enter...

Comment by jitl 15 hours ago

I went to "Hardened Images Catalog" and searched for pgbouncer, not found (https://hub.docker.com/hardened-images/catalog?search=pgboun...)

There's a "Make a request" button, but it links to this 404-ing GitHub URL: https://github.com/docker-hardened-images/discussion/issues

Oh well. Hope it's good stuff otherwise.

Comment by pploug 15 hours ago

Thanks for reporting; the team is fixing it. The right URL is: https://github.com/docker-hardened-images/catalog/issues/

Comment by kamrannetic 15 hours ago

No need for Chainguard/Bitnami anymore?

Comment by progbits 15 hours ago

Bitnami is in Broadcom hell; nobody should use that.

Chainguard still has better CVE response time and can better guarantee you zero active exploits found by your prod scanners.

(No affiliation with either, but we use chainguard at work, and used to use bitnami too before I ripped it all out)

Comment by mmbleh 15 hours ago

CVE response time is a toss-up; they all patch fast. Chainguard can only guarantee zero active exploits because they control their own exploit feed and don't publish anything on it until they've patched. So while this makes it look better, it may not actually be better.

Comment by dlor 14 hours ago

Hey!

I work at Chainguard. We don't guarantee zero active exploits, but we do have a contractual SLA we offer around CVE scan results (those aren't quite the same thing unfortunately).

We do issue an advisory feed in a few versions that scanners integrate with. The traditional format we used (which is what most scanners supported at the time) didn't have a way to include pending information so we couldn't include it there.

The basic flow was: the scanner finds a CVE and alerts, we issue a statement showing when and where we fixed it, and the scanner understands that and doesn't show it in versions after that.

So there wasn't really a spot to put "this is present"; that was the scanner's job. Not all scanners work that way though, and some just rely on our feed and don't do their own homework, so it's hit or miss.

We do have another feed now that uses the newer OSV format; in that feed we have all the info around when we detect it, when we patch it, etc.

All this info is available publicly and shown in our console, many of them you can see here: https://github.com/wolfi-dev/advisories

You can take this example: https://github.com/wolfi-dev/advisories/blob/main/amass.advi... and see the timestamps for when we detected CVEs, in what version, and how long it took us to patch.

Comment by digi59404 15 hours ago

FWIW - a whole host of the pre-IPO GitLab folks went to Chainguard. A lot of them, many in leadership roles. Most importantly, in sales leadership. These are people who don’t really believe in high-pressure sales. Rather, they aim to show the value and not squeeze customers for profit or make a number on a chart go up.

Do with that knowledge what you may.

Comment by chrisweekly 13 hours ago

Thanks for sharing. This kind of "color" isn't always easy to ascertain, but (for me, at least) it plays a part in vendor selection.

Comment by movedx 13 hours ago

Thanks for only doing this like, ten years later after all the damage is done.

Comment by mertleee 11 hours ago

[dead]

Comment by twelvechess 15 hours ago

[dead]

Comment by fire2dev 14 hours ago

[dead]

Comment by cgfjtynzdrfht 12 hours ago

Just hear me out.

What about a safer container ecosystem without Docker?

Podman has solved rootless containers and everything else under the sun by now.

All Docker is doing is playing catch-up.

But guess what? They are obsolete. It's just a matter of time until they go the way of HashiCorp's Vagrant.

Docker is only making money off enterprise whales by now, and eventually that profit will dry up, too.

If you are still relying on docker, it is time to migrate.

https://podman-desktop.io/docs/migrating-from-docker

Comment by nickjj 9 hours ago

> If you are still relying on docker, it is time to migrate.

I did work for a client recently where they were using Podman Desktop and developers were using MacBooks (M-series).

They tried to run an amd64 image on their machines. When building a certain Docker image they had, it was segfaulting with a really generic error, and it wasn't specific to a RUN command: if you kept commenting out the failing command, the error would just appear on the next one. The stack trace was related to Podman Compose's code base.

Turns out it's a verified bug with Podman with an open issue on GitHub that's affecting a lot of people.

I've been using Docker for 10 years with Docker Engine, Compose, Desktop, Toolbox, etc. and never once have I seen a single segfault, not even once.

You know what's interesting? It worked perfectly with Docker Desktop. Literally install Docker Desktop, build it and run it. Zero issues and up and running in 10 minutes.

That company got to pay me for a few hours of debugging, and now they are a happily paying client for Docker Desktop, because the cost of the team license is so low that having things "just work" for everyone is a lot cheaper than constant struggles and paying people to identify problems.

Docker Desktop is really robust; it's a dependable tool and absolutely worth using. It's also free until you're mega-successful and generating 8 figures of revenue.

Comment by figmert 34 minutes ago

> Podman Compose

Shouldn't be using podman compose. It's flimsy and doesn't work very well, and I'm pretty sure it doesn't have Red Hat's direct support.

Instead, activate Podman's Docker API compatibility socket, set your `DOCKER_HOST` env var to it, and use your usual Docker client commands such as `docker`, `docker compose`, and anything else that uses the Docker API. There are very few things that don't work with this, and the ones that don't are advanced setups.
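
On a Linux host with rootless Podman, that setup is roughly the following (the socket path assumes the systemd user unit; on macOS, `podman machine` exposes an equivalent socket):

    systemctl --user enable --now podman.socket
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
    docker compose up -d    # regular Docker clients now talk to Podman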

Comment by k8sToGo 1 hour ago

Podman has plenty of problems. Rootless, for example, has super slow networking. Last time I checked, it was not a solved problem.