I got hacked: My Hetzner server started mining Monero
Posted by jakelsaunders94 12 hours ago
Comments
Comment by 3np 10 hours ago
I disrecommend UFW.
firewalld is a much better pick in current year and will not grow unmaintainable the way UFW rules can.
firewall-cmd --permanent --set-default-zone=block
firewall-cmd --permanent --zone=block --add-service=ssh
firewall-cmd --permanent --zone=block --add-service=https
firewall-cmd --permanent --zone=block --add-port=80/tcp
firewall-cmd --reload
Configuration is backed by XML files in /etc/firewalld and /usr/lib/firewalld instead of the brittle pile of sticks that is the ufw rules files. Use the nftables backend unless you have your own reasons for needing legacy iptables.
Specifically for Docker, it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway. Depending on your configuration, the firewall rules in OP may not actually do anything to prevent Docker from opening incoming ports.
Newer versions of firewalld give an easy way to configure this via StrictForwardPorts=yes in /etc/firewalld/firewalld.conf.
Comment by dizhn 2 hours ago
In my own setup, the VM that hosts my Docker stuff has 10.0.10.11. It doesn't even have its own public IP, meaning I could actually expose to 0.0.0.0 if I wanted to, but things might change in the future so it's a precaution. That IP is only accessible via WireGuard and by the other machines that share the same subnet, so reverse proxying with Caddy on a public IP is super easy.
Comment by zwnow 1 hour ago
Comment by szszrk 8 minutes ago
So you can create multiple addresses with multiple separate "domains" mapped statically in /etc/hosts, and allow multiple apps to listen on "the same" port without conflicts.
Comment by exceptione 9 hours ago
> Specifically for docker it is a very common gotcha that the container runtime can and will bypass firewall rules and open ports anyway.
Like I said in another comment, drop Docker, install podman.
Comment by 3np 9 hours ago
Comment by jsheard 9 hours ago
Comment by 3np 9 hours ago
Same as for docker, yes?
Comment by exceptione 8 hours ago
Networking is just better in podman.
Comment by joshuaissac 6 hours ago
That page does not address rootless Docker, which can be installed (not just run) without root, so it would not have the ability to clobber firewall rules.
Comment by figassis 4 hours ago
Comment by gus_ 8 hours ago
In order to stop these attacks, restrict outbound connections from unknown / not allowed binaries.
This kind of malware in particular requires outbound connections to the mining pools. Other malware downloads scripts or binaries from remote servers, or tries to communicate with its C2 servers.
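A sketch of what default-deny egress could look like in plain nftables (the dedicated service user "webapp" and the DNS allowance are illustrative assumptions, not from the comment):

```
# /etc/nftables.d/egress.nft -- sketch: drop all outbound traffic except
# what a dedicated service user initiates.
table inet egress {
  chain output {
    type filter hook output priority 0; policy drop;
    oifname "lo" accept
    ct state established,related accept
    meta skuid "webapp" accept   # only the app's user may initiate outbound
    udp dport 53 accept          # DNS; tighten or remove as needed
  }
}
```

A compromised process running as any other user (say, a miner dropped into a container mapped to an unexpected uid) then simply cannot reach its pool.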
On the other hand, removing exec permissions from /tmp, /var/tmp and /dev/shm is also useful.
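And a hedged /etc/fstab sketch of the noexec idea (sizes illustrative; beware that some package upgrades unpack maintainer scripts into /tmp and can break under noexec):

```
# /etc/fstab sketch: world-writable directories without exec/suid/dev
tmpfs     /tmp      tmpfs  rw,noexec,nosuid,nodev,size=1G  0 0
tmpfs     /dev/shm  tmpfs  rw,noexec,nosuid,nodev          0 0
# /var/tmp must persist across reboots; a self-bind with options works
# on reasonably recent util-linux:
/var/tmp  /var/tmp  none   bind,noexec,nosuid,nodev        0 0
```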
Comment by crote 1 hour ago
If this weren't the case, plenty of containers could probably have a fully read-only filesystem.
Comment by PeterStuer 1 hour ago
Comment by reddalo 47 minutes ago
Comment by crote 1 hour ago
Imagine container A which exposes tightly-coupled services X and Y. Container B should be able to access only X, container C should be able to access only Y.
For some reason there just isn't a convenient way to do this with Docker or Podman. Last time I looked into it, it required manually juggling the IP addresses assigned to the container and having the service explicitly bind to them - which is just needlessly complicated. Can firewalls solve this?
Comment by peanut-walrus 1 hour ago
Comment by sph 3 hours ago
It’s not like iptables was any better, but it was more intuitive as it spoke about IPs and ports, not high-level arbitrary constructs such as zones and services defined in some XML file. And since firewalld uses iptables/nftables underneath, I wonder why I need a worse, leaky abstraction on top of what I already know.
I truly hate firewalld.
Comment by bingo-bongo 2 hours ago
I’d love a Linux firewall configured with a sane config file and I think BSD really nailed it. It’s easy to configure and still human readable, even for more advanced firewall gateway setups with many interfaces/zones.
I have no doubt that Linux can do all the same stuff feature-wise, but oh god the UX :/
Comment by ptman 1 hour ago
Comment by Hendrikto 1 hour ago
Comment by Ey7NFZ3P0nzAe 3 hours ago
Comment by skirge 3 hours ago
Comment by rglover 8 hours ago
Comment by denkmoon 9 hours ago
Comment by 3np 9 hours ago
Comment by egberts1 5 hours ago
I mean there are some payload-over-payload setups like GRE VPE/VXLAN/VLAN or IPsec that need to be written in raw nft if using Foomuuri, but it works!
But I love the Shorewall approach and your configuration gracefully encapsulated Shorewall mechanic.
Disclaimer: I maintain vim-syntax-nftables syntax highlighter repo at Github.
Comment by lloydatkinson 9 hours ago
This sounds like great news. I followed some of the open issues about this on GitHub and it never really got a satisfactory fix. I found some previous threads on this "StrictForwardPorts": https://news.ycombinator.com/item?id=42603136.
Comment by kunley 44 minutes ago
Comment by esaym 3 hours ago
I'm not even sure what to say, or think, or even how to feel about the frontend ecosystem at this point. I've been debating leaving the whole "web app" ecosystem as my main employment venture and applying to some places requiring C++. C++ seems much easier to understand than whatever the latest frontend fad is. /rant
Comment by h33t-l4x0r 2 hours ago
Comment by mnahkies 2 hours ago
Unless you're running a static html export - eg: not running the nextjs server, but serving through nginx or similar
Comment by tgtweak 11 hours ago
Comment by tracker1 10 hours ago
Comment by 3eb7988a1663 5 hours ago
Comment by hxtk 3 hours ago
Unfortunately, there is no way to specify those `emptyDir` volumes as `noexec` [1].
I think the docker equivalent is `--tmpfs` for the `emptyDir` volumes.
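Worth noting that Docker's `--tmpfs` accepts mount options, so the scratch mount itself can be made non-executable (image name and size are illustrative):

```shell
# Sketch: read-only root filesystem plus a small, noexec scratch mount
docker run --read-only \
  --tmpfs /tmp:rw,noexec,nosuid,size=64m \
  my-app:latest
```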
Comment by flowerthoughts 2 hours ago
Having such a nice layered buildsystem with mountpoints, I'm amazed Docker made readonly an afterthought.
Comment by subscribed 1 hour ago
Comment by s_ting765 3 hours ago
Comment by freedomben 10 hours ago
Comment by jakelsaunders94 10 hours ago
Comment by fragmede 10 hours ago
Comment by Koffiepoeder 2 hours ago
Comment by tgtweak 4 hours ago
Comment by miladyincontrol 9 hours ago
Comment by danparsonson 10 hours ago
Comment by tete 9 hours ago
There are way more important things, like actually knowing that you are running software with a widely known RCE that doesn't even use established mechanisms to sandbox itself, it seems.
The way the author describes docker being the savior appears to be sheer luck.
Comment by danparsonson 4 hours ago
Good security is layered.
Comment by seszett 3 hours ago
Comment by spoaceman7777 2 hours ago
Just use a firewall.
Comment by seszett 2 hours ago
The firewall is there as a safeguard in case a service is temporarily misconfigured, it should certainly not be the only thing standing between your services and the internet.
Comment by Nextgrid 10 hours ago
I guess you can have the appserver fully firewalled and have another bastion host acting as an HTTP proxy, both for inbound as well as outbound connections. But it's not trivial to set up especially for the outbound scenario.
Comment by danparsonson 10 hours ago
Comment by Nextgrid 10 hours ago
Comment by denkmoon 9 hours ago
Comment by drnick1 4 hours ago
Comment by metafunctor 1 hour ago
In this model, hosts don’t need any direct internet connectivity or access to public DNS. All outbound traffic is forced through the proxy, giving you full control over where each host is allowed to connect.
It’s not painless: you must maintain a whitelist of allowed URLs and HTTP methods, distribute a trusted CA certificate, and ensure all software is configured to use the proxy.
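As a sketch of what such a whitelist might look like with Squid as the egress proxy (hostnames are illustrative assumptions; TLS interception would additionally need the CA distribution the parent mentions):

```
# squid.conf sketch: explicit forward proxy, default-deny egress
acl allowed_dst dstdomain .github.com registry.npmjs.org
acl allowed_methods method GET POST CONNECT
http_access allow allowed_dst allowed_methods
http_access deny all
http_port 3128
```

Each host then gets `http_proxy`/`https_proxy` pointed at the proxy and no default route to the internet.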
Comment by t0mk 1 hour ago
I've been doing it for a really long time already, and I'm still not sure whether it has any benefit or it's just an umbrella in a sideways storm.
Comment by lordnacho 1 hour ago
I don't think it's wrong, it's just not the same as eg using a yubikey.
Comment by danw1979 1 hour ago
I know port scanners are a thing but the act of using non-default ports seems unreasonably effective at preventing most security problems.
Comment by jraph 1 hour ago
Comment by dizhn 2 hours ago
Comment by jwrallie 9 hours ago
Comment by figassis 4 hours ago
App servers run docker, with images that run a single executable (no os, no shell), strict cpu and memory limits. Most of my apps only require very limited temporary storage so usually no need to mount anything. So good luck executing anything in there.
I used, way back in the day, to run Wordpress sites. Would get hacked monthly every possible way. Learned so much, including the fact that often your app is your threat. With Wordpress, every plugin is a vector. Also the ability to easily hop into an instance and rewrite running code (looking at you scripting languages incl JS) is terrible. This motivated my move to Go. The code I compiled is what will run. Period.
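The "single executable, no OS, no shell" image the parent describes could be sketched as a multi-stage Dockerfile like this (module path and user id are illustrative assumptions, not from the comment):

```dockerfile
# Build stage: compile a fully static Go binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: no base OS, no shell, nothing to exec but the app itself
FROM scratch
COPY --from=build /app /app
USER 65534:65534
ENTRYPOINT ["/app"]
```

With no shell in the image, a dropped payload has nothing to invoke it with, and there's no package manager or interpreter to live off.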
Comment by 3abiton 5 hours ago
Comment by dizhn 2 hours ago
Comment by V__ 12 hours ago
Is that the case, though? My understanding was, that even if I run a docker container as root and the container is 100% compromised, there still would need to be a vulnerability in docker for it to “attack” the host, or am I missing something?
Comment by d4mi3n 11 hours ago
The core of the problem here is that process isolation doesn't save you from whole classes of attack vectors or misconfigurations that open you up to nasty surprises. Docker is great, just don't think of it as a sandbox to run untrusted code.
Comment by tgsovlerkhgsel 2 hours ago
Of course if you have a kernel exploit you'd be able to break out (this is what gvisor mitigates to some extent), nothing seems to really protect against rowhammer/memory timing style attacks (but they don't seem to be commonly used). Beyond this, the main misconfigurations seem to be too wide volume bindings (e.g. something that allows access to the docker control socket from inside the container, or an obviously stupid mount like mounting your root inside the container).
Am I missing something?
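One quick audit for the bind-mount misconfigurations mentioned above (a sketch; assumes the docker CLI is available):

```shell
# List each running container's bind mounts and privileged flag;
# a docker.sock bind or privileged=true is a red flag.
docker ps -q \
  | xargs -r docker inspect --format '{{.Name}}: {{.HostConfig.Binds}} privileged={{.HostConfig.Privileged}}' \
  | grep -Ei 'docker.sock|privileged=true'
```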
Comment by socalgal2 11 hours ago
Comment by freedomben 10 hours ago
Comment by fragmede 10 hours ago
Comment by TacticalCoder 8 hours ago
Attacker now needs a Docker exploit and then a VM exploit before getting to the hypervisor (and, no, pwning the VM ain't the same as pwning the hypervisor).
Comment by windexh8er 4 hours ago
Not only does it allow me to partition the host for workloads but I also get security boundaries as well. While it may be a slight performance hit the segmentation also makes more logical sense in the way I view the workloads. Finally, it's trivial to template and script, so it's very low maintenance and allows for me to kill an LXC and just reprovision it if I need to make any significant changes. And I never need to migrate any data in this model (or very rarely).
Comment by briHass 5 hours ago
Comment by dist-epoch 10 hours ago
but will not stop serious malware
Comment by hsbauauvhabzb 10 hours ago
Docker is pretty much the same but supposedly more flimsy.
Both have non-obvious configuration weaknesses that can lead to escapes.
Comment by hoppp 10 hours ago
Comment by hsbauauvhabzb 10 hours ago
Comment by z3t4 3 hours ago
Comment by michaelt 11 hours ago
Second, even if your Docker container is configured properly, the attacker gets to call themselves root and talk to the kernel. It's a security boundary, sure, but it's not as battle-tested as the isolation of not being root, or the isolation between VMs.
Thirdly, in the stock configuration processes inside a docker container can use loads of RAM (causing random things to get swapped to disk or OOM killed), can consume lots of CPU, and can fill your disk up. If you consider denial-of-service an attack, there you are.
Fourthly, there are a bunch of settings that disable the security boundary, and a lot of guides online will tell you to use them. Doing something in Docker that needs to access hot-plugged webcams? Hmm, it's not working unless I set --privileged - oops, there goes the security boundary. Trying to attach a debugger while developing and you set CAP_SYS_PTRACE? Bypasses the security boundary. Things like that.
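On the resource point, the stock defaults can be tightened per container; a sketch (values illustrative, and `--storage-opt` requires a storage driver with quota support, e.g. overlay2 on xfs):

```shell
# Cap memory, CPU, process count, and writable-layer size
docker run \
  --memory=512m --memory-swap=512m \
  --cpus=1.0 \
  --pids-limit=256 \
  --storage-opt size=2G \
  my-app:latest
```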
Comment by cyphar 4 hours ago
Unfortunately, user namespaces are still not the default configuration with Docker (even though the core issues that made using them painful have long since been resolved).
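For reference, enabling it is a one-line daemon config in /etc/docker/daemon.json; Docker then creates a `dockremap` user and maps container root onto an unprivileged host uid range (note it has known incompatibilities with some volume and networking setups):

```json
{
  "userns-remap": "default"
}
```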
Comment by easterncalculus 9 hours ago
Comment by CGamesPlay 6 hours ago
I disagree with other commenters here that Docker is not a security boundary. It's a fine one, as long as you don't disable the boundary, which is as easy as running a container with `--privileged`. I wrote about secure alternatives for devcontainers here: https://cgamesplay.com/recipes/devcontainers/#docker-in-devc...
Comment by flaminHotSpeedo 3 hours ago
The only serious company that I'm aware of which doesn't understand that is Microsoft, and the reason I know that is because they've been embarrassed again and again by vulnerabilities that only exist because they run multitenant systems with only containers for isolation
Comment by Nextgrid 10 hours ago
Are you holding millions of dollars in crypto/sensitive data? Better assume the machine and data is compromised and plan accordingly.
Is this your toy server for some low-value things where nothing bad can happen besides a bit of embarrassment even if you do get hit by a container escape zero-day? You're probably fine.
This attack is just a large-scale automated attack designed to mine cryptocurrency; it's unlikely any human ever actually logged into your server. So cleaning up the container is most likely fine.
Comment by ronsor 11 hours ago
Also, if you've been compromised, you may have a rootkit that hides itself from the filesystem, so you can't be sure of a file's existence through a simple `ls` or `stat`.
Comment by miladyincontrol 9 hours ago
Honestly, citation needed. Very rare unless you're literally giving the container access to write to /usr/bin or other binaries the host is running, the ability to reconfigure your entire /etc, access to sockets like Docker's, or some other insane level of overreach I doubt even the least educated Docker user would allow.
While of course they should be scoped properly, people act like some elusive 0-day container escape will get used on their Minecraft server or personal blog that has otherwise sane mounts, non-admin capabilities, etc. You aren't that special.
Comment by cyphar 4 hours ago
And a shocking number of tutorials recommend bind-mounting docker.sock into the container without any warning (some even tell you to mount it "ro" -- which is even funnier since that does nothing). I have a HN comment from ~8 years ago complaining about this.
Comment by Havoc 11 hours ago
Comment by minitech 11 hours ago
Comment by czbond 11 hours ago
Thanks for mentioning it - but now... how does one deal with this?
Comment by minitech 11 hours ago
* but if you’re used to bind-mounting, they’ll be a hassle
Edit: This is by no means comprehensive, but I feel compelled to point it out specifically for some reason: remember not to mount .git writable, folks! Write access to .git is arbitrary code execution as whoever runs git.
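A hypothetical demonstration of that last point, in a throwaway repo (paths, file names, and messages are all illustrative): hooks live inside .git/hooks, so write access to .git means code execution as whoever next runs git.

```shell
set -e
# Scratch repo standing in for a bind-mounted project directory
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
# "Attacker" with write access to .git plants a hook:
printf '#!/bin/sh\necho "arbitrary code executed" > .pwned\n' > .git/hooks/post-commit
chmod +x .git/hooks/post-commit
# Victim runs an ordinary git command on the host...
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m innocent
# ...and the planted hook has already run with the victim's privileges:
cat .pwned
```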
Comment by 3np 10 hours ago
You might still want to tighten things up. Just adding on the "rootless" part - running the container runtime as an unprivileged user on the host instead of root - you also want to run npm/node as unprivileged user inside the container. I still see many defaulting to running as root inside the container since that's the default of most images. OP touches on this.
For rootless podman, this will run as a user with your current uid and map ownership of mounts/volumes:
podman run -u$(id -u) --userns=keep-id
Comment by trhway 10 hours ago
Not necessarily a vulnerability per se. A bridged adapter, for example, lets you do a lot - a few years ago there was a story about how a guy got root in a container, and because the container used a bridged adapter he was able to intercept traffic of account info updates on GCP.
Comment by TheRealPomax 11 hours ago
Comment by V__ 11 hours ago
Comment by necovek 10 hours ago
Imagine naming this executable "ls" or "echo" and someone having "." in their PATH (which is why you shouldn't): as soon as you run "ls" in this directory, you've run compromised code.
There are obviously other ways to get that executable to run on the host; this is just a simple example.
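A minimal demonstration of the "." in PATH pitfall, in a throwaway directory (everything here is illustrative):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# A malicious executable named after a common command
printf '#!/bin/sh\necho "compromised code ran instead of ls"\n' > ls
chmod +x ls
# With "." at the front of PATH, ./ls shadows /bin/ls
PATH=.:$PATH ls
```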
Comment by marwamc 10 hours ago
OTOH if I had written such a script for linux I'd be looking to grab the contents of $(hist) $(env) $(cat /etc/{group,passwd})... then enumerate /usr/bin/ /usr/local/bin/ and the XDG_{CACHE,CONFIG} dirs - some plaintext credentials are usually here.
The $HOME/.{aws,docker,claude,ssh}
Basically the attacker just needs to know their way around your OS. The script enumerating these directories is the 0777 script they were able to write from inside the root access container.
Comment by tracker1 10 hours ago
Go and Rust tend to lend themselves to these more restrictive environments a bit better than other options.
Comment by Onavo 11 hours ago
Comment by mxxc 1 hour ago
Comment by jlengrand 32 minutes ago
Comment by broken_broken_ 41 minutes ago
Assume that the malware has replaced system commands, and possibly used a kernel vulnerability to lie to you and hide its presence, so do not do anything in the infected system directly?
Comment by dewey 36 minutes ago
Comment by croemer 8 hours ago
"CVE-2025-66478 - Next.js/Puppeteer RCE)"
Comment by loloquwowndueo 8 hours ago
Comment by themafia 6 hours ago
Comment by croemer 2 hours ago
Comment by grekowalski 11 hours ago
Comment by tgsovlerkhgsel 1 hour ago
If you have decent network or process level monitoring, you're likely to find it, while you might not realize the vulnerable software itself or some stealthier, more dangerous malware that might exploit it.
Comment by qingcharles 11 hours ago
Comment by dilippkumar 1 hour ago
But I am interested in the monero aspect here.
Should I treat this as some datapoint on monero’s security having held up well so far?
Comment by Hendrikto 55 minutes ago
If you used the server to mine Bitcoin, you would make approximately zero (0) profit, even if somebody else pays for the server.
But also yes, Monero has technically held up very well.
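For intuition, a back-of-envelope sketch (all figures are rough, assumed orders of magnitude, not measurements from the thread):

```python
# Rough assumptions: a server CPU manages ~20 MH/s of SHA-256d, while the
# Bitcoin network hashrate is on the order of 600 EH/s.
cpu_hashrate = 20e6          # H/s, assumed
network_hashrate = 600e18    # H/s, assumed
blocks_per_day = 144         # one block every ~10 minutes
block_reward_btc = 3.125     # post-2024-halving subsidy

btc_per_day = cpu_hashrate / network_hashrate * blocks_per_day * block_reward_btc
print(f"{btc_per_day:.2e} BTC/day")  # on the order of 1e-11 BTC/day
```

Even with generous assumptions, the expected reward is effectively zero, which is why CPU botnets mine Monero (RandomX is deliberately CPU-friendly) instead of Bitcoin.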
Comment by kachapopopow 2 hours ago
Comment by Aachen 1 hour ago
Search engines try to fight slop results with collateral damage mostly in small or even personal websites. Restaurants are happy to be on one platform only: Google Maps. Who needs an expensive website if you're on there and someone posts your menu as one of the pictures? (Ideally an old version so the prices seem cheaper and you can't be pinned down for false advertising.)
Open source communities use Github, sometimes Gitlab or Codeberg, instead of setting up a Forgejo (I host a ton of things myself but notice that the community effect is real, and I also moved away from self-hosting a forge). The cherry on top is when projects use Discord chats as documentation and bug reporting "form". Privacy people use Signal en masse, while Matrix is still as niche as it was when I first heard of it.
The binaries referred to as open source just because they're downloadable can be found on huggingface; even the big players use that exclusively afaik. Some smaller projects may be hosted on Github but I have yet to see a self-hosted one. Static websites go on (e.g. Github) Pages and back-ends are put on Firebase. Instead of a NAS, individuals as well as small businesses use a storage service like Onedrive or Icloud. Some more advanced users may put their files on Backblaze B2.
Those who newly dip their toes in self-hosting increasingly use a relay server to reach their own network, not because they need it but to avoid dealing with port forwarding or setting up a way to privately reach internal services. Security cameras are another good example of this: you used to install one, set a password, and forward the port so you could watch it outside the home. Nowadays people expect that "it just works" on their phone when they plug it in, no matter where they are. That this relies on Google/Amazon, and that they can watch all the feeds, is accepted for the convenience.
And that's all not even mentioning the death of the web: people who don't use websites anymore the way they were meant (as hyperlinked pages) but work with an LLM as their one-stop shop
Not that the increased convenience, usability, and thus universal accessibility of e.g. storage and private chats is necessarily bad, but the trend doesn't seem to me to be going the way you seem to think it is.
I can't think of any example of something that became increasingly often self-hosted instead of less across the last 1, 5, or 10 years
If you see a glimmer of hope for the distributed internet, do share because I feel increasingly as the last person among my friends who hosts their own stuff
Comment by kachapopopow 40 minutes ago
I've been on the receiving end of attacks that were reported to be more than 10 Tbps. I couldn't imagine how I would deal with that if I didn't have a 3rd party providing such protection - it would require millions of dollars a year just in transit contracts.
There is an increasing amount of software that attempts to reverse this, but as someone from https://thingino.com/ said: open source is riddled with developers who starved to death (nobody donates to open source projects).
Comment by marwamc 10 hours ago
From the root container, depending on volume mounts and capabilities granted to the container, they would enumerate the host directories and find the names of common scripts and then overwrite one such script. Or to be even sneakier, they can append their malicious code to an existing script in the host filesystem. Now each time you run your script, their code piggybacks.
From the root container, depending on volume mounts and capabilities granted to the container, they would enumerate the host directories and find the names of common scripts and then overwrite one such script. Or to be even sneakier, they can append their malicious code to an existing script in the host filesystem. Now each time you run your script, their code piggybacks.
Comment by cobertos 6 hours ago
Deleting and remaking the container will blow away all state associated with it. So there isn't a whole lot to worry about after you do that.
Comment by jakelsaunders94 10 hours ago
Comment by bradley13 53 minutes ago
Comment by heavyset_go 11 hours ago
That said, do you have an image of the box or a container image? I'm curious about it.
Comment by jakelsaunders94 11 hours ago
I was lucky in that my DB backups were working, so all my persistence was backed up to S3. I think I could stand up another one in an hour.
Unfortunately I didn't keep an image no. I almost didn't have the foresight to investigate before yeeting the whole box into the sun!
Comment by muppetman 8 hours ago
Comment by cachius 1 hour ago
And isn’t it a design flaw if you can see all processes from inside a container? This could provide useful information for escaping it.
Comment by wnevets 11 hours ago
Comment by BLKNSLVR 10 hours ago
Re: the Internet.
Re: Peer-to-peer.
Re: Video streaming.
Re: AI.
Comment by lapetitejort 9 hours ago
Comment by BLKNSLVR 9 hours ago
Comment by nrhrjrjrjtntbt 11 hours ago
Comment by dylan604 11 hours ago
Comment by venturecruelty 11 hours ago
Comment by rendaw 57 minutes ago
Comment by CGamesPlay 5 hours ago
> Here’s the test. If /tmp/.XIN-unix/javae exists on my host, I’m fucked. If it doesn’t exist, then what I’m seeing is just Docker’s default behavior of showing container processes in the host’s ps output, but they’re actually isolated.
1. Files of running programs can be deleted while the program is running. If the program were trying to hide itself, it would have deleted /tmp/.XIN-unix/javae after it started. The nonexistence of the file is not a reliable source of information for confirming that the container was not escaped.
2. ps shows program-controlled command lines. Any program can change what gets displayed here, including the program name and arguments. If the program were trying to hide itself, it would change this to display `login -fp ubuntu` instead. This is not a reliable source of information for diagnosing problems.
It is good to verify the systemd units and crontab, and since this malware is so obvious, it probably isn't doing these two hiding methods, but information-stealing malware might not be detected by these methods alone.
Later, the article says "Write your own Dockerfiles" and gives one piece of useless advice (using USER root does not affect your container's security posture) and two pieces of good advice that don't have anything to do with writing your own Dockerfiles. "Write your own Dockerfiles" is not useful security advice.
Comment by 3np 4 hours ago
I actually think it is. It makes you more intimate with the application and how it runs, and can mitigate one particular supply-chain security vector.
Agreeing that the reasoning is confused but that particular advice is still good I think.
Comment by seymon 10 hours ago
And maybe updating container images with a mechanism similar to Renovate with "minimumReleaseAge=7 days" or something similar!?
Comment by elric 2 hours ago
Easier said than done, I know.
Podman makes it easier to be more secure by default than Docker. OpenShift does too, but that's probably taking things too far for a simple self hosted app.
Comment by movedx 10 hours ago
Then you need to run things with as little privilege as possible. Sadly, Docker and containers in general are an anti-pattern here because they're about convenience first, security second. So the OP should have run the containers as read-only with tight resource limits, and ideally IP restrictions on access if it's not a public service.
Another thing you can do is use Tailscale, or something like it, to keep things behind a zero-trust, encrypted access model. Not suitable for public services of course.
And a whole host of other things.
Comment by p0w3n3d 3 hours ago
$ sudo ufw default deny incoming
$ sudo ufw default allow outgoing
$ sudo ufw allow ssh
$ sudo ufw allow 80/tcp
$ sudo ufw allow 443/tcp
$ sudo ufw enable
As a user of iptables this order makes me anxious. I used to cut myself off from the server many times by first blocking, then adding exceptions. I can see that this is different here, as the last command commits the rules...
Comment by tgsovlerkhgsel 2 hours ago
Comment by PlqnK 1 hour ago
But if they do have a vulnerability and manage to escape the sandbox then they will be root on your host.
Running your processes as an unprivileged user inside your containers reduces the possibility of escaping the sandbox; running the containers themselves as an unprivileged user (rootless podman or docker, for example) reduces the attack surface when they do manage to escape the sandbox.
Comment by aborsy 3 hours ago
You have to define a firewall policy and attach it to the VM.
Comment by spoaceman7777 3 hours ago
Yikes. I would still recommend a server rebuild. That is _not_ a safe configuration in 2025, whatsoever. You are very likely to have a much better engineered persistent infection on that system.
Comment by microtonal 1 hour ago
The right thing to do is to roll out a new server (you have a declarative configuration right?), migrate pure data (or better, get it from the latest backup), remove the attacked machine off the internet to do a full audit. Both to learn about what compromises there are for the future and to inform users of the IoT platform if their data has been breached. In some countries, you are even required by law to report breaches. IANAL of course.
Comment by hughw 9 hours ago
Comment by minitech 11 hours ago
/tmp/.XIN-unix/javae &
rm /tmp/.XIN-unix/javae
This article’s LLM writing style is painful, and it’s full of misinformation (is Puppeteer even involved in the vulnerability?).
Comment by jakelsaunders94 11 hours ago
Comment by minitech 11 hours ago
Comment by sincerely 11 hours ago
Comment by croemer 8 hours ago
Comment by seafoamteal 11 hours ago
I'm glad you're up to writing more of your own posts, though! I'm right there with you that writing is difficult, and I've definitely got some posts on similar topics up on my site that are overly long and meandering and not quite good, but that's fine because eventually once I write enough they'll hopefully get better.
Here's hoping I'll read more from you soon!
Comment by jakelsaunders94 11 hours ago
I tried handwriting https://blog.jakesaunders.dev/schemaless-search-in-postgres/ but I thought it came off as rambling.
Maybe I'll have a go at redrafting this tomorrow in non LLM-ese.
Comment by jakelsaunders94 11 hours ago
Comment by 3np 10 hours ago
> IT NEVER ESCAPED.
You haven't confirmed this (at least from the contents of the article). You did some reasonable spot checks and confirmed/corrected your understanding of the setup. I'd agree that it looks likely that it did not escape or gain persistence on your host but in no way have you actually verified this. If it were me I'd still wipe the host and set up everything from scratch again[0].
Also your part about the container user not being root is still misinformed and/or misleading. The user inside the container, the container runtime user, and whether container is privileged are three different things that are being talked about as one.
Also, see my comment on firewall: https://news.ycombinator.com/item?id=46306974
[0]: Not necessarily drop-everything-you-do urgent, but next time you get some downtime, do it calmly. Recovering like this is a good exercise anyway, to make sure you can if you get a more critical situation in the future where you really need to. It will also be less time and work than actually confirming that the host is uncontaminated.
Comment by jakelsaunders94 9 hours ago
I'm going to sit down and rewrite the article and take a further look at the container tomorrow.
Comment by microtonal 1 hour ago
At any rate, this happening to you sucks! Hugs from a fellow HN user; I know that things like this can suck up a lot of time and energy. It’s courageous to write about such an incident, and I think it’s useful to a lot of other people too, kudos!
Comment by 3np 9 hours ago
(And good to hear you're leaving the LLMs out of the writing next time <3)
Comment by Eduard 10 hours ago
Comment by LelouBil 6 hours ago
At least that's what I think happened because I never found out exactly how it was compromised.
The miner was running as root and its file was even hidden when I ran ls! So I didn't understand what was happening; it was only after restarting my VPS with a rescue image and mounting the root filesystem that I found the file I was seeing in the process list did indeed exist.
Comment by elif 7 hours ago
Even if you are an owasp member who reads daily vulnerability reports, it's so easy to think you are unaffected.
Comment by xp84 9 hours ago
Comment by hoppp 10 hours ago
Comment by christophilus 8 hours ago
Comment by egberts1 8 hours ago
It has since been fixed: Lesson learned.
Comment by exceptione 9 hours ago
Comment by doodlesdev 9 hours ago
Comment by exceptione 9 hours ago
But podman has also great integration with systemd. With that you could use a socket activated systemd unit, and stick the socket inside the container, instead of giving the container any network at all. And even if you want networking in the container, the podman folks developed slirp4netns, which is user space networking, and now something even better: passt/pasta.
Comment by crimsonnoodle58 6 hours ago
Also rootless docker does not bypass ufw like rootful docker does.
Comment by mos87 20 minutes ago
js scripts running on frameworks running inside containers
PS so I see the host ended up staying uncompromised
Comment by ryanto 10 hours ago
I know we aren't supposed to rely on containers as a security boundary, but it sure is great hearing stories like this where the hack doesn't escape the container. The more obstacles the better I guess.
Comment by DANmode 8 hours ago
If the human involved can’t escalate, the hack can’t.
Comment by tolerance 11 hours ago
Comment by jakelsaunders94 11 hours ago
Comment by dylan604 11 hours ago
Comment by venturecruelty 10 hours ago
Comment by qingcharles 11 hours ago
Comment by ianschmitz 11 hours ago
Comment by angulardragon03 10 hours ago
This became enough of a hassle that I stopped using them.
Comment by treesknees 10 hours ago
Comment by qingcharles 8 hours ago
Comment by jakelsaunders94 10 hours ago
But yeah it is massively overspecced. Makes me feel cool load testing my go backend at 8000 requests per second though!
Comment by spiderfarmer 10 hours ago
Comment by tgtweak 11 hours ago
Comment by pigbearpig 10 hours ago
Could also prevent something from exfiltrating sensitive data.
Comment by gppmad 1 hour ago
Comment by meisel 11 hours ago
Comment by jsheard 11 hours ago
Comment by tgtweak 10 hours ago
Comment by pixl97 10 hours ago
Comment by asdff 8 hours ago
Comment by beeflet 3 hours ago
Comment by rnhmjoj 10 hours ago
> RandomX utilizes a virtual machine that executes programs in a special instruction set that consists of integer math, floating point math and branches.
> These programs can be translated into the CPU's native machine code on the fly (example: program.asm).
> At the end, the outputs of the executed programs are consolidated into a 256-bit result using a cryptographic hashing function (Blake2b).
I doubt that anyone has managed to create an ASIC that does this more efficiently and cost-effectively than a basic CPU. So, no, probably no one is mining Monero with an ASIC.
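For reference, the final consolidation step the quote describes is plain Blake2b with a 256-bit digest, which Python's standard library exposes directly. The input bytes below are just a placeholder, not real RandomX program output:

```python
import hashlib

# Placeholder for the consolidated outputs of the executed RandomX
# programs; in the real algorithm this is VM register state, not text.
program_outputs = b"placeholder program outputs"

# digest_size=32 bytes gives the 256-bit result the quote mentions.
digest = hashlib.blake2b(program_outputs, digest_size=32).digest()
print(len(digest) * 8)  # 256
```

The ASIC-resistance comes from the VM programs before this step, not from the hash itself; Blake2b is just the cheap final mix.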
Comment by heavyset_go 11 hours ago
Comment by edm0nd 11 hours ago
If they can enslave 100s or even 1000s of machines mining XMR for them, it's easy money if you set aside the legality of it.
Comment by minitech 11 hours ago
Comment by Bender 11 hours ago
Comment by justinsaccount 11 hours ago
Comment by zamadatix 11 hours ago
Comment by heavyset_go 11 hours ago
Comment by zamadatix 9 hours ago
E.g. on my systemd-nspawn setup with --private-users=pick (which enables user namespacing) I created a container and gave it a bind mount. From inside the container, files in the bind mount created by the container namespace's UID 0 appear owned by UID 0, but from outside the container the same files look owned by UID 100000.

Inverted: files owned by the "real" UID 0 on the host look owned by 0 to the host, but owned by 65534 (i.e. "nobody") from the container's perspective. Breaking out of the container shouldn't inherently change the "actual" user of the process from 100000 to 0, any more than breaking out as a non-0 UID would - same as breaking out of any of the other namespaces doesn't make "UID 0" in the container turn into "UID 0" on the host.
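The UID shift described above is just the arithmetic the kernel applies via /proc/&lt;pid&gt;/uid_map. A small sketch of that lookup (the helper function is hypothetical; the 100000 offset matches the example in the comment):

```python
def host_uid(container_uid, uid_map):
    """Translate a container UID to a host UID using uid_map entries of
    the form (container_start, host_start, length), the same triples the
    kernel exposes in /proc/<pid>/uid_map. IDs outside every mapping
    surface as 65534 ("nobody") - which is why host root looks like
    "nobody" from inside the container in the inverse direction."""
    for c_start, h_start, length in uid_map:
        if c_start <= container_uid < c_start + length:
            return h_start + (container_uid - c_start)
    return 65534

# A --private-users=pick style mapping: container 0..65535 -> host 100000..165535
uid_map = [(0, 100000, 65536)]
print(host_uid(0, uid_map))      # container root -> host UID 100000
print(host_uid(70000, uid_map))  # outside the range -> 65534 ("nobody")
```

So even a root process that escapes the container is, from the kernel's point of view, still UID 100000 on the host.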
Comment by heavyset_go 9 hours ago
They also expose kernel interfaces that, if exploited, can lead to the same.
In the end, namespaces are just for partitioning resources; using them as sandboxes can work, but they aren't really sandboxes.
Comment by eyberg 7 hours ago
b) if you want to limit your hosting environment to only the language/program you expect to run, you should provision with unikernels, which enforce it
Comment by mikaelmello 11 hours ago
Comment by dinkleberg 11 hours ago
Comment by venturecruelty 10 hours ago
>Edit: A few people on HN have pointed out that this article sounds a little LLM generated. That’s because it’s largely a transcript of me panicking and talking to Claude. Sorry if it reads poorly, the incident really happened though!
For what it's worth, this is not an excuse, and I still don't appreciate being fed undisclosed slop. I'm not even reading it.
Comment by Computer0 10 hours ago
Comment by movedx 10 hours ago
Comment by doublerabbit 9 hours ago
Next year is the 5th year of my current personal project. Ten to go.
Comment by OutOfHere 10 hours ago
Comment by guerrilla 12 hours ago
Comment by kopirgan 7 hours ago
Comment by scottyeager 4 hours ago
An inbound firewall can only help protect services that aren't meant to be reachable on the public internet. This service was exposed to the internet intentionally so a firewall wouldn't have helped avoid the breach.
The lesson to me is that keeping up with security updates helps prevent publicly exposed services from getting hacked.
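On Debian/Ubuntu hosts, one low-effort way to keep up is unattended-upgrades; a minimal sketch (the file path and option names are the standard ones, but treat the exact set here as illustrative):

```ini
# /etc/apt/apt.conf.d/20auto-upgrades -- enables the daily apt timer
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Note this only covers host packages. Containerized services like the Umami instance in the article need their own update path - pulling and redeploying patched image tags - which is exactly the gap that bit here.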
Comment by kopirgan 4 hours ago
Comment by codegeek 11 hours ago
Is there ever a reason someone should run a docker container as root ?
Comment by d4mi3n 11 hours ago
Comment by nodesocket 10 hours ago
Comment by iLoveOncall 11 hours ago
Unless run as root, this could return "file not found" because of missing permissions, not just because the file doesn't actually exist, right?
> “I don’t use X” doesn’t mean your dependencies don’t use X
That is beyond obvious, and I don't understand how anyone could read about a CVE in a widely used technology and still feel safe when they run dozens of containers on their server. I run Docker containers, and as soon as I read the article I went and checked, because I have no idea what technology most of them are built with.
> No more Umami. I’m salty. The CVE was disclosed, they patched it, but I’m not running Next.js-based analytics anymore.
Nonsensical reaction.
Comment by qingcharles 11 hours ago
Nothing is immune. What analytics are you going to run? If you roll your own you'll probably leave a hole somewhere.
Comment by Hackbraten 10 hours ago
But kudos for the word play!
Comment by whalesalad 12 hours ago
Comment by mrkeen 12 hours ago
Comment by whalesalad 11 hours ago
Comment by venturecruelty 11 hours ago
Comment by zrn900 7 hours ago
Comment by j45 11 hours ago
Comment by palata 11 hours ago
Comment by j45 7 hours ago
Backend access trivial with Tailscale, etc.
Comment by palata 10 minutes ago
Cloudflare can certainly do more (e.g. protect against DoS and hide your personal IP if your server is at home).
Comment by sergsoares 10 hours ago
But that alone would not solve the problem, since this was an RCE over HTTP; that is why edge proxy providers like Cloudflare [0] and Fastly [1] proactively added protections to their WAF products.
Even Cloudflare had an outage trying to protect its customers [2].
- [0] https://blog.cloudflare.com/waf-rules-react-vulnerability/
- [1] https://www.fastly.com/blog/fastlys-proactive-protection-cri...
- [2] https://blog.cloudflare.com/5-december-2025-outage/
Comment by cortesoft 10 hours ago
Comment by j45 7 hours ago
Backend access trivial with Tailscale, etc.
Public IP never needs to be used. You can just leave it an internal IP if you really want.
Comment by cortesoft 7 hours ago
Comment by mrkeen 11 hours ago
Comment by j45 7 hours ago
DNS is no issue. External DNS can be handled by Cloudflare and their WAF. Their DNS service can obscure your public IP, or, ideally, you don't need to expose it at all with a Cloudflare tunnel installed directly on the server. This is free.
Backend access trivial with Tailscale, etc.
Public IP doesn't always need to be used. You can just leave it an internal IP if you really want.
Comment by miramba 11 hours ago
Comment by m00x 11 hours ago
I use them for self-hosting.
Comment by doublerabbit 9 hours ago
Comment by j45 7 hours ago
If you are using Cloudflare's DNS, they can hide your IP on the DNS record, but the server itself still has to be locked down; some folks find ways to tighten that up too.
If you're using a bare metal server it can be broken up.
It's fair that it's a 3rd party's castle. At the same time until you know how to run and secure a server, some services are not a bad idea.
Some people run pangolin or nginx proxy manager on a cheap vps if it suits their use case which will securely connect to the server.
We are lucky that many of these ideas have already been discovered and hardened by people before us.
Even when I had bare metal servers connected to the internet, I would put a firewall like pfsense or something in between.
Comment by palata 4 minutes ago
If I run vulnerable software, it will still be vulnerable through a Cloudflare tunnel, right?
Genuinely interested, I'm always scared to expose things to the internet :-).
Comment by Carrok 11 hours ago
Comment by cortesoft 10 hours ago
Comment by venturecruelty 10 hours ago
Comment by sh3rl0ck 11 hours ago
Comment by j45 7 hours ago
Free way: sign up for a Cloudflare account. Use the DNS on Cloudflare; they will put their public IP in front of your www record.
Level 2 is installing the Cloudflare tunnel software on your server, so you never need to expose the public IP.
Backend access securely? Install Tailscale or headscale.
This should cover most web hosting scenarios. If there's additional ports or services, tools like nginx proxy manager (web based) or others can help. Some people put them on a dedicated VPS as a jump machine.
This way the public IP is almost optional and can be locked down if needed. This is all before even running a firewall on it.
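The tunnel side of this setup can be sketched in cloudflared's config file; the tunnel ID, hostname, and local port below are placeholders, not values from the thread:

```yaml
# ~/.cloudflared/config.yml -- placeholders throughout
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json

ingress:
  - hostname: www.example.com
    service: http://localhost:8080
  - service: http_status:404   # catch-all: reject anything unmatched
```

Since cloudflared makes only outbound connections to Cloudflare's edge, no inbound port needs to be open at all. Keep in mind this hides the origin but does not patch it: a vulnerable app behind the tunnel is still vulnerable to whatever the tunnel forwards.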
Comment by iLoveOncall 11 hours ago
Comment by j45 7 hours ago
Comment by procaryote 11 hours ago
Comment by j45 7 hours ago
Keeping the IP secret seems like a misnomer.
It's often possible to lock down the public IP entirely so it accepts no connections except those initiated from the inside (like the Cloudflare tunnel or something else reaching out).
Something like a Cloudflare+tunnel on one side, tailscale or something to get into it on the other.
Folks other than me have written decent tutorials that have been helpful.