Systemd v259
Posted by voxadam 12 hours ago
Comments
Comment by sovietmudkipz 10 hours ago
In this way I’m able to set up AWS EC2 instances or DigitalOcean droplets, and a bunch of game servers spin up and report their existence back to a backend game-services API. So far it’s working, but this part of my project is still in development.
I used to target containerizing my apps, which adds complexity, but in AWS I often have to care about VMs as resources anyway (e.g. AWS GameLift requires me to spin up VMs, same with AWS EKS). I’m still going back and forth between containerizing and using systemd; having a local stack easily spun up via docker compose is nice, but with systemd what I write locally is basically what runs in the prod environment, and there’s less waiting for container builds and such.
I share all of this in case there’s a gray beard wizard out there who can offer opinions. I have a tendency to explore and research (it’s fuuun!) so I’m not sure if I’m on a “this is cool and a great idea” path or on a “nobody does this because <reasons>” path.
Comment by miladyincontrol 52 minutes ago
Why not both? Systemd allows you to make containers via nspawn, which are defined just about the exact same as you do a regular systemd service. Best of both worlds.
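A minimal sketch of what that looks like, assuming a container filesystem already exists under /var/lib/machines/game (the machine name is illustrative):

```ini
# /etc/systemd/nspawn/game.nspawn — per-machine settings, in the same
# INI style as a regular unit file
[Exec]
Boot=yes

[Network]
VirtualEthernet=yes
```

Then `machinectl start game` (or `systemctl enable --now systemd-nspawn@game.service`) runs it like any other service.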
Comment by dijit 10 hours ago
You provide us a docker image, and we unpack it, turn it into a VM image and run as many instances as you want side-by-side with CPU affinity and NUMA awareness. Obviating the docker network stack for latency/throughput reasons - since you can
They had tried nomad, agones and raw k8s before that.
Comment by sovietmudkipz 9 hours ago
As a hobbyist part of me wants the VM abstracted completely (which may not be realistic). I want to say “here’s my game server process, it needs this much cpu/mem/network per unit, and I need 100 processes” and not really care about the underlying VM(s), at least until later. The closest thing I’ve found to this is AWS Fargate.
Also holy smokes if you were a part of the team that architected this solution I’d love to pick your brain.
Comment by maccard 8 hours ago
At a previous job, we used azure container apps - it’s what you _want_ fargate to be. AIUI, Google Cloud Run is pretty much the same deal but I’ve no experience with it. I’ve considered deploying them as lambdas in the past depending on session length too…
Comment by gcr 2 hours ago
Comment by dijit 8 hours ago
By making it an “us” problem to run the infrastructure at a good cost, and be cheaper than AWS for us to run, meaning we could take no profit on cloud VMs, making us cost competitive as hell.
Comment by madjam002 9 hours ago
Comment by frantathefranta 8 hours ago
Comment by throwaway091025 8 hours ago
Comment by reactordev 8 hours ago
I’ve also done Microsoft Orleans clusters and still recommend the single-PID, multiple-containers/processes approach. If you can avoid Orleans and Kubernetes and all that, so much the better. It just adds complexity to this setup.
Comment by esseph 9 hours ago
Comment by sovietmudkipz 9 hours ago
Comment by asmor 1 hour ago
Comment by esseph 8 hours ago
Still, I can see the draw for independent devs to use docker compose. For teams and orgs, though, it makes sense to use podman and systemd for the smaller stuff or dev, and then literally export the config as Kubernetes YAML.
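For reference, the podman-systemd route alluded to here (Quadlet) looks roughly like this; image name and port are illustrative:

```ini
# ~/.config/containers/systemd/game.container — podman's systemd
# generator turns this into a normal .service unit on daemon-reload
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

The same tooling (`podman generate kube` / Quadlet `.kube` units) covers the export-to-Kubernetes-YAML step.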
Comment by rbjorklin 9 hours ago
Comment by sovietmudkipz 9 hours ago
This all probably speaks to my odd prioritization: I want to understand and use. I’ve had to step back and realize part of the fun I have in pursuing these projects is the research.
Comment by baggy_trough 10 hours ago
Comment by sovietmudkipz 9 hours ago
Comment by nszceta 8 hours ago
https://adamgradzki.com/lightweight-development-sandboxes-wi...
Comment by panick21_ 9 hours ago
Comment by open-paren 9 hours ago
https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
Comment by sovietmudkipz 9 hours ago
Comment by bonzini 9 hours ago
(In fact, nothing prevents anyone from extracting and repackaging the sysvinit generator, now that I think of it).
Comment by colechristensen 9 hours ago
The closer you get to 100% resource utilization the more regular your workload has to become. If you can queue requests and latency isn't a problem, no problem, but then you have a batch process and not a live one (obviously not for games).
The reason is that live work doesn’t come in regular beats; it comes in clusters that scale in a fractal way. If your long-term mean is one request per second, what actually happens is you get five requests in one second, three seconds with one request each, one second with two requests, and five seconds with 0 requests (you get my point). “Fractal burstiness.”
You have to have free resources to handle the spikes at all scales.
Also, very many systems suffer from the processing time for a single request increasing as overall system load increases. “Queuing latency blowup.”
So what happens? You get a spike, get behind, and never ever catch up.
https://en.wikipedia.org/wiki/Network_congestion#Congestive_...
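A quick way to see the blowup is to simulate a single-server queue with Poisson (bursty) arrivals via the Lindley recursion; this is an illustrative sketch, not anyone’s production setup:

```python
import random

def mean_wait(utilization, n=200_000, seed=1):
    """Mean wait via the Lindley recursion: W_{k+1} = max(0, W_k + S_k - A_k)."""
    rng = random.Random(seed)
    service = 1.0                                   # mean service time
    wait = total = 0.0
    for _ in range(n):
        total += wait
        s = rng.expovariate(1.0 / service)          # exponential service time
        a = rng.expovariate(utilization / service)  # Poisson arrivals
        wait = max(0.0, wait + s - a)
    return total / n

for rho in (0.5, 0.75, 0.9, 0.95):
    print(f"load {rho:.0%}: mean wait ~ {mean_wait(rho):.1f}x service time")
```

Mean wait stays around one service time at 50% load but grows roughly like ρ/(1−ρ), so it explodes as you approach 100%.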
Comment by sovietmudkipz 8 hours ago
Comment by mpyne 2 hours ago
The cycle time impact of variability of a single-server/single-queue system at 95% load is nearly 25x the impact on the same system at 75% load, and there are similar measures for other process queues.
As the other comment notes, you should really work from an assumption that 80% is max loading, just as you'd never aim to have a swap file or swap partition of exactly the amount of memory overcommit you expect.
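The nonlinearity behind both comments is the classic single-server delay factor ρ/(1−ρ); Kingman’s approximation scales this same factor by a variability term, so the exact multiplier depends on your assumptions:

```python
# Mean queueing delay for an M/M/1 queue, in units of service time:
# W_q = rho / (1 - rho). Watch it blow up as load approaches 100%.
for rho in (0.75, 0.80, 0.90, 0.95, 0.99):
    print(f"load {rho:.0%}: delay factor {rho / (1 - rho):5.1f}")
```

At 95% load the factor is 19, versus 3 at 75%, which is part of why “treat ~80% as max loading” is common advice.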
Comment by rcxdude 1 hour ago
Comment by colechristensen 7 hours ago
The engineering time, the risks of decreased performance, and the fragility of pushing the limit at some point become not worth the benefits of reaching some higher utilization metric. That optimum trade-off point is somewhere, even if it’s not where you currently are.
Comment by anotherhue 11 hours ago
systemd-networkd now implements a resolve hook for its internal DHCP
server, so that the hostnames tracked in DHCP leases can be resolved
locally. This is now enabled by default for the DHCP server running
on the host side of local systemd-nspawn or systemd-vmspawn networks.
Hooray.local
Comment by nix0n 10 hours ago
All the services you forgot you were running for ten whole years, will fail to launch someday soon.
Comment by noosphr 9 hours ago
Comment by sidewndr46 8 hours ago
Comment by bonzini 6 hours ago
Also, it's entirely contained within a program that creates systemd .service files. It's super easy to extract into a separate project. I bet someone will do it very quickly if there's a need.
Comment by sebazzz 10 hours ago
However, it is not easy to figure out which of those scripts are actually SysVInit scripts and which simply wrap systemd.
Comment by bonzini 9 hours ago
Comment by nish__ 10 hours ago
Comment by bonzini 10 hours ago
Comment by nish__ 10 hours ago
Comment by nottorp 9 hours ago
Because last time I wrote systemd units it looked like a job.
Also, way over complex for anything but a multi user multi service server. The kind you're paid to maintain.
Comment by tapoxi 9 hours ago
Why wouldn't you want unit files instead of much larger init shell scripts which duplicate logic across every service?
It also enabled a ton of event driven actions which laptops/desktops/embedded devices use.
Comment by throw0101a 18 minutes ago
The futzing around with resolv.conf(5) for one.
I take to setting the immutable flag on the file, given all the shenanigans that "dynamic" elements of desktop-y system software do with it, when I want the thing to never change after I install the server. (If I do need to change something, which is almost never, I'll remove/re-add the flag via Ansible's file: attr.)
Of course, nowadays "init system" also means "network settings" for some reason, and I often have to fight between systemd-networkd and NetworkManager on some distros. I was very happy with interfaces(5), also because once I set the thing up on install on a server, I hardly have to change it, and the dynamic-y stuff is an anti-feature.
SystemD as init replacement is "fine"; SystemD as kitchen-sink-of-the-server-with-everything-tightly-coupled can get annoying.
Comment by bonzini 9 hours ago
Indeed, that criticism makes no sense at all.
> It also enabled a ton of event driven actions which laptops/desktops/embedded devices use.
Don't forget VMs. Even in server space, they use hotplug/hotunplug as much as traditional desktops.
Comment by throw0101a 6 minutes ago
> Don't forget VMs. Even in server space, they use hotplug/hotunplug as much as traditional desktops.
I was doing hot plugging of hardware two+ decades ago when I still administered Solaris machines. IBM mainframes have been doing it since forever.
Even on Linux udevd existed before systemd did.
Comment by yjftsjthsd-h 7 hours ago
The server and desktop have a lot more disk+RAM+CPU than the embedded device, to the point that running systemd on the low end of "just enough to run Linux" would be a pain.
Outside embedded, though, it probably works uniformly enough.
Comment by bigstrat2003 8 hours ago
Comment by 0x457 8 hours ago
TIL. Didn't know I can get paid to maintain my PC because I have a background service that does not run as my admin user.
Comment by nailer 8 hours ago
Fascinating. Last time I wrote a .service file I thought how much easier it was than a SysV init script.
Comment by jauntywundrkind 7 hours ago
[Service]
Type=simple
ExecStart=/usr/bin/my-service
If this is a hard job for you, well, maybe get another career, mate. Especially now with LLMs.

The thing to me is that services sometimes do have cause to be more complex, or more secure, or to be better managed in various ways. Over time we might find (for ex) that, oh, actually waiting for this other service to be up and available first helps.
And if you went to run a service in the past, you never knew what you were going to get. Each service that came with (for ex) Debian was its own thing. Many forked off from one template or another, but often forked long ago, with their own idiosyncratic threads woven in over time. Complexity emerged, it wasn't contained, and it certainly wasn't normalized across services: there would be dozens of services, each one requiring careful staring at an init script to understand, with slightly different operational characteristics and nuance.
I find the complaints about systemd being complex almost always look at the problem in isolation. "I just want to run my (3 line) service, but I don't want to have to learn how systemd works & manages units: this is complex!". But it ignores the sprawl of what's implied: that everyone else was out there doing whatever, and that you stumble in blind to all manners of bespoke homegrown complexity.
Systemd offers a gradient of complexity, that begins with extremely simple (but still offering impressive management and oversight), and that lets services wade into more complexity as they need. I think it is absolutely humbling and to some people an affront to see man pages with so so so many options, that it's natural to say: I don't need this, this is complex. But given how easy it is, how much great ability to see the state of the world we get that SysV never offered, given the standard shared culture tools and means, and given the divergent evolutionary chaos of everyone muddling through init scripts themselves, systemd feels vastly more contained, learnable, useful, concise, and less complex than the nightmares of old. And it has simple starting points, as shown at the top, that you can add onto and embellish onwards as you find cause to move further along the gradient of complexity, and you can do so in a simple way.
It's also incredibly awesome how many amazing tools for limiting process access, for sandboxing and securing services systemd has. The security wins can be enormous.
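As an illustration of that gradient, the three-line unit above can be hardened incrementally with stock directives (service name and binary are placeholders):

```ini
[Unit]
Description=My service
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/my-service
# Sandboxing: each line below tightens what the process can touch.
DynamicUser=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes
RestrictAddressFamilies=AF_INET AF_INET6

[Install]
WantedBy=multi-user.target
```

`systemd-analyze security my-service` will score the unit and suggest which further directives to add.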
> Because last time I wrote systemd units it looked like a job
Last, an LLM will be able to help you with systemd, since it is common knowledge with common practice. If you really dislike having to learn anything.
Comment by ewoodrich 5 hours ago
Comment by nottorp 7 hours ago
Comment by A4ET8a8uTh0_v2 11 hours ago
Comment by MarkusWandel 7 hours ago
Probably no biggie to google the necessary copypasta to launch stuff from .service files instead. Which, being custom, won't have their timeout set back to "infinity" with every update. Unlike the existing rc.local wrapper service. Which, having an infinity timeout, and sometimes deciding that whatever was launched by rc.local can't be stopped, can cause shutdown hangs.
Comment by wpollock 9 hours ago
> Required minimum versions of following components are planned to be raised in v260:
* Linux kernel >= 5.10 (recommended >= 5.14),
Don't these two statements contradict each other?
Comment by blucaz 8 hours ago
Comment by throw0101d 11 hours ago
* https://en.wikipedia.org/wiki/Jamie_Zawinski#Zawinski's_Law
:)
Comment by Nextgrid 9 hours ago
Make an `smtp.socket`, which calls `smtp.service`, which receives the mail and prints it on standard output, which goes to a custom journald namespace (thanks `LogNamespace=mail` in the unit) so you can read your mail with `journalctl --namespace=mail`.
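Roughly, assuming a template unit and using cat as a stand-in receiver (a real receiver would have to actually speak SMTP):

```ini
# smtp.socket — one smtp@.service instance is spawned per connection
[Socket]
ListenStream=25
Accept=yes

[Install]
WantedBy=sockets.target

# smtp@.service — stdin is the connection; output lands in the
# "mail" journald namespace
[Service]
StandardInput=socket
StandardOutput=journal
ExecStart=/usr/bin/cat
LogNamespace=mail
```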
Comment by snvzz 4 hours ago
Breaking systemd was a thorn in the side of distributions trying to use musl.
Comment by vaxman 9 hours ago
v259? [cue https://youtu.be/lHomCiPFknY]
Comment by Mikhail_K 11 hours ago
Comment by orangeboats 11 hours ago
Fine, we get it, you don't like him. Or you don't like systemd. Whichever it is, comments like yours often provide zero substance to the discussion.
Comment by nicolaslem 10 hours ago
Comment by Klonoar 2 hours ago
Comment by sam_lowry_ 10 hours ago
Comment by beanjuiceII 10 hours ago
Comment by McDyver 10 hours ago
Otherwise, at some point, one of the 10000 [0] won't know there are alternatives and different ways of doing things.
Comment by fleroviumna 10 hours ago
Comment by nottorp 9 hours ago