Show HN: Holos – QEMU/KVM with a compose-style YAML, GPUs and health checks
Posted by zeroecco 12 hours ago
I got tired of libvirt XML and Vagrant's Ruby/reload dance for single-host VM stacks, so I built a compose-style runtime directly on QEMU/KVM.
What's there: GPU passthrough as a first-class primitive (VFIO, OVMF, per-instance EFI vars), healthchecks that gate depends_on over SSH, socket-multicast L2 between VMs with no root and no bridge config, cloud-init wired through the YAML, Dockerfile support for provisioning.
What it's not: Kubernetes. No clustering, no live migration, no control plane. Single host. Prototype, but I'm running it on real hardware. Curious what breaks for people.
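For context, a manifest in the style the post describes might look something like this. All field names here are my guess at a plausible schema, not Holos's actual format:

```yaml
# Hypothetical Holos-style manifest -- field names are illustrative,
# not taken from the project's real schema.
instances:
  db:
    image: ubuntu-24.04.qcow2
    cloud_init:
      user_data: |
        #cloud-config
        packages: [postgresql]
    healthcheck:
      ssh: "pg_isready -q"     # dependents are gated on this succeeding
  app:
    depends_on: [db]           # waits for db's healthcheck to pass
    gpu:
      vfio: "0000:01:00.0"     # host PCI address passed through via VFIO
      firmware: ovmf           # per-instance EFI vars
networks:
  lan:
    mode: socket-multicast     # L2 between VMs, no root, no bridge config
```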
Comments
Comment by imiric 11 hours ago
I built something similar recently on top of Incus via Pulumi. I also wanted to avoid libvirt's mountain of XML, and Incus is essentially a lightweight and friendlier interface to QEMU, with some nice QoL features. I'm quite happy with it, though the manifest format is not as fleshed out as what you have here.
What's nice about Pulumi is that I can use the Incus Terraform provider from a number of languages saner than HCL. I went with Python, since I also wanted to expose a unified approach to provisioning, which Pyinfra handles well. This allows me to keep the manifest simple, while having the flexibility to expose any underlying resource. I think it's a solid approach, though I still want to polish it a bit before making a public release.
Comment by imiric 1 hour ago
I took a slightly different approach in that I don't want to use YAML as the authoritative source. Many projects abuse it, and end up creating a DSL on top of it with all sorts of hacks to achieve the flexibility of a programming language. Pulumi and Pyinfra already provide user-friendly primitives and idempotent(ish) APIs that work much better than YAML. I simply want to expose some (opinionated) building blocks to make them easy to use, and allow users to customize them and add their own as needed. E.g. I definitely don't want to write any shell scripts inside YAML. :)
BTW, Pulumi already supports YAML[1], which can be used with any provider. But to me it's too verbose and generic, and of course, it lacks the provisioning primitives.
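For reference, a Pulumi YAML program looks roughly like the fragment below. The `incus:Instance` type token is an assumption based on how bridged Terraform providers are usually named, not something I've verified:

```yaml
# Pulumi.yaml -- YAML runtime sketch; the Incus resource type token
# and property names are assumed, not verified.
name: vm-stack
runtime: yaml
resources:
  db:
    type: incus:Instance
    properties:
      name: db
      image: ubuntu/24.04
      type: virtual-machine
```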
Comment by d3Xt3r 10 hours ago
So it would be nice if holos could replicate that docker/incus CLI functionality, like say "holos run -d --name db ubuntu:noble bash -c blah".
Comment by imiric 1 hour ago
That's true. But I didn't want to reinvent what Incus or any hypervisor abstraction does. I simply wanted to add some sugar on top that allows me to easily declare infra using small abstractions, and to tie in the provisioning aspect along the way. I still use Incus directly, and can benefit from their work, as you say. State is also managed by Pulumi, so really, there are 3 places for it to exist. There are some challenges with this, of course, but I think the tradeoff is worth it.
Good luck with your project, I'll be keeping an eye on it. I'll probably make a Show HN post when I release mine. Cheers!
Comment by ranger_danger 8 hours ago
Originally if you wanted e.g. a Sound Blaster 16 device you could use -soundhw sb16. Then it changed to -device sb16 (with a separate -audiodev backend). And now it's -audio driver=none,model=sb16; this has been happening with several different classes of options over the years and I haven't found any good documentation of all the differences in one place. If anyone knows of one, I'd appreciate it.
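For what it's worth, on recent QEMU the split between host backend and guest device looks like this. Treat the exact version boundaries as approximate; check `qemu-system-x86_64 -audio help` on your build:

```shell
# Old style (long deprecated, removed around QEMU 7.1):
#   qemu-system-x86_64 -soundhw sb16

# Current style: declare the host audio backend and the guest device separately.
qemu-system-x86_64 \
  -audiodev none,id=snd0 \
  -device sb16,audiodev=snd0

# -audio (added in QEMU 7.1) is shorthand for the pair above:
qemu-system-x86_64 -audio driver=none,model=sb16
```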