Show HN: Alien – Self-hosting with remote management (written in Rust)
Posted by alongub 18 hours ago
Hi HN, I'm Alon, and I'm building Alien, an open-source platform for deploying your software into your customer's environment and keeping it fully managed.
In my previous startup, I heard the same question from every single enterprise customer over and over again: "My data is sensitive. Can I deploy your product to my own cloud account?"
Self-hosting is becoming increasingly popular because it lets users keep their data private, local, and inside their own environment. Unfortunately, self-hosting breaks down once someone starts paying for your software, especially when that someone is an enterprise customer.
Customers usually don't actually know how to operate your software. They might change something small — Postgres version, environment variables, IAM, firewall rules — and things start failing. From their perspective, the product is broken. And even if the root cause is on their side, it doesn't matter... the customer is always right, and you're still the one expected to fix it.
But you can't. You don't have access to their environment. You don't have real visibility. You can't run anything yourself. So you're stuck debugging a system you don't control, through screenshots and copy-pasted logs on a Zoom call. You end up responsible for something you don't control.
I think there's a better model for paid self-hosting: the software runs in the customer's environment, but the developer can actually operate it. It's a win-win: the customer's data stays private and local, and the developer still has control over deployments, updates, and debugging.
Alien provides infrastructure to deploy and operate software inside your users' environments, while retaining centralized control over updates, monitoring, and lifecycle management. It currently supports AWS, GCP, and Azure targets.
GitHub: https://github.com/alienplatform/alien
Getting started: https://alien.dev/docs/quickstart
How it works: https://alien.dev/docs/how-alien-works
Excited to share Alien with everyone here – let me know what you think!
Comments
Comment by nickmonad 15 hours ago
This is very real.
I work with a deployment that operates in this fashion. Unfortunately, though, we can't maintain _any_ connection back to our servers. Pull or push, it doesn't matter.
The goal right now is to build out tooling to export logs and telemetry data from an environment, such that a customer could trigger that export on our request, or (ideally) as part of the support ticketing process. Then our engineers can analyze async. This can be a ton of data though, so we're trying to figure out what to compress and how. We also have the challenge of figuring out how to scrub logs of any potentially sensitive information. Even IDs, file names, etc that only matter to customers.
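The scrubbing step described above can be sketched as a pass over structured records before export. This is a minimal illustration in Rust (the thread's language), not anyone's real pipeline; the key names and the `[REDACTED]` placeholder are invented:

```rust
use std::collections::HashMap;

// Keys considered customer-identifying; illustrative only.
const SENSITIVE_KEYS: &[&str] = &["customer_id", "file_name", "user_email"];

/// Replace sensitive values in a structured log record before export.
fn scrub(record: &mut HashMap<String, String>) {
    for key in SENSITIVE_KEYS {
        if let Some(value) = record.get_mut(*key) {
            *value = "[REDACTED]".to_string();
        }
    }
}
```

A real exporter would more likely allowlist known-safe fields than blocklist sensitive ones, since with a blocklist any field nobody thought of defaults to leaking.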
Comment by alongub 14 hours ago
We're working on something for this! Stay tuned.
Comment by lelanthran 1 hour ago
The people who know where to click, which dialog will pop up, and when to click Next are never going to agree to replace their non-automatable Windows servers with fully automatable Linux servers.
I mean, we're talking about a demographic that can't use ssh, has never been on a platform with a system package manager, and has little to no ability to version system changes.
They do all that manually.
Comment by jcgrillo 15 hours ago
This is fundamentally a data modeling problem. Currently, computer telemetry data are just little bags of UTF-8 bytes, or at best something like list<map<bytes, bytes>>. IMO this needs to change from the ground up. Logging libraries should emit structured data conforming to a user-supplied schema -- not some open-ended schema that tries to be everything to everyone. Then it's easy to solve both problems: each field is a typed column which can be compressed optimally, and marking a field as "safe" is something encoded in its type. So upon export, only the safe fields make it off the box, or out of the VPC, or whatever -- note you can have a richer ACL structure than just "safe yes/no".
I applaud the industry for trying so hard for so long to make everything backwards compatible with the unstructured bytes base case, but I'm not sure that's ever really been the right north star.
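One way to read "safe is encoded in its type" is a wrapper type that the export path cannot see through unless the field was declared exportable. A hedged sketch (all names invented, and the two-variant enum is the crudest possible ACL):

```rust
/// Whether a telemetry field may leave the box / VPC is part of its type.
#[derive(Debug, Clone)]
enum Field<T> {
    Safe(T),    // allowed to be exported
    Private(T), // never leaves the environment
}

impl<T: Clone> Field<T> {
    /// The only way to read a value on the export path:
    /// private fields simply yield nothing.
    fn export(&self) -> Option<T> {
        match self {
            Field::Safe(v) => Some(v.clone()),
            Field::Private(_) => None,
        }
    }
}

/// An event schema is then just a struct of typed, policy-tagged columns.
#[allow(dead_code)]
#[derive(Debug)]
struct RequestEvent {
    status: Field<u16>,
    latency_ms: Field<u32>,
    file_name: Field<String>, // customer-identifying, so Private
}
```

A richer version would replace the two variants with a policy or ACL parameter, as the comment suggests, rather than a yes/no flag.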
Comment by quesera 13 hours ago
Stream-of-bytes is a classically difficult model to escape. Many have tried.
Comment by quesera 11 hours ago
And to do it right (i.e. low risk of having it blow up with negative effects on the larger business goals), you need someone fairly experienced or maybe even specialized in that area. If you have that person, they are on the team because of their other skills, which you need more urgently.
SaaS, COTS, and open source monitoring tools have to cater to the existing customers. The sales pitch is "easy to integrate". So even they are not incentivized to build something new.
It boils down to the fact that stream-of-bytes is extremely well-understood, and almost always good enough. Infinitely flexible, low-ceremony, no patents, and comes preinstalled on everything (emitters and consumers). It's like HTTP in that way.
And the evolution is similar too. It'll always be stream-of-bytes, but you can emit in JSON or protobuf etc, if it's worth the cognitive overhead to do so. All the hyperscalers do this, even when the original emitter (web servers, etc) is just blindly spewing atrocious CLF/quirky-SSV text.
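That layering (plain lines on the wire, structure in the payload) is easy to demonstrate. The JSON here is hand-rolled to keep the sketch dependency-free; a real emitter would use serde_json and escape the message properly:

```rust
/// Emit one log record as a single JSON line: any stream-of-bytes
/// consumer can still treat it as opaque text, while a structured
/// consumer can parse the fields back out.
fn json_log_line(level: &str, msg: &str, status: u16) -> String {
    // NOTE: no string escaping; illustration only.
    format!(r#"{{"level":"{}","msg":"{}","status":{}}}"#, level, msg, status)
}
```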
Comment by jcgrillo 9 hours ago
This is the crux of it. That's great until you encounter a need for a schema, and then it's "schema-on-read" or some similar abomination. And the need might not manifest until you're pushing like 1TB/day or more of telemetry data with hundreds or thousands of engineers working on some >1MLoC monstrosity. Hard to dig out of that hole.
The situation is tragically optimal--we've achieved some kind of multiobjective local maximum on a rock in the sewer at the bottom of a picturesque alpine valley and declared victory. We should do better.
Or maybe I'm overly optimistic.
Comment by quesera 6 hours ago
But it's a very comfortable rock. Pointy in all the right places.
Comment by gsgreen 16 hours ago
Same VPS, same config, but under sustained load you'll see latency creep or throughput drift depending on the host / routing / neighbors.
Short tests almost never show it — it only shows up after a few minutes.
Comment by alongub 16 hours ago
The metrics/logs part is also core to Alien... telemetry flows back to the vendor's control plane so you actually have visibility into what's running.
Comment by alongub 13 hours ago
If something fails mid-update, it resumes from exactly where it stopped. You can also point a deployment to a previous release and it walks back. This catches and recovers from issues that something like Terraform would just leave in a broken state.
For on-prem, we're working on Kubernetes as a deployment target (e.g. bare-metal OpenShift).
Comment by rendaw 3 hours ago
There are specific things where that's not possible, and there are bugs, so it doesn't seem to work the way you said, unless you mean that you just support a limited subset of resources that are known to be robust to reverts? But that's a fairly different claim.
Comment by alongub 3 hours ago
Alien tracks state at the individual API call level. A single resource creation might involve 5-10 API calls (create IAM role -> attach policy -> create function -> configure triggers -> set up DNS...). If it fails at step 7, it resumes from step 7. Terraform would retry the entire resource.
The other difference is that Alien runs continuously, not as a one-shot apply. It's a long-running control plane that watches the environment, detects drift, and reconciles. Terraform assumes you run it, it converges, and then nothing changes until you run it again.
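The resume-at-step-7 behavior described above can be sketched as a step list plus a persisted high-water mark. This is an illustration of the idea only, not Alien's actual code; all names are invented:

```rust
/// Run deployment steps starting from the first incomplete one.
/// `last_completed` is the count of steps already done (loaded from
/// persisted state); on failure we return the failing step's index,
/// so a retry resumes exactly there instead of redoing the resource.
fn run_steps(
    steps: &[&str],
    last_completed: usize,
    exec: &mut dyn FnMut(&str) -> Result<(), String>,
) -> Result<usize, (usize, String)> {
    for (i, &step) in steps.iter().enumerate().skip(last_completed) {
        exec(step).map_err(|e| (i, e))?;
    }
    Ok(steps.len())
}
```

The continuous-reconcile part would wrap a loop around this: observe the environment, diff against the desired release, and feed any missing or drifted steps back through.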
Comment by huksley 13 hours ago
At DollarDeploy we're developing a platform to deploy apps to VMs with managed services provided, kind of like Vercel for your own servers. Would be interesting to try Alien for enterprise customers.
Comment by alongub 13 hours ago
https://github.com/alienplatform/alien/blob/main/crates/alie... :)
Comment by nhatcher 14 hours ago
A different take: https://www.cloudron.io/
Comment by pixelbyindex 13 hours ago
It is intended to be simple:
- with the power of a Mac mini, you can host (almost) anything
- pay for the mini; it is your machine to do with as you please (we will host it for you)
- if you decide you no longer need hosting, we will mail you back the machine that rightfully belongs to you
if anyone is interested in becoming a partner, shoot me a message, felipe@ind3x.games
Comment by cassianoleal 1 hour ago
The service provider has direct access to my infrastructure. It's one supply chain attack, one vulnerability, one missed code review away from data exfiltration or remote takeover.
Comment by munksbeer 14 hours ago
"Written in Rust" seems to be a very popular thing to add.
My assumption is that people know it will get the thread more visibility?
Comment by antonvs 15 hours ago
Realistically, the game ends up being - see what you can get away with until someone notices. Given that, you might want to rename the product to something more boring than “Alien”.
Comment by alongub 15 hours ago
More and more enterprise CISOs are starting to understand this.
The model here is closer to what companies like Databricks already do inside highly regulated environments. It's not new... it's just becoming more structured and accessible to smaller vendors.
Comment by tanki 12 hours ago
Super cool product, I’ve gotta try it