Migrating from DigitalOcean to Hetzner
Posted by yusufusta 3 hours ago
Comments
Comment by mariopt 1 hour ago
You saved a lot of money but you'll spend a lot of time in maintenance and future headaches.
Comment by grey-area 58 minutes ago
Sometimes it's completely acceptable that a server will run for 10 years with say 1 week or 1 month of downtime spread over those 10 years, yes. That's the sort of uptime you can see with single servers that are rarely changed and over-provisioned as many on Hetzner are. Some examples:
Small businesses where the website is not core to operations and is more of a shop-front or brochure for their business.
Hobby websites, too, don't really matter if they go down for short periods of time occasionally.
Many forums and blogs just aren't very important either, and downtime is no big deal.
There are a lot of these websites, and they are at the lower end of the market for obvious reasons, but they are probably the majority of websites, in fact: the long tail of low-traffic websites.
Not everything has to be high availability and if you do want that, these providers usually provide load balancers etc too. I think people forget here sometimes that there is a huge range in hosting from squarespace to cheap shared hosting to more expensive self-hosted and provisioned clouds like AWS.
Comment by rzz3 37 minutes ago
Comment by grey-area 26 minutes ago
But I do agree the poster should think about this. I don't think it's 'off' or misleading, they just haven't encountered a hardware error before. If they had one on this single box with 30 databases and 34 Nginx sites it would probably be a bad time, and yes they should think about that a bit more perhaps.
They describe a db slave for cutover for example but could also have one for backups, plus rolling backups offsite somewhere (perhaps they do and it just didn't make it into this article). That would reduce risk a lot. Then of course they could put all the servers on several boxes behind a load-balancer.
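The "rolling backups offsite" part can be a very small script. A rough sketch (the paths, retention count, and dump command are hypothetical; the mysqldump line is stubbed out with a placeholder so the script runs anywhere):

```shell
#!/bin/sh
# Sketch of rolling database backups with pruning. In real use, the placeholder
# dump line would be mysqldump, and the rsync line at the bottom would be enabled.
set -u

BACKUP_DIR="${BACKUP_DIR:-./db-backups}"
KEEP=7   # keep the newest 7 dumps
mkdir -p "$BACKUP_DIR"

stamp=$(date +%Y%m%d-%H%M%S)
# Real use: mysqldump --all-databases | gzip > "$BACKUP_DIR/all-$stamp.sql.gz"
echo "-- placeholder dump --" > "$BACKUP_DIR/all-$stamp.sql.gz"

# Prune anything beyond the newest $KEEP dumps.
ls -1t "$BACKUP_DIR"/all-*.sql.gz | tail -n +$((KEEP + 1)) | xargs rm -f

# Real use would then sync the directory offsite, e.g.:
#   rsync -a --delete "$BACKUP_DIR"/ backup-host:/srv/offsite/db-backups/
echo "kept $(ls -1 "$BACKUP_DIR" | wc -l) dump(s) in $BACKUP_DIR"
```

Run from cron daily, this gives you the "rolling backups offsite" with almost no moving parts.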
But perhaps if the services aren't really critical it's not worth spending money on that, depends partly what these services/apps are.
Comment by BorisMelnik 25 minutes ago
Comment by grey-area 18 minutes ago
Comment by chairmansteve 25 minutes ago
Comment by j45 27 minutes ago
Also, in general, you can architect your application to be more friendly to migration. It used to be a normal thing to think about and plan for.
VMware has a conversion tool that converts bare metal into images.
One could image, then do regular snapshots, maybe centralize a database being accessed.
Sometimes it's possible to create a migration script that you run over and over to the new environment for each additional step.
Others can put a backup server in between to not put a load on the drive.
Digital Ocean makes it impossible to download your disk image backups, which is a grave sin they can never be forgiven for. They used to offer some of that capability.
Still, a few commands can back up the running server to an image, and stream it remotely to another server, which in turn can be updated to become bootable.
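The "few commands" might look something like the following. This is a dry-run sketch: the device name and target host are hypothetical, and since the real pipeline needs root plus a reachable remote host, the commands are printed rather than executed.

```shell
#!/bin/sh
# Dry-run sketch of "back up the running server to an image and stream it
# remotely". These are echoed, not run: they need root and a real target host.
DISK=/dev/sda
TARGET=root@new-server.example.com

# Stream a compressed raw image of the disk over SSH:
echo "dd if=$DISK bs=64K status=progress | gzip -c | ssh $TARGET 'cat > /backup/old-server.img.gz'"

# On the target, write the image back out (or loop-mount it) when needed:
echo "gunzip -c /backup/old-server.img.gz | dd of=/dev/sdb bs=64K"
```

Caveat: imaging a mounted, live filesystem this way risks an inconsistent image; taking an LVM or filesystem snapshot first and imaging that is the safer variant.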
This is the tip of the iceberg in the number of tasks that can be done.
Someone with experience can even instruct LLMs to do it and build it, and someone skilled with LLMs could probably work to uncover the steps and strategies for their particular use case.
Comment by j45 34 minutes ago
This is a general response to it.
I have run hosting on bare metal for millions of users a day. Tens of thousands of concurrent connections. It can scale way up by doing the same thing you do in a cloud: provision more resources.
For "downtime" you do the same thing with metal, as you do with digital ocean, just get a second server and have them failover.
You can run hypervisors to split and manage a metal server just like Digital Ocean does, except you're not vulnerable to the shared-memory and CPU exploits of shared hosting like Digital Ocean. When Intel CPU flaws or kernel exploits come out, as they have, one VM user can read the memory and data of all the other processes belonging to other users.
Both Digital Ocean and other IaaS/PaaS providers are still running similar Linux technologies to do the failover. There are tools that even handle it automatically, like Proxmox. This level of production-grade failover and simplicity was point-and-click 10 years ago; it's just that few people have kept up with it.
The cloud is convenient. Convenience can make anyone comfortable. Comfort always costs way more.
It's relatively trivial to put the same web app on a metal server, with a hypervisor/IaaS/PaaS, behind the same Cloudflare to access "scale".
Digital Ocean and Cloud providers run on metal servers just like Hetzner.
The software to manage it all is becoming more and more trivial.
Comment by grey-area 19 minutes ago
Comment by jijijijij 21 minutes ago
Comment by Aurornis 8 minutes ago
That's a strawman version of what happens.
There have been times when I've tried to visit a webshop to buy something but the site was broken or down, so I gave up and went to Amazon and bought an alternative.
I've also experienced multiple business situations where one of our services went down at an inconvenient time, a VP or CEO got upset, and they mandated that we migrate away from that service even if alternatives cost more.
If you think of your customers or visitors as perfectly loyal with infinite patience then downtime is not a problem.
> Unless you are Amazon and every minute costs you bazillions, you are likely gonna get the better deal not worrying about availability and scalability. That 250€/m root server is a behemoth. Complete overkill for most anything.
You don't need every minute of downtime to cost "bazillions" to justify a little redundancy. If you're spending 250 euros/month on a server, spending a little more to get a load balancer and a pair of servers isn't going to change your spend materially. Having two medium size servers behind a load balancer isn't usually much more expensive than having one oversized server handling it all.
There are additional benefits to having the load balancer set up for future migrations, or to scale up if you get an unexpected traffic spike. If you get a big traffic spike on a single server and it goes over capacity you're stuck. If you have a load balancer and a pair of servers you can easily start a 3rd or 4th to take the extra traffic.
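The "pair of servers behind a load balancer" setup described above is a small amount of config. A minimal nginx sketch (addresses and ports are hypothetical):

```nginx
# Two medium servers behind one balancer; spare capacity is a one-line change.
upstream app_backends {
    server 10.0.0.11:8080;        # medium server #1
    server 10.0.0.12:8080;        # medium server #2
    # Under a traffic spike, add capacity by uncommenting more backends:
    # server 10.0.0.13:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backends;
        proxy_set_header Host $host;
    }
}
```

The same upstream block is also what makes future migrations easier: you drain one backend, move it, and add it back.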
Comment by Aurornis 46 minutes ago
The confusing part about this article is the emphasis on a zero-downtime migration to a service that isn't really ideal for uptime. It wouldn't be that expensive to add a little bit of architecture on the Hetzner side to help with this. I guess if you're doing a migration and you're paid a salary or your time is free-ish, doing the migration in a zero-downtime way is smart. It's a little funny to see the emphasis on zero downtime juxtaposed with the architecture they chose, where uptime depends on nothing ever failing.
Comment by j45 21 minutes ago
Clever architecture will always beat cleverly trying to pick only one cloud.
Being cloud agnostic is best.
This means setting up a private cloud.
Hosted servers and managed servers are perfectly capable of near-zero downtime. This is because it's the same equipment (or often more consumer-grade) that the "cloud" itself runs on, while planning for even more failure.
Digital Ocean definitely does not guarantee zero downtime. That's a lot of 9's.
It's simple to run well established tools like Proxmox on bare metal that will do everything Digital Ocean promises, and it's not susceptible to attacks, or exploits where the shared memory and CPU usage will leak what customers believe is their private VPS.
"Nothing ever failing," in the case of a tool like Proxmox, looks like this: install it on two servers, connect them as nodes so one VPS exists on both, click high availability, and it's generally up and running. Put Cloudflare in front of it, per today's best practices.
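For the record, the two-node flow above reduces to a handful of commands. A dry-run sketch (the cluster name, node hostname, and VM id are hypothetical, and the commands are echoed rather than run, since they need real Proxmox hosts; note that a real two-node cluster also wants a QDevice or third node to keep quorum when one node dies):

```shell
#!/bin/sh
# Dry-run sketch of the two-node Proxmox HA flow; commands are printed, not run.
CLUSTER=demo-cluster
VMID=100

echo "pvecm create $CLUSTER              # on node 1: create the cluster"
echo "pvecm add node1.example.com        # on node 2: join node 1"
echo "ha-manager add vm:$VMID --state started   # mark the VM as highly available"
```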
If you're curious about this, there are some pretty eye-opening short videos on Proxmox available on YouTube that are hard to unsee.
Comment by chillfox 39 minutes ago
Also, don't underestimate the reliability of simplicity.
I was a Linux sysadmin for many years, and I have never seen as much downtime from simpler systems as I routinely see from the more complicated setups. Somewhere between theory and reality, simpler systems just come out ahead most of the time.
Comment by wiether 43 minutes ago
Usually those articles describe two situations:
- they were "on the cloud" for the wrong reasons and migrating to something more physical is the right approach
- they were "on the cloud" for the right reasons and migrating to something more physical is going to be a disaster
Here they appear to be in the first situation.
If their setup was running fine on DO and they put the right DR policies in place at Hetzner, they should be fine.
Comment by daneel_w 55 minutes ago
Comment by VorpalWay 29 minutes ago
As a bonus, Hetzner is European.
Comment by jdboyd 10 minutes ago
If someone starts thinking about redundancy and load balancers, then DO's solution is to rent a second similar-sized droplet and add their load-balancing service. If you do those things with Hetzner instead, you would still be spending less than you did with Digital Ocean.
Personally, what is keeping me on DO is that no single droplet I have is large enough to justify moving on its own, and I'm not prepared to deal with moving everything.
Comment by pinkgolem 5 minutes ago
If your scaling need is not that high, you can get very far with a single server
Comment by chalmovsky 44 minutes ago
Comment by wg0 15 minutes ago
Not a bad tradeoff for 99.8% of shops out there.
Comment by ozim 25 minutes ago
I know people like FAANG LARPing. Not everyone has budget or need to run four nines with 24/7 and FAANG level traffic.
Comment by neya 16 minutes ago
Comment by BorisMelnik 26 minutes ago
Comment by timwis 1 hour ago
Comment by Gud 20 minutes ago
Comment by PunchyHamster 45 minutes ago
If you can tolerate a few hours of downtime and some data rollback/loss, a single server + robust backups can be a viable strategy
Comment by jgalt212 18 minutes ago
Comment by NicoJuicy 33 minutes ago
Deploying a new Docker instance, or just restoring the app from a snapshot and restoring the latest DB backup, is enough in most cases.
Comment by antirez 2 hours ago
Comment by rustyhancock 1 hour ago
How deep does this go?
Comment by sph 1 hour ago
I know your comment is tongue-in-cheek and the poster here is kinda known, but this kind of astroturfing is a new low and it's everywhere on forums such as these.
Comment by Aurornis 44 minutes ago
It's too bad Reddit allows accounts to hide their comment history now. That was an easy way to identify bot accounts before the change.
Comment by rdevilla 1 hour ago
Comment by sph 32 minutes ago
I'm not. I stick around for the popcorn, and I'm not gonna miss the schadenfreude in a few years.
Comment by MikeNotThePope 52 minutes ago
Comment by dwedge 20 minutes ago
Comment by Bridged7756 41 minutes ago
Comment by refulgentis 1 hour ago
Just noting for fellow just-waking-up people
Comment by jnwatson 1 hour ago
Comment by tmpz22 1 hour ago
Comment by Bridged7756 38 minutes ago
Comment by rpcope1 44 minutes ago
Comment by antirez 1 hour ago
So it's a Claude ad inside a Hetzner ad inside a decent grammar ad.
Comment by airstrike 47 minutes ago
Comment by brianwawok 54 minutes ago
Comment by mirekrusin 1 hour ago
Comment by FEELmyAGI 1 hour ago
Btw this type of grammar error can be found by proofreading your posts with ChatGPT powered OpenClaw assistant.
Comment by senordevnyc 1 hour ago
Comment by qudat 31 minutes ago
What's exciting is how impactful simple CLI tools can be to dev workflows.
Comment by tannhaeuser 1 hour ago
Comment by sph 1 hour ago
Comment by pedrosorio 34 minutes ago
https://en.wikipedia.org/wiki/Salvatore_Sanfilippo
This whole thread is hilarious.
Comment by nutjob2 36 minutes ago
Comment by cyanydeez 57 minutes ago
Comment by m00dy 1 hour ago
Comment by adamcharnock 1 hour ago
For backups we use both Velero and application-level backup for critical workloads (i.e. Postgres WAL backups for PITR). We also ensure all state is on at least two nodes for HA.
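The WAL/PITR piece mentioned above boils down to a couple of settings. A minimal sketch (the archive path is hypothetical):

```ini
# postgresql.conf -- minimal WAL archiving for PITR
wal_level = replica
archive_mode = on
# Postgres expands %p (path of the finished WAL segment) and %f (its file name).
archive_command = 'test ! -f /mnt/wal-archive/%f && cp %p /mnt/wal-archive/%f'
```

A restore then combines a base backup (e.g. from pg_basebackup) with a `restore_command` that replays archived WAL up to a target time.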
We also find bare metal to be a lot more performant in general. Compared to AWS we typically see service response times halve. It is not that virtualisation inherently has that much overhead; rather, it is everything else. E.g., bare metal offers:
- Reduced disk latency (NVMe vs network block storage)
- Reduced network latency (we run dedicated fibre, so inter-az is about 1/10th the latency)
- Less cache contention, etc [1]
Anyway, if you want to chat about this sometime just ping me an email: adam@ company domain.
[1] I wrote more on this 6 months ago: https://news.ycombinator.com/item?id=45615867
Comment by brianwawok 52 minutes ago
My entire stack is... k8s, hosted Postgres, S3-type storage. I can always host my own Postgres. So really it's down to k8s and S3. I think Hetzner has some kind of S3 storage but I haven't looked into it, and I assume moving in 100 TB is a process…
Comment by traceroute66 41 minutes ago
Your post was reasonable until the spam tagline.
Not cool.
Comment by _el1s7 5 minutes ago
> 30 MySQL databases (248 GB of data)
> 34 Nginx virtual hosts across multiple domains
> GitLab EE (42 GB backup)
> Neo4J Graph DB (30 GB graph database)
> Supervisor managing dozens of background workers
> Gearman job queue
> Several live mobile apps serving hundreds of thousands of users
He's doing all of that on a single server?! I'm not against vertical scaling and stuff, but 30 db instances in one server is just crazy.
Comment by leetrout 2 minutes ago
They didn't say that and the article didn't allude to that. 1 instance with 30 databases.
Comment by largbae 2 hours ago
I see the DigitalOcean vs Hetzner comparison as a tradeoff that we make in different domains all day long, similar to opening your DoorDash or UberEats instead of making your own dinner (and the cost ratio is similar too).
I work in all 3 major clouds, on-prem, the works. I still head to the DigitalOcean console for bits and pieces type work or proof of concept testing. Sometimes you just want to click a button and the server or bucket or whatever is ready and here's the access info and it has sane defaults and if I need backups or whatnot it's just a checkbox. Your time is worth money too.
Comment by dividuum 2 hours ago
Comment by nine_k 1 hour ago
One is about all the steps of zero downtime migration. It's widely applicable.
The other is the decision to replace a cloud instance with bare metal. It saves a lot in costs, but the loss of fast failover and data backups must also be priced in.
If I were doing this, I would run a hot spare for an extra $200 and switch the primary every few days, to guarantee that both copies work well and that the switchover is easy. It would be a relatively low price for a massive reduction in the risk of a catastrophic failure.
Comment by dangero 31 minutes ago
Comment by faangguyindia 1 hour ago
I hardly ever visit their website; I do everything from the terminal.
Comment by andai 1 hour ago
Comment by rmunn 1 hour ago
Comment by locknitpicker 2 hours ago
You're describing Hetzner Cloud, which has been like this for many years. At least 6.
Hetzner also offers Hetzner Cloud API, which allows us to not have to click any button and just have everything in IaC.
Comment by petesergeant 1 hour ago
Comment by Doohickey-d 2 hours ago
Because with a single-server setup like this, I'd imagine that hardware (e.g. SSD) failure brings down your app, and in the case of SSD failure, you then have hours or days downtime while you set everything up again.
Comment by kro 2 hours ago
Once the first SSD fails after some years and your monitoring catches that, you can either migrate to a new box, find another intermediate solution/replica, or let them hotswap it while the other drive takes over.
Of course, going to physical servers loses the redundancy of the cloud, but that's something you need to price in when looking at the savings and deciding your risk model.
And yes, running this without at least daily snapshotting/backup to remote storage is insane; that applies to the cloud as well, albeit easier to set up there.
Comment by linsomniac 1 hour ago
For quite a while we ran single power supplies because they were pretty high quality, but then Supermicro went through a ~6-month period where basically every power supply in machines we got during that time failed within a year, and replacements were hard to come by (demand was high because of all the failures), so we switched to redundant supplies. This was all a cost-savings trade-off. When running single power supplies, we had in-rack Automatic Transfer Switches, so that the single power supplies could survive A- or B-side power failure.
But, and this is important, we were monitoring the systems for drive failures and replacing them within 24 hours. Ditto for power supplies. If you don't monitor your hardware for failure, redundancy doesn't mean anything.
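The monitoring half of that point can be as small as a cron job. A minimal sketch for software RAID (the alert action is a placeholder; a real setup would mail or page someone):

```shell
#!/bin/sh
# Check /proc/mdstat for a degraded software-RAID array. Intended to run
# from cron or a systemd timer; the echo lines stand in for real alerting.
if [ -r /proc/mdstat ]; then
    # A failed member shows up as an underscore in the [UU]-style status field.
    if grep -q '\[.*_.*\]' /proc/mdstat; then
        STATUS=degraded
        echo "ALERT: RAID array degraded"
    else
        STATUS=ok
        echo "OK: all RAID members up"
    fi
else
    STATUS=unknown
    echo "no /proc/mdstat on this machine"
fi
```

The same pattern extends to power supplies and disks generally (IPMI sensors, SMART attributes); the point is that something has to be looking.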
Comment by traceroute66 2 hours ago
Yeah. This blog post reads like it was written by someone who didn't think things through and just focused on hyper-aggressive cost-cutting.
I bet their DigitalOcean VM did live migrations and supported snapshots.
You can get that at Hetzner but only in their cloud product.
You absolutely will not get that with Hetzner bare-metal. If your HD or other component dies, it dies. Hetzner will replace the HD, but it's up to you to restore from scratch. Hetzner are very clear about this in multiple places.
Comment by treesknees 2 hours ago
Comment by traceroute66 2 hours ago
They could, but they didn't, and instead they wrote that blog post which, even being generous, is still kinda hard to avoid describing as misleading.
I would not have written the post I did if they had presented a multi-node bare-metal cluster or whatever more realistic config.
Comment by locknitpicker 1 hour ago
What do you feel was misleading?
Comment by wiether 38 minutes ago
They don't.
And reading the article, they don't seem to understand that.
Comment by traceroute66 38 minutes ago
Erm, I already spelt it out in my original post?
I'm not going to re-write it, the TL;DR is they are making an Apples and Oranges comparison.
Yes they "saved money" but in no way, shape or form are the two comparable.
The polite way to put it is... they saved as much money as they did because they made very heavy-handed "architectural decisions". "Decisions" that they appear to be unaware of having made.
Comment by Someone1234 1 hour ago
I agree with the other poster: this is fine for a toy site or two, but low-quality manual DR isn't good for production.
Comment by daneel_w 51 minutes ago
Comment by faangguyindia 1 hour ago
Comment by andai 1 hour ago
Curious what the delta to pain-in-ass would be if I want to deal with storing data. (And not just backups / migrations, but also GDPR, age verification etc.)
Comment by faangguyindia 1 hour ago
I already design with Auto Scaling Groups in mind; we run on spot instances, which tend to be much cheaper. Spot instances can be reclaimed anytime, so you need to keep this in mind.
I also have data blobs which are memory-mapped files; these are swapped with no downtime by pulling a manifest from a GCS bucket each hour and swapping out the mmapped data.
I use replicas, with automatic voting-based failover.
I've used Mongo with replication and automatic failover for a decade in production with no downtime and no data loss.
Recently, got into Postgres; so far so good. Before that I always used RDS or another managed solution like Datastore, but they cost so much compared to running your own stuff.
Health checks start a new server in no time. Even if my Hetzner server goes out, or the whole of Hetzner goes out, my system will launch Digital Ocean nodes which will start soaking up all requests.
Comment by hnthrow0287345 2 hours ago
Comment by acdha 1 hour ago
Comment by wat10000 1 hour ago
Comment by faangguyindia 1 hour ago
Recently, I did it in PostgreSQL using pg_auto_failover. I have 1 monitor node, 1 primary, and 1 replica.
Surprisingly, once you get the hang of PostgreSQL configuration and its gotchas, it’s also very easy to replicate.
I’m guessing MySQL is even easier than PostgreSQL for this.
I also achieved zero downtime migration.
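The monitor/primary/replica layout described above is set up with a few pg_autoctl invocations. A dry-run sketch (hostnames and paths are hypothetical, and the commands are echoed rather than run, since they need real hosts with pg_auto_failover installed):

```shell
#!/bin/sh
# Dry-run sketch of a pg_auto_failover topology: 1 monitor, 1 primary, 1 replica.
PGDATA=/var/lib/postgres/data
MONITOR_URI=postgres://autoctl_node@monitor.example.com:5432/pg_auto_failover

echo "pg_autoctl create monitor  --pgdata $PGDATA    # on the monitor node"
echo "pg_autoctl create postgres --pgdata $PGDATA --monitor $MONITOR_URI    # on the primary, then on the replica"
echo "pg_autoctl show state      # confirm which node is primary"
```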
Comment by acdha 1 hour ago
Comment by kijin 2 hours ago
Not every app needs 24/7 availability. The vast majority of websites out there will not suffer any serious consequences from a few hours of downtime (scheduled or otherwise) every now and then. If the cost savings outweigh the risk, it can be a perfectly reasonable business decision.
A more interesting question would be what kind of backup and recovery strategy they have, and which aspects of it (if any) they had to change when they moved to Hetzner.
Comment by onetimeusename 1 hour ago
Comment by xtracto 1 hour ago
Comment by therealmarv 1 hour ago
I wish the industry would adopt more zero-knowledge methods in this regard. They exist and are mathematically proven, but it seems there is no real adoption.
- OpenAI wants my passport when topping up 100 USD
- Bolt wanted recently my passport number to use their service
- Anthropic seems to want passports for new users too
- Soon age restriction in OS or on websites
I wish there were a law (in Europe and/or the US) to minimize or forbid this kind of identity verification.
I want to support companies in preventing misuse of their platforms; at the same time, my full passport photo is not their concern, especially in B2B business, in my opinion.
Comment by pmdr 1 hour ago
Comment by OneMorePerson 55 minutes ago
The only possible non legally driven reason I can think of would be if they think the tradeoff of extra friction (and lost customers) is more than offset by fraud protection efforts. This seems unlikely cause I don't see how that math could have changed in the last few years.
Comment by cyanydeez 36 minutes ago
It's bad enough living in America without the rest of the world adopting the grift economy.
Comment by uxcolumbo 1 hour ago
Absolutely no to this; reason enough to go with AWS or alternatives. And why are people willingly giving it to a hosting provider?
Unnecessarily exposing yourself to identity theft if they get compromised.
Comment by acdha 1 hour ago
If Hetzner allows you to host something and you use it for illegal acts, they aren’t going to jail to shield you for €10/month.
Comment by zaptheimpaler 1 hour ago
Comment by ciex 28 minutes ago
Comment by Strom 11 minutes ago
As I understand it, they ask only for accounts that check several boxes for common cases of abuse. So basically, personal accounts (as opposed to business accounts) from poor countries (poor by per-capita income, so e.g. India qualifies).
Comment by goobatrooba 1 hour ago
Not sure what differs in our cases, I'm based in EU.
Comment by faangguyindia 1 hour ago
Comment by pennomi 2 hours ago
Comment by acdha 1 hour ago
Comment by alternatex 40 minutes ago
AWS and Azure are charging an arm and a leg, but the offered quality is mostly perceived. Most of the bits and bobs they charge for are not providing much value for the vast majority of businesses. I won't even go over the complete lack of ergonomics in their portals.
Comment by DaedalusII 24 minutes ago
And Mercedes is just like AWS in dumb charges: new tires, EUR1000+ for a set. Replace car keys? EUR1000+.
Comment by rpcope1 6 minutes ago
I see you've never actually owned or worked on a German car, especially in relation to even modest Japanese models. Maybe they were a little nicer inside in the 80s and maybe 90s, but "German car" and frankly "European make" is basically synonymous with "big expensive pile of shit that's an expensive pain in the ass when things start falling apart (which they seem to with increasing rapidity)." It's like the disease that plagued British cars for the longest time got contaminated with the German propensity to build overly complex monstrosities.
Comment by PunchyHamster 40 minutes ago
Comment by subscribed 1 hour ago
Sure, it cost me £6/mo to serve ONE lambda on AWS (and perhaps 500 requests per month). Sure it was awesome and "proper". But crazy expensive.
I host it now (and 5 similar things) for free on Cloudflare.
But if you need what AWS provides, you'll get that. And that means sometimes it's not the most cost-effective place.
Comment by wiether 31 minutes ago
> Sure, it cost me £6/mo to serve ONE lambda on AWS (and perhaps 500 requests per month)
I went on the pricing calculator, and to arrive at $6/mo with only 500 requests, you'd need to run the lambda for 15 minutes with 2 GB of RAM.
On the other hand, we have dozens of production workloads on Lambda handling thousands of requests daily and we spend like $50/mo on Lambda.
I'm really intrigued by what you did to get to those figures!
Comment by steve1977 2 hours ago
Comment by faangguyindia 1 hour ago
Comment by richwater 1 hour ago
Comment by faangguyindia 1 hour ago
Comment by rolymath 54 minutes ago
Cloud used to be marketed for scalability. "Netflix can scale up when people are watching, and scale down at night".
Then the blogosphere and astroturfing got everyone else on board. How can $5 on amazon get you less than what you got from almost any VPS (VDS) provider 10 years ago?
Comment by delfinom 2 hours ago
Recently we had several of our VMs offline because they apparently have these large volume storage pools they were upgrading and suddenly disks died in two large pools. It took them 3 days to resolve.
Hetzner has no integrated option to back up volumes; it's roll-your-own :/ You also can't control volume distribution on their storage nodes for redundancy.
Comment by nixpulvis 2 hours ago
Comment by Silhouette 1 hour ago
Comment by echelon 2 hours ago
It's worse than Oracle and they don't even use lawyery contracts.
The technology itself is the tendrils.
Comment by koolba 30 minutes ago
What was the config on the receiving side to support this? Did you whitelist the old server IP to trust the forwarding headers? Otherwise you’d get the old server IP in your app logs. Not a huge deal for an hour but if something went wrong it can get confusing.
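One way to handle what the parent asks about, if the receiving side is nginx, is the realip module: trust `X-Forwarded-For` only from the old server's address during the cutover. A sketch (the address is hypothetical):

```nginx
# Trust forwarded client IPs only from the old server doing the proxying,
# so app logs show the original client IP rather than the old server's.
set_real_ip_from 203.0.113.10;      # the old droplet's address
real_ip_header   X-Forwarded-For;
real_ip_recursive on;
```

Without the `set_real_ip_from` whitelist, anyone could spoof the header, which is exactly the confusion the parent warns about.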
Comment by DaedalusII 29 minutes ago
You can basically go on Hetzner and spin up a Linux VPS that is exposed to the open internet, with open ports and weak user security, and within a few hours it's been hacked. There is no warning pop-up that says "if you do this your server will be pwned".
I especially wonder what will happen here with all the AI-provisioned VPSes and Postgres DBs.
Comment by xuki 2 hours ago
Comment by pmdr 1 hour ago
Comment by readyforbrunch 50 minutes ago
So a near 44% price reduction for a 50% reduction in only one of the components. Looks like progression to me.
Comment by pellepelster 2 hours ago
The issue, though, is that you lose the managed part of the whole cloud promise. For ephemeral services this is not a big deal, but for persistent stuff like databases, where you would like to have your data safe, it shifts additional effort (and therefore cost) onto your operations team.
For smaller setups (attention: shameless self-promotion incoming) I am currently working on https://pellepelster.github.io/solidblocks/cloud/index.html which lets you deploy managed services to the Hetzner Cloud from a Docker-Compose-like definition, e.g. a PostgreSQL database with automatic backup and disaster recovery.
Comment by apitman 2 hours ago
They do offer VPS in the US and the value is great. I was seriously looking at moving our academic lab over from AWS but server availability was bad enough to scare me off. They didn't have the instances we needed reliably. Really hoping that calms down.
Comment by igtztorrero 1 hour ago
Comment by dessimus 1 hour ago
Comment by phamilton 1 hour ago
Namely, all remote access (including serving HTTP) must be managed by a major player big enough to be part of private disclosure (e.g. Project Glasswing).
That doesn't mean we have to use AWS et al for everything, but some sort of zero trust solution actively maintained by one of them seems like the right path. For example, I've started running on Hetzner with Cloudflare Tunnels.
Anyone else doing something similar?
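For reference, the Hetzner-behind-Cloudflare-Tunnel setup mentioned above is a small cloudflared config. A sketch (the tunnel id, hostname, and port are hypothetical):

```yaml
# /etc/cloudflared/config.yml -- origin has no inbound ports open;
# cloudflared dials out to Cloudflare and traffic comes back over the tunnel.
tunnel: <tunnel-uuid>
credentials-file: /etc/cloudflared/tunnel-creds.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080
  - service: http_status:404   # required catch-all as the last rule
```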
Comment by locknitpicker 1 hour ago
How much latency does this add?
Comment by gbro3n 1 hour ago
Comment by raphinou 1 hour ago
Comment by leros 17 minutes ago
I've spent time eating the costs of things like DigitalOcean or SaaS products because my time is better spent growing my revenue than reducing infrastructure costs. But at some point, costs can grow large enough that it's worthwhile to shift focus to reducing infrastructure spend.
Comment by utopiah 1 hour ago
Comment by neeraga 35 minutes ago
Comment by nixpulvis 2 hours ago
Comment by electroly 2 hours ago
Anyone who thinks DO and Hetzner dedicated servers are fungible products is making a mistake. These aren't the same service at all. There are savings to be had but this isn't a direct "unplug DO, plug in Hetzner" situation.
Comment by joefourier 1 hour ago
Although since they were running a LEMP server stack manually and did their migration by copying all files in /var/www/html via rsync and ad-hoc python scripts, even a DO droplet doesn't have the best guarantee. Their lowest-hanging fruit is probably switching to infrastructure as code, and dividing their stack across multiple cheaper servers instead of having a central point of failure for 34 applications.
Comment by bingo-bongo 2 hours ago
One of the new risks is that if anything critical happens with the hardware, network, switch, etc., then everything is down until someone at Hetzner goes and fixes it.
With a virtual server it'll just get started on a different server straight away. Usually hypervisors also have 2 or more network connections, etc.
And hopefully they also have some backup setup.
It's still a huge amount of savings and I'd probably do the same if I were in their shoes, but there are tradeoffs when going from virtual to dedicated hardware.
Comment by missedthecue 1 hour ago
Comment by spaniard89277 1 hour ago
Comment by traceroute66 2 hours ago
As the other person already said here, this blog post comparison is skewed.
BUT
EU cloud providers are much better value for money than the US providers.
The US providers will happily sit there nickel-and-diming you, often with deliberately obscure price sheets (hello AWS ;).
EU cloud provider pricing is much clearer and generally you get a lot more bang for your buck than you would with a US provider. Often EU providers will give you stuff for free that US providers would charge you for (e.g. various S3 API calls).
Therefore even if this blog post is skewed and incorrect, the overall argument still stands that you should be seriously looking at Hetzner or Upcloud or Exoscale or Scaleway or any of the other EU providers.
In addition there is the major benefit of not being subject to the US CLOUD and PATRIOT Acts. Which, despite what the sales-droids will tell you, still apply to the fake-EU offerings of the US providers.
Comment by talkingtab 1 hour ago
Comment by nickandbro 1 hour ago
My foray into multiplayer games.
Comment by wouldbecouldbe 2 hours ago
Comment by PunchyHamster 43 minutes ago
So they made the same mistake all over again. Debian or Ubuntu would let you just upgrade in place and migrate.
Comment by hnarn 42 minutes ago
Comment by PunchyHamster 3 minutes ago
Comment by rawoke083600 1 hour ago
And I say it every time they come up: their cloud UX is brilliant and simple compared to the big ones out there.
Comment by JSR_FDED 2 hours ago
Asking the obvious question: why not your own server in a colo?
Comment by preinheimer 2 hours ago
The problem with actually owning hardware is that you need a lot of it, and need to be prepared to manage things like upgrading firmware. You need to keep on top of the advisories for your network card, the power unit, the enterprise management card, etc. etc. If something goes wrong someone might need to drive in and plug in a keyboard.
Eventually we admitted to ourselves we didn't want those problems.
Comment by klodolph 2 hours ago
Comment by PunchyHamster 38 minutes ago
Most of the expense is the initial setup and automation, but once you get through that hump and have non-spiky loads it can be massively cheaper.
Comment by subscribed 1 hour ago
Comment by perbu 2 hours ago
Then, say if the motherboard gives up, you have to do quite a bit of work to get it replaced, you might be down for hours or maybe days.
For a single server I don't think it makes sense. For 8 servers, maybe. Depends on the opportunity cost.
Comment by Yeroc 2 hours ago
Comment by acdha 1 hour ago
Using something like AWS can make it easy to assume that servers don't fail often, but that's because the major players handle all of that behind the scenes, heavily tested, and will migrate VMs when pre-fail indicators trigger but before the hardware actually dies.
Comment by alaudet 1 hour ago
Comment by PunchyHamster 40 minutes ago
Comment by traceroute66 2 hours ago
Have you seen what the LLM crowd have done to server prices?
Comment by subscribed 1 hour ago
But it's indeed cheaper with high, sustained workloads.
Comment by vb-8448 2 hours ago
Comment by OliverGuy 1 hour ago
Sounds like from the requirement to live migrate you can't really afford planned downtime, so why are you risking unplanned downtime?
Comment by caymanjim 34 minutes ago
This isn't something others should use as an example.
Comment by Zopieux 30 minutes ago
Comment by mlhpdx 30 minutes ago
Comment by ianberdin 1 hour ago
Comment by shermantanktop 46 minutes ago
When I’ve seen this work well, it’s either built into the product as an established feature, or it’s a devops procedure that has a runbook and is done weekly.
Doing it with low level commands and without a lot of experience is pretty likely to have issues. And that’s what happened here.
Comment by ianberdin 1 hour ago
Comment by testing22321 2 hours ago
Moving away from the US also felt great.
Comment by ararangua 2 hours ago
Comment by sylware 1 hour ago
Full of scanners, script kiddies and maybe worse.
Comment by jonahs197 2 hours ago
Comment by subscribed 1 hour ago
Comment by lloydatkinson 1 hour ago
Happened to me.
I now advise people to avoid clown-led services like Hetzner and stick to more reputable, if not as cheap, options.
Comment by xhkkffbf 2 hours ago
Comment by OneMorePerson 53 minutes ago
Comment by infocollector 2 hours ago
Comment by xhkkffbf 2 hours ago
Comment by daveguy 1 hour ago
DigitalOcean absolutely is not an enterprise solution. Don't trust it with your data.
Oh, and did I mention I had been paying the upcharge for backups the entire time?
Comment by aungpaing 1 hour ago
Comment by OutOfHere 2 hours ago
As such, I doubt the noted price reduction is reproducible. Combine this with Hetzner's sudden deletions of user accounts and services without warning, and it's a bad proposition. Search r/hetzner and r/vps for "hetzner" alongside the words banned, deleted, and terminated; there are many reports. What should stun you even more is that Hetzner could ostensibly be closely spying on user data and workloads, even offline workloads, without which they wouldn't even know whom to ban.
The only thing Hetzner might potentially be good for is adding to an expendable distributed compute pool, one you can afford to lose, but then you might as well use other bottom-of-the-barrel untrustworthy providers for it too, e.g. OVH.
Comment by swiftcoder 1 hour ago
Comment by 0123456789ABCDE 1 hour ago
> $1,432 to $233
a price difference of roughly 5/6 means even a 40% price increase would not materially change the decision to move between providers
Comment by api 1 hour ago
Cloud is ludicrously marked up.
Comment by desireco42 50 minutes ago
Plus, this is not what DHH was doing; he was not saving a few bucks, but unlocking his company's potential to thrive.
Comment by sayYayToLife 2 hours ago
Comment by orsorna 2 hours ago
Comment by nixpulvis 2 hours ago
Not everyone likes wasting money.
Comment by dllrr 2 hours ago
Comment by mrweasel 2 hours ago
Comment by thisislife2 2 hours ago
Comment by esafak 2 hours ago
Comment by layer8 2 hours ago
Comment by faangguyindia 1 hour ago
Comment by rolymath 49 minutes ago
Comment by iammrpayments 2 hours ago
Comment by littlestymaar 2 hours ago
Comment by izacus 2 hours ago
Comment by ozgrakkurt 1 hour ago