$50 PlanetScale Metal Is GA for Postgres
Posted by ksec 21 hours ago
Comments
Comment by orliesaurus 20 hours ago
From what I can tell, the 'Metal' offering runs on nodes with directly attached NVMe rather than network-attached storage. That means there isn't a per-customer IOPS cap – they actually market it as 'unlimited I/O' because you hit CPU before saturating the disk. The new $50 M-class clusters are essentially smaller versions of those nodes with adjustable CPU and RAM in AWS and GCP.
RE: EC2 shapes, it's not a shared EBS volume but a dedicated instance with local storage. BUT you'll still want to monitor capacity since the storage doesn't autoscale.
ALSO this pricing makes high-throughput Postgres accessible for indie projects, which is pretty neat.
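Since storage on these nodes doesn't autoscale, it's easy to keep an eye on capacity yourself by polling pg_database_size. A minimal sketch in Python, assuming psycopg2; the connection string is a placeholder:

```python
# Minimal sketch: poll the database's on-disk size so growth is noticed
# before the local NVMe fills up. The DSN below is a placeholder.
import psycopg2

DSN = "postgresql://user:password@host:5432/mydb"  # hypothetical connection string

def database_size_bytes(dsn: str) -> int:
    """Return the on-disk size of the connected database in bytes."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT pg_database_size(current_database())")
            return cur.fetchone()[0]

if __name__ == "__main__":
    size_gb = database_size_bytes(DSN) / 1e9
    print(f"database size: {size_gb:.2f} GB")
```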
Comment by rcrowley 20 hours ago
Just want to add that you don't necessarily need to invest in fancy disk-usage monitoring: we always display it in the app, and we start emailing database owners at 60% full to make sure no one misses it.
Comment by JoshGlazebrook 20 hours ago
So in the M-10 case, wouldn't this actually be somewhat misleading as I imagine hitting "1/8 vCPU" wouldn't be difficult at all?
Comment by rcrowley 19 hours ago
You can get a lot more out of that CPU allocation with the fast I/O of a local NVMe drive than from the slow I/O of an EBS volume.
Comment by everfrustrated 19 hours ago
You're still sharing NVMe I/O, CPU, memory bandwidth, etc. Not having a VM isn't really the point. (EDIT: and could have been done with non-metal AWS instances with direct-attached NVMe anyway)
Comment by rcrowley 19 hours ago
Comment by bsnnkv 14 hours ago
Comment by fosterfriends 18 hours ago
Comment by dodomodo 20 hours ago
Comment by samlambert 20 hours ago
Comment by samlambert 21 hours ago
Comment by solatic 20 hours ago
Comment by rcrowley 20 hours ago
If your or another customer's workload grows and needs to size up, we launch three whole new database servers of the appropriate size (whether that's more CPU+RAM, more storage, or both), restore the most recent backups there, catch up on replication, and then orchestrate changing the primary.
Downtime when you resize typically amounts to needing to reconnect, i.e. it's negligible.
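The reconnect-on-resize behavior described above is easy to absorb in application code with a retry loop. A minimal sketch, assuming psycopg2; the DSN, attempt count, and backoff values are placeholders, not anything PlanetScale prescribes:

```python
# Sketch of client-side retry so a brief primary switch only costs a reconnect.
import time
import psycopg2
from psycopg2 import OperationalError

DSN = "postgresql://user:password@host:5432/mydb"  # placeholder

def run_with_retry(sql, params=None, attempts=5, backoff=0.5):
    """Execute a statement, reconnecting if the primary moves mid-query."""
    for attempt in range(attempts):
        conn = None
        try:
            conn = psycopg2.connect(DSN)
            with conn, conn.cursor() as cur:
                cur.execute(sql, params)
                return cur.fetchall() if cur.description else None
        except OperationalError:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff * (2 ** attempt))  # back off before reconnecting
        finally:
            if conn is not None:
                conn.close()

rows = run_with_retry("SELECT now()")
```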
Comment by whalesalad 20 hours ago
Comment by samlambert 20 hours ago
Comment by solatic 20 hours ago
Even taking a case in point where durability is irrelevant: people building caches in Postgres (so as to only have one datastore / not need Redis as well). Not a big deal if the cache blows up - just force everyone to log in again. Would love to see the vendor reduce complexity on their end and pass through the savings to the customer.
edit: per your other reply re. using replication to handle resizing, maybe be upfront with customers that single-node discounts come with additional latency / downtime; then for resizing you could break connections, take a backup, and restore the backup on a resized node?
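For the cache-in-Postgres idea above, Postgres's UNLOGGED tables are a natural fit: they skip the write-ahead log, so writes are cheaper, and the table is simply truncated after a crash, which is fine for cache data. A minimal sketch; the table name, schema, and DSN are illustrative:

```python
# Sketch of a cache table in Postgres using UNLOGGED, which trades crash
# durability for write speed: acceptable when losing the cache just means
# forcing users to log in again.
import psycopg2

DSN = "postgresql://user:password@host:5432/mydb"  # placeholder

DDL = """
CREATE UNLOGGED TABLE IF NOT EXISTS session_cache (
    key        text PRIMARY KEY,
    value      jsonb NOT NULL,
    expires_at timestamptz NOT NULL
);
"""

UPSERT = """
INSERT INTO session_cache (key, value, expires_at)
VALUES (%s, %s, now() + interval '30 minutes')
ON CONFLICT (key) DO UPDATE
SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at;
"""

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(UPSERT, ("session:42", '{"user_id": 7}'))
```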
Comment by wessorh 18 hours ago
asking for a friend who liked this space
Comment by ksec 18 hours ago
Comment by wackget 19 hours ago
*Edit:* It also fails to load other pages if you have JavaScript or XHR disabled.
Comment by HatchedLake721 19 hours ago
It feels like it went from "professional Stripe-level design that you admire and that inspires you" to just "hard-to-read black website", not sure what for.
(not fully functional) https://web.archive.org/web/20240811142248/https://planetsca...
Comment by dukepiki 17 hours ago
Comment by HatchedLake721 2 hours ago
Comment by mesmertech 19 hours ago
Comment by samdoesnothing 18 hours ago
Comment by heliumtera 19 hours ago
Comment by croemer 18 hours ago
Comment by buremba 18 hours ago
Comment by taw1285 20 hours ago
Comment by mjb 19 hours ago
- Aurora storage scales with your needs, meaning that you don't need to worry about running out of space as your data grows.
- Aurora will auto-scale CPU and memory based on the needs of your application, within the bounds you set. It does this without any downtime, or even dropping connections. You don't have to worry about choosing the right CPU and memory up-front, and for most applications you can simply adjust your limits as you go. This is great for applications that are growing over time, or for applications with daily or weekly cycles of usage.
The other Aurora option is Aurora DSQL. The advantages of picking DSQL are:
- A generous free tier to get you going with development.
- Scale-to-zero and scale-up, on storage, CPU, and memory. If you aren't sending any traffic to your database it costs you nothing (except storage), and you can scale up to millions of transactions per second with no changes.
- No infrastructure to configure or manage, no updates, no thinking about replicas, etc. You don't have to understand CPU or memory ratios, think about software versions, think about primaries and secondaries, or any of that stuff. High availability, scaling of reads and writes, patching, etc. is all built-in.
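On the "within the bounds you set" point, Aurora Serverless v2 capacity bounds are set when creating the cluster. A hedged boto3 sketch; the identifiers and region are placeholders and credential handling is omitted:

```python
# Sketch (boto3) of an Aurora Serverless v2 cluster with explicit capacity
# bounds. Identifiers, region, and password handling are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="example-cluster",
    Engine="aurora-postgresql",
    MasterUsername="postgres",
    ManageMasterUserPassword=True,  # let RDS keep the password in Secrets Manager
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,  # ACUs; lower bound for auto-scaling
        "MaxCapacity": 8.0,  # upper bound; raise later as the app grows
    },
)

# Serverless v2 capacity attaches to instances with the special
# "db.serverless" instance class.
rds.create_db_instance(
    DBInstanceIdentifier="example-instance-1",
    DBClusterIdentifier="example-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",
)
```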
Comment by mikkelam 19 hours ago
Comment by samlambert 20 hours ago
Comment by anoojb 20 hours ago
Wouldn't this introduce additional latency among other issues?
Comment by wrs 20 hours ago
If you aren’t hosting the app in the same AWS/GCP region then I still have the same question.
Comment by lab14 18 hours ago
Yes and no. In my AWS account I can explicitly pick an AZ (us-east-2a, us-east-2b or us-east-2c), but Availability Zones are not consistent between AWS accounts.
See https://docs.aws.amazon.com/ram/latest/userguide/working-wit...
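That doc is about AZ IDs, which are consistent across accounts even though AZ names are not. A small boto3 sketch that prints the name-to-ID mapping for your own account:

```python
# AZ *names* (us-east-2a/b/c) are shuffled per account, but AZ *IDs*
# (use2-az1, use2-az2, ...) are stable, so compare IDs when trying to
# co-locate resources across accounts.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(f'{az["ZoneName"]} -> {az["ZoneId"]}')
```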
Comment by rcrowley 20 hours ago
Comment by ShakataGaNai 20 hours ago
Comment by FancyFane 19 hours ago
I ask because we see it more often than not, and for that situation sharding the workload is the best answer. Why have one MySQL instance responding to requests when you could have 2, 4, 8 ... 128, etc. MySQL instances responding as a single database instance? They also have the ability to vertically scale each of the shards in that database as it's needed.
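To make the "many instances acting as one database" idea concrete, here's a toy hash-based routing sketch. This is only an illustration; real sharding systems (e.g. Vitess, which PlanetScale builds on for MySQL) also handle resharding, cross-shard queries, and transactions:

```python
# Toy illustration of routing rows to shards by hashing a key.
import hashlib

SHARD_DSNS = [  # hypothetical connection strings, one per shard
    "postgresql://host-shard0/app",
    "postgresql://host-shard1/app",
    "postgresql://host-shard2/app",
    "postgresql://host-shard3/app",
]

def shard_for(key: str, num_shards: int = len(SHARD_DSNS)) -> int:
    """Map a sharding key (e.g. a customer id) to a stable shard index."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

print(shard_for("customer:1234"))  # all of this customer's rows land on one shard
```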
Comment by carlm42 20 hours ago
Comment by buster 20 hours ago
Comment by rcrowley 20 hours ago
Comment by Hawkenfall 20 hours ago
Comment by ngalstyan4 21 hours ago
Would be curious to know what the underlying aws ec2 instance is.
Is each DB on a dedicated instance?
If not, are there per-customer iops bounds?
Comment by rcrowley 20 hours ago
Comment by samlambert 20 hours ago
Comment by orphea 20 hours ago
> $50
Looks like US only. Choosing Europe is +$10, Australia is +$20.
Comment by boundlessdreamz 19 hours ago
Comment by kelp 18 hours ago
Comment by unbelievably 20 hours ago
Comment by everfrustrated 19 hours ago
Comment by carlm42 20 hours ago
Comment by dig1 19 hours ago
Also, this is a shared server, not a truly dedicated one like you'd get with bare-metal providers. So calling it "Metal" might be a misleading marketing trick, but if you want someone to always blame and don't mind overpaying for that comfort, then the managed option might be the right thing.
Comment by unbelievably 20 hours ago
edit: my bad, that's the price for 256GB RAM.
Comment by carlm42 19 hours ago
Comment by krawcu 6 hours ago
Comment by tempest_ 20 hours ago
The reality is most databases are tiny as shit and most apps can tolerate the massive latency that the cloud provider DBs offer.
It is why it is sorta funny that we are rediscovering that non-network-attached storage is faster.
Comment by solatic 19 hours ago
That's $54,348/year, not including the cost of benefits, not including stock compensation. Let's say you reserve 20% for benefits and that comes out to $43,478.40 in salary.
Besides the benefit of not needing the management / communication overhead of hiring somebody, do you know any DBAs willing to take a full-time job for $43,478.40 in salary?
Comment by unbelievably 19 hours ago
Comment by solatic 2 hours ago
Comment by cheema33 18 hours ago
Apparently there are people who find this offering compelling. The lack of value is quite stunning to me.
Comment by bigTMZfan 19 hours ago
Comment by rcrowley 19 hours ago
Comment by vivzkestrel 20 hours ago
Comment by rcrowley 19 hours ago
Comment by Onavo 19 hours ago
How do cross-data-center nodes work?
Comment by rcrowley 19 hours ago
Comment by skeptrune 21 hours ago