Garage – An S3 object store so reliable you can run it outside datacenters
Posted by ibobev 1 day ago
Comments
Comment by adamcharnock 1 day ago
We’ve recently done some fairly extensive testing internally and found that Garage is somewhat easier to deploy than our existing MinIO setup, but not as performant at high throughput. IIRC we could push about 5 gigabits of (not small) GET requests out of it, but something kept it from reaching the 20-25 gigabits (on a 25G NIC) that MinIO could reach (also 50k STAT requests/s, over 10 nodes).
I don’t begrudge it that. I get the impression that Garage isn’t necessarily focussed on this kind of use case.
---
In addition:
Next time we come to this we are going to look at RustFS [1], as well as Ceph/Rook [2].
We can see we're going to have to move away from MinIO in the foreseeable future. My hope is that the alternatives get a boost of interest given the direction MinIO is now taking.
[0]: https://news.ycombinator.com/item?id=46140342
[1]: https://rustfs.com/
[2]: https://rook.io/
Comment by johncolanduoni 19 hours ago
> RustFS is a high-performance, distributed object storage software developed using Rust, the world's most popular memory-safe language.
I’m actually something of a Rust booster, and have used it professionally more than once (including working on a primarily Rust codebase for a while). But it’s hard to take a project’s docs seriously when it describes Rust as “the world’s most popular memory-safe language”. Java, JavaScript, Python, even C# - these all blow it out of the water in popularity and are unambiguously memory safe. I’ve had a lot more segfaults in Rust dependencies than I have in Java dependencies (though both are minuscule in comparison to e.g. C++ dependencies).
Comment by riedel 2 hours ago
[0]
qed
Comment by tormeh 36 minutes ago
Comment by woodruffw 3 hours ago
Comment by PunchyHamster 12 hours ago
Comment by teiferer 16 hours ago
Comment by limagnolia 5 hours ago
Comment by b112 18 hours ago
Then things like this appear:
https://www.phoronix.com/news/First-Linux-Rust-CVE
And I'm all warm and feeling schadenfreude.
Hearing "yes, it's safer" and yet not "everyone on the planet not using Rust is a moron!!!" is a nice change.
Frankly, the whole cargo side of Rust has the same issues that node has, and that's silly beyond comprehension. Memory safety is almost a non-concern compared to installing random, unvetted stuff. Cargo vet seems barely helpful here.
I'd want any language caring about security and code safety, to have a human audit every single diff, on every single package, and host those specific crates on locked down servers.
No, I don't care about "but that will slow down development and change!". Security needs to be first and front.
And until the Rust community addresses this, and its requirement for 234234 packages, it's a toy.
And yes, it can be done. And no, it doesn't require money. Debian's been doing just this very thing for decades, on a far, far, far larger scale. Debian developers gatekeep. They package. They test and take bug reports on specific packages. This is a solved problem.
Caring about 'memory safe!' is grand, but ignoring the rest of the ecosystem is absurd.
Comment by necovek 16 hours ago
I've long desired this approach (backporting security fixes) to be commercialized instead of the always-up-to-date-even-if-incompatible push, and beyond Red Hat, SUSE, and Canonical (with LTS), nobody had been doing it for product teams until recently (Chainguard seems to be doing this).
But, if you ignore speed, you also fail: others will build less secure products and conquer the market, and your product has no future.
The real engineering trick is to be fast and build new things, which is why we need supply chain commoditized stewards (for a fee) that will solve this problem for you and others at scale!
Comment by PunchyHamster 12 hours ago
which is a bit silly considering that if you want fast, most packages land in testing/unstable pretty quickly.
Comment by necovek 11 hours ago
I believe the sweet spot is Debian-like stable as the base platform to build on top of, and then commercial-support in a similar way for any dependencies you must have more recent versions on top.
Comment by PunchyHamster 10 hours ago
If you need latest packages, you have to do it anyway.
> I believe the sweet spot is Debian-like stable as the base platform to build on top of, and then commercial-support in a similar way for any dependencies you must have more recent versions on top.
That is, if the company can build packages properly. Also, too-old OS deps sometimes throw a wrench in the works.
Though frankly, "latest Debian Testing" has a far smaller chance of breaking something than "latest piece of software that couldn't figure out how to upstream to Debian".
Comment by necovek 7 hours ago
The latter has a huge maintenance burden; the former is, as I said already, the sweet spot. (And let's not talk about combining stable/testing: any machine I tried that on quickly got into a non-upgradeable mess.)
I am not saying it is easy, which is exactly why I think it should be a commercial service that you pay for, so that it can actually survive.
Comment by dotancohen 9 hours ago
> supply chain commoditized stewards (for a fee)
I agree with this, but the open source licenses allow anyone who purchases a stewarded implementation to distribute it freely. I would love to see a software distribution model in which we could pay for vetted libraries, from bodies that we trust, which would become FOSS after a time period - even a month would be fine.
There are flaws in my argument, but it is a safer option than the current normal practices.
Comment by sporkland 17 hours ago
Comment by b112 17 hours ago
I guess the takeaway is that, doubly so, trusting Rust code to be memory safe simply because it is Rust isn't sensible. All its protections can simply be invalidated, and an end user would never know.
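A contrived sketch of that point (a hypothetical snippet, not taken from any real crate): a single `unsafe` block is enough to reintroduce a use-after-free, and a downstream user relying on "it's written in Rust" would never know.

    fn main() {
        let v = vec![1u8, 2, 3];
        let p = v.as_ptr();
        drop(v); // the allocation behind `p` is freed here

        // Use-after-free: the compiler accepts this because the dereference
        // is wrapped in `unsafe`, so the language's guarantees no longer apply.
        let x = unsafe { *p };
        println!("{x}");
    }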
Comment by teiferer 16 hours ago
Comment by SEJeff 15 hours ago
Comment by nine_k 1 day ago
But it might be interesting to see where the time is spent. I suspect they may be doing fewer things in parallel than MinIO, but maybe it's something entirely different.
Comment by PunchyHamster 12 hours ago
And it's also already rigged for a rug-pull:
https://github.com/rustfs/rustfs/blob/main/rustfs/src/licens...
Comment by evil-olive 5 hours ago
from https://rustfs.com/, if you click Documentation, it takes you to their main docs site. there's a nav header at the top; if you click Docs there... it 404s.
"Single Node Multiple Disk Installation" is a 404. ditto "Terminology Explanation". and "Troubleshooting > Node Failure". and "RustFS Performance Comparison".
on the 404 page, there's a "take me home" button...which also leads to a 404.
Comment by __turbobrew__ 1 day ago
Comment by breakingcups 1 day ago
Comment by __turbobrew__ 19 hours ago
Comment by adamcharnock 1 day ago
Comment by hardwaresofton 1 day ago
Comment by Roark66 13 hours ago
At work I'm typically a consumer of such services from large cloud providers. I read in a few places how "difficult" it is, how you need "4GB minimum RAM for most services" and how "friends do not let friends run Ceph below 10Gb".
But this setup runs on a non-dedicated 2.5Gb interface (there is VLAN segmentation and careful QoSing).
My benchmarks show I'm primarily network latency and bandwidth limited. By definition, you can't get better than that.
There were many factors why I chose Ceph and not Garage, Seaweed, or MinIO. (One of the biggest is that Ceph kills two birds with one stone for me: block and object.)
Comment by PunchyHamster 12 hours ago
Also, in our experience the docs outright lie about Ceph's OSD memory usage; we've seen double or more what the docs claim (8-10GB instead of 4).
Comment by NL807 23 hours ago
I wouldn't be surprised if this will be fixed sometime in the future.
Comment by throwaway894345 20 hours ago
My favorite thing about all of this is that I had just invested a ton of time in understanding MinIO and its Kubernetes operator and got everything into a state that I felt good about. I was nearly ready to deploy it to production when the announcement was released that they would not be supporting it.
I’m somewhat surprised that no one is forking it (or I haven’t heard about any organizations of consequence stepping up anyway) instead of all of these projects rebuilding it from scratch.
Comment by Emjayen 18 hours ago
Comment by PunchyHamster 12 hours ago
Comment by fabian2k 1 day ago
> For the metadata storage, Garage does not do checksumming and integrity verification on its own, so it is better to use a robust filesystem such as BTRFS or ZFS. Users have reported that when using the LMDB database engine (the default), database files have a tendency of becoming corrupted after an unclean shutdown (e.g. a power outage), so you should take regular snapshots to be able to recover from such a situation.
It seems like you can also use SQLite, but a default database that isn't robust against power failure or crashes seems surprising to me.
Comment by lxpz 1 day ago
Comment by __padding 48 minutes ago
Comment by agavra 1 day ago
It's built specifically to run on object storage. It currently relies on the `object_store` crate, but we're considering OpenDAL instead, so if Garage works with those crates (I assume it does if it's S3-compatible) it should just work OOTB.
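For what it's worth, here is a minimal sketch of what that would look like with the `object_store` crate pointed at an S3-compatible endpoint. The endpoint, region, bucket, and credentials are placeholders (Garage's docs list 3900 as its default S3 API port), and the exact `put` signature varies a bit between `object_store` versions:

    use bytes::Bytes;
    use object_store::{aws::AmazonS3Builder, path::Path, ObjectStore};

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Placeholder endpoint/credentials for a local S3-compatible node
        // (e.g. a Garage instance); swap in your own values.
        let store = AmazonS3Builder::new()
            .with_endpoint("http://localhost:3900")
            .with_allow_http(true)
            .with_region("garage")
            .with_bucket_name("my-bucket")
            .with_access_key_id("GK-example")
            .with_secret_access_key("example-secret")
            .build()?;

        let path = Path::from("hello.txt");
        store.put(&path, Bytes::from_static(b"hello").into()).await?;
        let data = store.get(&path).await?.bytes().await?;
        println!("read back {} bytes", data.len());
        Ok(())
    }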
Comment by evil-olive 5 hours ago
Comment by johncolanduoni 19 hours ago
It’s worth noting too that B+ tree databases are not a fantastic match for ZFS - they usually require extra tuning (block sizes, other stuff like how WAL commits work) to get performance comparable to XFS/ext4. LSMs on the other hand naturally fit ZFS’s CoW internals like a glove.
Comment by fabian2k 1 day ago
Checksumming detects corruption after it has happened. A database like Postgres will simply notice it was not cleanly shut down and put the DB into a consistent state by replaying the write-ahead log on startup. So that is kind of my default expectation for any DB that handles data that isn't ephemeral or easily regenerated.
But I also likely have the wrong mental model of what Garage does with the metadata, as I wouldn't have expected it to ever be limited by SQLite.
Comment by lxpz 1 day ago
We do recommend SQLite in our quick-start guide to set up a single-node deployment for small/moderate workloads, and it works fine. The "real world deployment" guide recommends LMDB because it gives much better performance (with the current status of Garage, not to say that this couldn't be improved), and the risk of critical data loss is mitigated by the fact that such a deployment would use multi-node replication, meaning that the data can always be recovered from another replica if one node is corrupted and no snapshot is available. Maybe this should be worded better; I can see that the alarmist wording of the deployment guide is creating quite a debate, so we probably need to make these facts clearer.
We are also experimenting with Fjall as an alternate KV engine based on LSM, as it theoretically has good speed and crash resilience, which would make it the best option. We are just not recommending it by default yet, as we don't have much data to confirm that it lives up to these expectations.
Comment by BeefySwain 1 day ago
Comment by lxpz 1 day ago
Comment by srcreigh 1 day ago
If you use WITHOUT ROWID, you traverse only the BLOB->data tree.
Looking up lexicographically similar keys gets a huge performance boost since SQLite can scan a B-tree node and the data is contiguous. Your current implementation is chasing pointers to random locations in a different B-tree.
I'm not sure exactly whether the on-disk size would get smaller or larger. It probably depends on the key size and value size compared to the 64-bit rowids. This is probably a well-studied question you could find the answer to.
[1]: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/4efc8...
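For readers who haven't used the feature, here is a hypothetical sketch of the clustered layout being suggested, via the `rusqlite` crate (the table and column names are made up, not Garage's actual schema):

    use rusqlite::Connection;

    fn main() -> rusqlite::Result<()> {
        let db = Connection::open("kv.sqlite")?;

        // WITHOUT ROWID stores the value in the leaves of the key's own B-tree,
        // so lookups and range scans over nearby keys touch one tree instead of
        // key-index -> rowid -> table-tree.
        db.execute_batch(
            "CREATE TABLE IF NOT EXISTS kv (
                 k BLOB PRIMARY KEY,
                 v BLOB NOT NULL
             ) WITHOUT ROWID;",
        )?;

        db.execute(
            "INSERT OR REPLACE INTO kv (k, v) VALUES (?1, ?2)",
            (&b"bucket/object/key"[..], &b"metadata bytes"[..]),
        )?;
        Ok(())
    }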
Comment by asa400 2 hours ago
Comment by lxpz 1 day ago
Comment by tensor 1 day ago
Comment by rapnie 1 day ago
Comment by skrtskrt 1 day ago
Comment by lxpz 1 day ago
Comment by __turbobrew__ 1 day ago
Comment by patmorgan23 1 day ago
Comment by VerifiedReports 19 hours ago
Comment by kqr 18 hours ago
Comment by VerifiedReports 7 hours ago
A used-car lot
A value-added tax
A key-based access system
When you have two exclusive options, two sides to a situation, or separate things, you separate them with a slash: An on/off switch
A win/win situation
A master/slave arrangement
Therefore a key-value store and a key/value store are quite different.
Comment by kqr 6 hours ago
It's true that key–value store shouldn't be written with a hyphen. It should be written with an en dash, which is used "to contrast values or illustrate a relationship between two things [... e.g.] Mother–daughter relationship"
https://en.wikipedia.org/wiki/Dash#En_dash
I just didn't want to bother with typography at that level of pedanticism.
Comment by VerifiedReports 5 hours ago
"...the slash is now used to represent division and fractions, as a date separator, in between multiple alternative or related terms"
-Wikipedia
And what is a key/value store? A store of related terms.
And if you had a system that only allowed a finite collection of key values, where might you put them? A key-value store.
Comment by kqr 5 hours ago
Comment by abustamam 19 hours ago
Comment by VerifiedReports 7 hours ago
Comment by DonHopkins 9 hours ago
Comment by yupyupyups 22 hours ago
Standard filesystems such as ext4 and xfs don't have data checksumming, so you'll have to rely on another layer to provide integrity. Regardless, that's not garage's job imo. It's good that they're keeping their design simple and focus their resources on implementing the S3 spec.
Comment by igor47 1 day ago
Comment by dsvf 1 day ago
Comment by archon810 1 day ago
Comment by mbreese 1 day ago
Comment by lxpz 1 day ago
Comment by nijave 21 hours ago
LMDB mode also runs with flush/syncing disabled
Comment by moffkalast 1 day ago
If you really live somewhere with frequent outages, buy an industrial drive that has a PLP rating. Or get a UPS, they tend to be cheaper.
Comment by crote 1 day ago
As I understood it, the capacitors on datacenter-grade drives are there to give the drive more flexibility: they allow it to issue a successful write response for cached data, since the capacitor guarantees that even with a power loss the write will still finish. For all intents and purposes the data has been persisted, so an fsync can return without having to wait on the actual flash itself, which greatly increases performance. Have I just completely misunderstood this?
Comment by unsnap_biceps 1 day ago
https://documents.westerndigital.com/content/dam/doc-library...
Comment by toomuchtodo 1 day ago
Comment by patmorgan23 1 day ago
Comment by Aerolfos 1 day ago
That doesn't even help if fsync() doesn't do what developers expect: https://danluu.com/fsyncgate/
I think this was the blog post that had a bunch more stuff that can go wrong too: https://danluu.com/deconstruct-files/
But basically fsync itself (sometimes) has dubious behaviour, then the OS layers on top of the kernel handle it dubiously, and even on top of that most databases can ignore fsync errors (and lie that the data was written properly).
So... yes.
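For context, the pattern the fsyncgate discussion argues for is roughly the following (a standard-library-only sketch with a made-up file name): surface the sync error instead of swallowing it, and don't assume retrying the fsync makes the data durable.

    use std::fs::OpenOptions;
    use std::io::Write;

    fn append_record(record: &[u8]) -> std::io::Result<()> {
        let mut f = OpenOptions::new()
            .create(true)
            .append(true)
            .open("wal.log")?;
        f.write_all(record)?;

        // sync_all maps to fsync. The fsyncgate point: if this fails, the dirty
        // pages may already have been dropped from the page cache, so retrying
        // proves nothing -- treat the write as lost.
        f.sync_all()?;
        Ok(())
    }

    fn main() {
        if let Err(e) = append_record(b"record\n") {
            eprintln!("write not durable, must not be acknowledged: {e}");
        }
    }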
Comment by Nextgrid 1 day ago
Unfortunately they do: https://news.ycombinator.com/item?id=38371307
Comment by btown 1 day ago
Comment by Nextgrid 1 day ago
Yes, otherwise those drives wouldn't work at all and would have a 100% warranty return rate. The reason they get away with it is that the misbehavior is only a problem in a specific edge-case (forgetting data written shortly before a power loss).
Comment by unsnap_biceps 1 day ago
Comment by SomaticPirate 1 day ago
https://www.repoflow.io/blog/benchmarking-self-hosted-s3-com... was useful.
RustFS also looks interesting but for entirely non-technical reasons we had to exclude it.
Anyone have any advice for swapping this in for Minio?
Comment by dpedu 1 day ago
https://github.com/versity/versitygw
I am also curious how Ceph S3 gateway compares to all of these.
Comment by skrtskrt 1 day ago
They just completely swapped that service out of the stack and wrote one in Go because of how much better the concurrency management was, and Ceph's C++ team and codebase were too resistant to change.
Comment by jiqiren 1 day ago
Comment by zipzad 1 day ago
Comment by chrislusf 22 hours ago
Why skip SeaweedFS? It ranks #1 on all benchmarks and has a lot of features.
Comment by meotimdihia 21 hours ago
Comment by magicalhippo 15 hours ago
Not a concern for many use-cases, just something to be aware of as it's not a universal solution.
[1]: https://github.com/seaweedfs/seaweedfs?tab=readme-ov-file#st...
Comment by chrislusf 3 hours ago
Comment by magicalhippo 2 hours ago
Comment by ted_dunning 15 hours ago
Comment by Implicated 1 day ago
Able/willing to expand on this at all? Just curious.
Comment by misnome 14 hours ago
Otherwise, the built-in admin in a single executable was nice, as was the support for tiered storage, but single-node parallel write performance was pretty unimpressive and started throwing strange errors (investigating which led to the AI ticket discovery).
Comment by NitpickLawyer 1 day ago
Comment by lima 1 day ago
I'm not sure if it even has any sort of cluster consensus algorithm? I can't imagine it not eating committed writes in a multi-node deployment.
Garage and Ceph (well, radosgw) are the only open source S3-compatible object stores that have undergone serious durability/correctness testing. Anything else will most likely eat your data.
Comment by dewey 1 day ago
Comment by NitpickLawyer 1 day ago
> Beijing Address: Area C, North Territory, Zhongguancun Dongsheng Science Park, No. 66 Xixiaokou Road, Haidian District, Beijing
> Beijing ICP Registration No. 2024061305-1
Comment by dewey 1 day ago
Comment by scottydelta 1 day ago
Comment by klooney 1 day ago
Comment by thhck 1 day ago
Comment by codethief 1 day ago
Comment by isoprophlex 1 day ago
Comment by self_awareness 17 hours ago
Comment by PunchyHamster 12 hours ago
* No lifecycle management of any kind - if you're using it for backups you can't set "don't delete versions for 3 months", so if anyone gets hold of your key, your backups are gone. I relied on MinIO's lifecycle management for that, but it's a feature missing in Garage (and, to be fair, most other S3 implementations).
* No automatic mirroring (if you want a second copy in something other than Garage, or just don't want to run a cluster but rather have more independent nodes).
* ACLs for access are VERY limited - you can't make a key that can access only a sub-path, and you can't make a "master key" (AFAIK, couldn't find an option) that can access all the buckets, so the previous point is also harder - I can't easily use rclone to mirror the entire instance somewhere else unless I write a script iterating over buckets and adding them bucket by bucket to the key's ACL.
* Web hosting features are extremely limited, so you won't be able to, say, set CORS headers for a bucket.
* No ability to set keys - you can only generate one inside Garage or import a Garage-formatted one - which means you can't just migrate the storage itself; you have to re-generate every key. It also makes automation harder: with MinIO you can pre-generate a key and feed it to clients and to the minio key command, whereas here you have to do the dance of "generate with tool" -> "scrape and put into DB" -> put onto clients.
Overall I like the software a lot, but if you have a setup that uses those features, beware.
Comment by coldtea 12 hours ago
If someone gets a hold of your key, can't they also just change your backup deletion policy, even if it supported one?
Comment by PunchyHamster 10 hours ago
MinIO has full-on ACLs, so you can just create a key that can only write/read but not change any settings like that.
So you just need to keep the "master key" that you use for setup away from potentially vulnerable devices, the "backup key" doesn't need those permissions.
Comment by topspin 1 day ago
Garage looks really nice: I've evaluated it with test code and benchmarks and it looks like a winner. Also, very straightforward deployment (self contained executable) and good docs.
But no tags on objects is a pretty big gap, and I had to shelve it. If Garage folk see this: please think on this. You obviously have the talent to make a killer application, but tags are table stakes in the "cloud" API world.
Comment by lxpz 1 day ago
Comment by topspin 1 day ago
I really, really appreciate that Garage accommodates running as a single node without workarounds or special configuration yielding some kind of degraded state. Despite the single-minded focus on distributed operation you no doubt hear endlessly (as seen among some comments here), there are, in fact, traditional use cases where someone will be attracted to Garage only for the API compatibility, and where they will achieve availability in production sufficient to their needs by means other than clustering.
Comment by VerifiedReports 19 hours ago
Comment by topspin 18 hours ago
Arbitrary name+value pairs attached to S3 objects and buckets, and readily available via the S3 API. Metadata, basically. AWS has some tie-ins with permissions and other features, but tags can be used for any purpose. You might encode video multiple times at different bitrates, and store the rate in a tag on each object, for example. Tags are an affordance used by many applications for countless purposes.
Comment by VerifiedReports 6 hours ago
Comment by ai-christianson 1 day ago
It's a really cool system for hyper-converged architectures, where storage requests can pull data from the local machine and only hit the network when needed.
Comment by singpolyma3 1 day ago
Comment by Powdering7082 1 day ago
Comment by munro 1 day ago
Comment by lxpz 1 day ago
Erasure coding is another debate; for now we have chosen not to implement it, but I would personally be open to having it supported in Garage if someone codes it up.
Comment by hathawsh 1 day ago
Comment by Dylan16807 1 day ago
Comment by faizshah 1 day ago
Comment by supernes 1 day ago
Comment by JonChesterfield 1 day ago
Comment by lxpz 1 day ago
The assumption Garage makes, which is well-documented, is that of 3 replica nodes, only 1 will be in a crash-like situation at any time. With 1 crashed node, the cluster is still fully functional. With 2 crashed nodes, the cluster is unavailable until at least one additional node is recovered, but no data is lost.
In other words, Garage makes a very precise promise to its users, which is fully respected. Database corruption upon power loss enters in the definition of a "crash state", similarly to a node just being offline due to an internet connection loss. We recommend making metadata snapshots so that recovery of a crashed node is faster and simpler, but it's not required per se: Garage can always start over from an empty database and recover data from the remaining copies in the cluster.
To talk more about concrete scenarios: if you have 3 replicas in 3 different physical locations, the assumption of at most one crashed node is pretty reasonable; it's quite unlikely that 2 of the 3 locations will be offline at the same time. Concerning data corruption on a power loss, the probability of losing power at 3 distant sites at the exact same time with the same data in the write buffers is extremely low, so I'd say in practice it's not a problem.
Of course, this all implies a Garage cluster running with 3-way replication, which everyone should do.
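A toy illustration of that failure arithmetic (not Garage code, just the 2-of-3 quorum implied by the description above):

    // With a replication factor of 3 and a write quorum of 2, one crashed node
    // leaves the cluster fully functional; with two crashed nodes, writes stop
    // until a node comes back, but acknowledged data still exists on the
    // surviving replica.
    fn write_succeeds(replicas: u32, crashed: u32, write_quorum: u32) -> bool {
        replicas - crashed >= write_quorum
    }

    fn main() {
        let (replicas, quorum) = (3u32, 2u32);
        for crashed in 0..=replicas {
            println!(
                "crashed = {crashed}: writes {}",
                if write_succeeds(replicas, crashed, quorum) {
                    "reach a quorum"
                } else {
                    "are unavailable"
                }
            );
        }
    }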
Comment by JonChesterfield 1 day ago
Comment by lxpz 1 day ago
Comment by jiggawatts 1 day ago
Comment by lxpz 1 day ago
Comment by Dylan16807 1 day ago
If it's just the write buffer at risk, that's fine. But the chance of overlapping power loss across multiple sites isn't low enough to risk all the existing data.
Comment by InitialBP 1 day ago
Comment by eduardogarza 1 day ago
Previously I used LocalStack S3, but ultimately didn't like that persistence isn't available in the OSS version. MinIO OSS is apparently no longer maintained? Also looked at SeaweedFS and RustFS, but from a quick read into them this one was the easiest to set up.
Comment by chrislusf 22 hours ago
Just run "weed server -s3 -dir=..." to have an object store.
Comment by eduardogarza 20 hours ago
Comment by awoimbee 1 day ago
Comment by ianopolous 13 hours ago
https://garagehq.deuxfleurs.fr/blog/2022-ipfs/
Let's talk!
Comment by tenacious_tuna 19 hours ago
Comment by apawloski 1 day ago
Comment by lxpz 1 day ago
Conditional writes: no, we can't do it with CRDTs, which are the core of Garage's design.
Comment by skrtskrt 1 day ago
https://dd.thekkedam.org/assets/documents/publications/Repor... http://www.bailis.org/papers/ramp-sigmod2014.pdf
Comment by lxpz 1 day ago
Comment by skrtskrt 1 day ago
Comment by wyattjoh 1 day ago
Comment by agwa 1 day ago
Comment by codethief 1 day ago
Comment by k__ 23 hours ago
Does anyone know a good open source S3 alternative that's easily extendable with custom storage backends?
For example, AWS offers IA and Glacier in addition to the defaults.
Comment by onionjake 23 hours ago
Comment by yupyupyups 21 hours ago
This is used for ransomware-resistant backups.
Comment by allanrbo 1 day ago
Comment by lxpz 1 day ago
Syncthing will synchronize a full folder between an arbitrary number of machines, but you still have to access this folder one way or another.
Garage provides an HTTP API for your data, and handles internally the placement of this data among a set of possible replica nodes. But the data is not in the form of files on disk like the ones you upload to the API.
Syncthing is good for, e.g., synchronizing your documents or music collection between computers. Garage is good as a storage service for back-ups with e.g. Restic, for media files stored by a web application, for serving personal (static) web sites to the Internet. Of course, you can always run something like Nextcloud in front of Garage and get folder synchronization between computers somewhat like what you would get with Syncthing.
But to answer your question, yes, Garage only provides a S3-compatible API specifically.
Comment by sippeangelo 1 day ago
Comment by Eikon 1 day ago
Comment by chrislusf 22 hours ago
Comment by ekjhgkejhgk 1 day ago
Comment by BOOSTERHIDROGEN 14 hours ago
Comment by doctorpangloss 1 day ago
This is the reliability question, no?
Comment by lxpz 1 day ago
https://archive.fosdem.org/2024/schedule/event/fosdem-2024-3...
Slides are available here:
https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/4efc8...