Tindie store under "scheduled maintenance" for days

Posted by somemisopaste 10 hours ago


Comments

Comment by sd9 7 hours ago

> The goal of the current maintenance is to fix a lot of long-standing issues with the site. The underlying infrastructure was getting very fragile as technical debt accumulated over time. A team is working very hard right now to make sure that once the site is back up, it's on much better footing and will be solid and reliable for the long term. Despite the unfortunate amount of time this is taking, it will be a major benefit to the site in the long run.

If I were a developer there, I would be feeling pretty awful. Even minutes of downtime on the systems I’ve worked on gets my heart rate going.

It also feels like there’s a lot being left unsaid in this statement. Normally you would work on these things in parallel to production… so something is seriously wrong.

Comment by mey 5 hours ago

The scenarios I have taken extended downtime for: when an OLTP database needed a serious overhaul and it was cheaper to plan operational downtime than to risk losing data or inconsistent transactions; generational platform migrations up to complete system rewrites (something I am generally against, but that is its own soapbox); migrating from on-prem to cloud infra, which required design changes; and migrating from one DB technology to another (MySQL -> PostgreSQL). In all those cases, data integrity/consistency was the critical aspect.

In all those cases there is serious planning done before the migration: checklists, trial runs/validations, and validation procedures for the day of. If something isn't working, the leadership group evaluates the issue and decides rollback vs. go forward. Rollback also needs to be planned for and factored into your planned downtime window.

I agree with you, this wording implies they are making changes beyond the original plan. This could've been bad planning, a bad call on the day of, etc.

In one scenario, we _had_ to go forward while resolving several blockers on the fly. We had planned developer rotation shifts ahead of time, pulling people off the line after 8-12 hrs. At some point, you aren't thinking clearly under stress. I don't know how big the team over there is, but I hope they are pacing themselves during what I am sure is a horrible moment of crisis for them.

My advice to them: consider a rollback if needed/possible. Split responsibility between who is managing the process and who is dealing with specific problems. Focus on MVP. Don't try to _fix_ and replace at the same time; if something was broken business-wise before, log it in your bug tracker and deal with it later. Pull people away if needed to get rest. Keep upper management away from the people doing the work; have them talk only to the group handling the process management.

Edit: I am also making a good-faith assumption that this is planned and not an emergency response; either way, it doesn't change my general advice.

Comment by 0xfaded 4 hours ago

Conversely, if this is indeed the true motivation and management has accepted it, kudos to them. It sounds like the engineers said the situation was untenable and this was the cover they needed to fix it, and they got what they asked for.

Comment by sd9 3 hours ago

I don't know, it just doesn't feel very scheduled to me.

> I'm about to lose thousands of dollars by the end of Monday 20th because of the automatic shipping deadline on Tindie and it currently being down. I've tried contacting support multiple times but they are not helping. Please respond before my business fails!

https://mastodon.social/@thereminhero/116432503640568650

Comment by mikestorrent 4 hours ago

Right? Retail stores close for a few days for renovations and nobody has a heart attack.

Comment by expedition32 3 hours ago

Yeah but they HAVE to be finished on time because otherwise the supermarket manager will have a heart attack.

Comment by Retr0id 9 hours ago

I don't have the links handy but I believe there are some comments from staff on social media that give more details.

Edit: https://hackaday.social/@tindie/116427447318102919

https://hackaday.social/@tindie/116436988752373293

Comment by Aurornis 7 hours ago

The maker people I know have been migrating away from Tindie because it has felt like a sinking ship for a long time.

I really like the idea of Tindie, so I hope they can succeed. I don't understand what sequence of events led to this being such a large problem that they can't even keep their site online. The post says something vague about the engineering team hoping the migration work is close to finished, but I can't remember the last time an engineering team knocked an entire site out for days without being able to restore it during a failed migration. Are they outsourcing dev work to the type of agency that bills by the hour and perpetually churns out low-cost work, making their money in volume fixing their own code?

Comment by starkparker 7 hours ago

> The maker people I know have been migrating away from Tindie

To what? The only alternative I know of is Lectronz.

Comment by kennywinker 6 hours ago

Shopify, Etsy, Crowd Supply, a custom website. All have their problems; I'm not endorsing any. I sell on Tindie. Well, I don't sell much there, but I list on Tindie. Most of my sales come through my own store site.

Comment by serf 5 hours ago

That just resolves back to the original problem that Tindie solved: discoverability.

It's like saying people are fleeing ebay for Shopify. Yeah, I guess -- but that only really solves the merchant sales problem.

I buy from indie electronics shops directly when I can, but the problem is that I commonly discover those shops through Tindie. Word of mouth/Discord/etc. isn't nearly as great a tool as a searchable, refreshing index.

Comment by sixothree 3 hours ago

For myself at least, discoverability is a huge thing for tindie. I'll go there for something specific and pretty much every single time just poke around until I find something else too. It's kind of like shopping for clothes - I want a new shirt, but some fancy new pants can't hurt.

Comment by JohnMakin 7 hours ago

It can be as simple as a terraform apply wiping out huge swaths of the backend infra; getting that back can take on the order of days/weeks, depending on how disciplined you are.

Comment by the_biot 9 hours ago

You have to wonder why it's so hard to put that on their 503 error page. I suspect something's much more broken than they're letting on.

Comment by JohnMakin 5 hours ago

This would indicate that wherever they were hosting their site no longer exists. 503s even on pages that should mostly be static suggest the backend no longer exists, or whatever ingress they're using in front of it disappeared. As far as I can tell, every single page on their site is 503'ing.

Example of a response I see:

< x-cache: Error from cloudfront
< via: 1.1 bdf85d6d4811ab08c57841855a848f8a.cloudfront.net (CloudFront)
< x-amz-cf-pop: LAX54-P11
< x-amz-cf-id: nTQ-y1Ut3F-04jUCDM09ordCtj0CMkVmmtZTe__BtzEr1sMJu7rKaw==
< age: 76773
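Headers like these can be read mechanically. A quick sketch in plain Python (the parsing helpers and the staleness threshold are mine, for illustration, not any actual Tindie tooling) of scanning curl-style `< header: value` output for signs that a CDN is serving a stale error because the origin is gone:

```python
# Parse curl-style "< header: value" lines and flag signs of a dead origin
# behind a CDN. Illustrative only; thresholds are arbitrary.
def parse_headers(raw: str) -> dict:
    headers = {}
    for line in raw.splitlines():
        line = line.lstrip("< ").strip()
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip().lower()] = value.strip()
    return headers

def diagnose(headers: dict) -> list:
    findings = []
    # "Error from cloudfront" means CloudFront itself returned an error page,
    # not a cached copy of real content.
    if headers.get("x-cache", "").lower().startswith("error"):
        findings.append("CDN is serving an error response, not cached content")
    # A large Age header means this error was cached long ago: the origin
    # has been unreachable for at least that long.
    age = int(headers.get("age", "0"))
    if age > 3600:
        findings.append(f"response is {age // 3600}h old: origin down for a while")
    return findings

raw = """\
< x-cache: Error from cloudfront
< age: 76773"""
print(diagnose(parse_headers(raw)))  # both findings fire for the pasted response
```

With the `age: 76773` from the response above, the cached error is over 21 hours old, which lines up with a multi-day outage rather than a transient blip.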

Comment by JohnMakin 5 hours ago

They are putting out a lot of stuff where, to me, it's very obvious reading between the lines what led to this, because I've been brought in to clean up messes like this before:

>The goal of the current maintenance is to fix a lot of long-standing issues with the site. The underlying infrastructure was getting very fragile as technical debt accumulated over time. A team is working very hard right now to make sure that once the site is back up, it's on much better footing and will be solid and reliable for the long term. Despite the unfortunate amount of time this is taking, it will be a major benefit to the site in the long run.

They are saying it was "spring cleaning" or a migration that took out the site for days. "Infrastructure getting very fragile" reeks of bad or nonexistent ops practices, and probably very little or unreliable IaC, if any (I've seen shops get by for 10+ years by just clicking things in the console, until unfortunately it gets to this point).

This though, rubs me the wrong way:

> We want to offer a much better quality of service going forward. We understand that the lack of communication has been frustrating, and I have been closely watching social media and reporting the community's feelings up the chain, so your voices are being heard. The plan was not to have a long outage like this, but due to factors beyond the dev team's control, things have taken much longer than anticipated. Please be patient with us - I will keep updating here and on our other social media.

"Factors beyond the dev team's control." Sorry, no. If you have an ops team, you don't get to toss blame over the wall like that, and if you don't, you have no one to blame but yourselves. I feel bad for whoever the unofficial official ops dude is right now. These kinds of infrastructure "tech debt" whoopsies come from years of people just not giving a crap about doing things properly; it's never seen as important until it suddenly is. Hope they learn a lesson and properly hire an infrastructure guy. There's long been a persistent delusion in the pure dev world that they should be able to be completely agnostic to the hardware lying underneath their beautiful code. Ideally yes; in practice almost never, unless you come from a place that has the significant resources to make something nice like that, or are willing to pay out the azz for managed cloud services or licenses.

Comment by svnt 5 hours ago

It is entirely possible, especially in small companies in my experience, that "factors beyond the dev team's control" means a technical founder with severe myopia and decision fatigue who prevents "complexity" as they see it, which for them means everything you discuss here as being necessary.

Comment by JohnMakin 4 hours ago

100% agree, and I've seen this exact scenario play out.

Comment by ImPostingOnHN 4 hours ago

I didn't take "the dev team" to exclude ops. Ops folks are usually devs, too.

Comment by JohnMakin 4 hours ago

Often, but there are a lot of shops that make them entirely separate, siloed teams, and the symptoms are usually what I am describing here.

Most ops guys can do dev, the inverse is absolutely not true IME.

Comment by Jblx2 4 hours ago

How big of an operation is Tindie? Founder plus one other dev/ops/everything else guy?

Comment by chromacity 9 hours ago

Unfortunate. Tindie is (was?) a pretty unique marketplace. Amusingly, a lot of what they were selling was probably illegal due to FCC rules: for the most part, you can't sell electronics without EMI certification and "I'm just a hobbyist" is not an excuse. Kits get a bit of leeway, but finished products don't.

Before the tariffs, I noticed that Chinese companies were trying to undercut them. I've gotten multiple emails asking me to start selling my designs with China-based outlets: they would make the PCBs, assemble them, and pay me some money for every item sold.

Comment by dbl000 8 hours ago

Can you share more information about the undercutting? I've heard of places like Elecrow trying to incentivize people to sell via their platform/OEM service but it sounds like you've had people asking you to license your designs?

Comment by chromacity 8 hours ago

I never followed up, but I didn't read it as some serious IP licensing thing. It sounded like they'd come to the conclusion that they're making the stuff that's sold on Tindie anyway, so they might as well set up a website and ship directly to your customers.

Comment by the_axiom 8 hours ago

Free market is a good thing.

Comment by Permik 7 hours ago

It's good until some unregulated electronic device creates interference that makes some poor guy's pacemaker act up and kills him.

Comment by EtienneDeLyon 5 hours ago

As a RF expert, I can assure you that is not possible. And basic common sense should tell you why.

It's AM radio that gets interfered with.

Comment by kube-system 4 hours ago

It's not likely, but if you're an expert I'm sure you could think of a few ways it would be possible. The reason we give people with pacemakers a list of machines to avoid is definitely not to waste their time because there is no possible way any of those things could be dangerous to them.

Comment by chromacity 4 hours ago

I mean, more or less, we do. The NIH list includes cell phones, e-cigarettes, and headphones.

Comment by kayson 4 hours ago

As an RF expert I can assure you that I could create a device to wirelessly interfere with a pacemaker. A pathological one, maybe, but the point remains: regulation is needed.

Comment by fluoridation 3 hours ago

The question is whether such interference could be created by a device as a by-product of its normal operation, not by a weapon that's intended to cause harm.

Comment by jdiff 5 hours ago

Blind dogma is rarely a good thing. A free market is not a virtue or end goal in itself, but a means to other ends.

Comment by croes 6 hours ago

Every freedom has limits

Comment by dbl000 8 hours ago

Around Sunday/Monday last week, right before it went down, I noticed the site was super buggy and failing to add things to the cart. I emailed support and got a "we are checking the issue". Since it went down, all I've heard from support is "Please be patient. Tindie will be back up soon as we are currently performing maintenance. At this time, we do not have an estimated timeframe to provide."

The fact that it wasn't communicated at all beforehand, and that there's no timeframe, makes me think this was probably an ops screw-up.

Comment by ottah 7 hours ago

I see this a lot with small independent sites with big user bases. Instead of being honest, they hide mistakes behind maintenance or blame them on hackers.

Comment by iamnothere 9 hours ago

There are a number of things on Tindie that I have been unable to find anywhere else at any price. (Mostly small batch bespoke electronics.) I hope they figure this out.

Comment by sixothree 2 hours ago

As much as people want to be angry about this happening, the value of the thing to the maker community is too great. I hope they can figure this out.

Comment by NDlurker 9 hours ago

I've bought some cool stuff off Tindie. My latest purchase was this set of earrings that alert when you're near a Flock camera

https://colonelpanic.tech/#products

Comment by rozab 7 hours ago

This really tickled me; I wasn't expecting them to just be a pair of ESP32 dev boards you attach to your ears.

Comment by shrubble 5 hours ago

If you didn’t inform people ahead of time, it’s probably not “scheduled”…

Comment by kibwen 5 hours ago

Scheduledn't maintenance.

Comment by luma 5 hours ago

The site has been on life support for a decade: ownership has changed hands a few times, basic features promised 10 years ago never shipped, the API is half implemented (e.g., you can download an order but you cannot mark it shipped), and they still have no mechanism to collect state sales tax, nor will they submit a 1099 as required by US tax law. I jumped ship 5 years ago when this became too much of a problem, and not a single thing has changed in those 5 years.

Tindie was a great place for a hacker to sell a few widgets back in the day, but legal requirements have changed since then, and Tindie hasn't changed a line of code in at least 10 years.

Comment by ottah 7 hours ago

Concerning. A professional development team should have been able to manage this switch with minimal to no downtime. It makes me wonder what other mistakes they're making. I'm reluctant to trust them with my payment information in the future.

Comment by eterm 7 hours ago

Not everyone has seamless blue/green deployment.

However, any downtime over an hour or two screams "migration gone wrong" to me.

Otherwise, wouldn't you just roll back to get the site up, then come back at it and try again later?
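Rolling back quickly usually depends on having set deploys up for it in advance. A minimal sketch in plain Python of one common arrangement (a symlink-per-release layout; the directory names are hypothetical and none of this is Tindie's actual setup), where rollback is a single atomic re-point of a "current" link:

```python
import os
import tempfile

def switch_release(release_dir: str, link: str) -> None:
    """Atomically point `link` at `release_dir`. Rollback is the same call
    with the previous release's directory."""
    tmp = link + ".tmp"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(release_dir, tmp)
    # os.replace is an atomic rename on POSIX: readers never see a broken link.
    os.replace(tmp, link)

# Demo in a scratch directory: two releases, deploy the new one, roll back.
root = tempfile.mkdtemp()
os.chdir(root)
for ver, content in [("v41", "old"), ("v42", "new")]:
    os.makedirs(os.path.join("releases", ver))
    with open(os.path.join("releases", ver, "VERSION"), "w") as f:
        f.write(content)

switch_release("releases/v42", "current")   # deploy v42
switch_release("releases/v41", "current")   # rollback in one step
print(open("current/VERSION").read())       # -> old
```

The point of the design is that the slow, risky work (building the new release) happens off to the side, so reverting never takes longer than one rename; outages measured in days suggest nothing like this was in place.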

Comment by mattmanser 6 hours ago

In this day and age all it takes is one person who knows what they're doing.

That means they've got zero people who know what they're doing.

Comment by gedy 6 hours ago

So many fairly popular apps, SaaS products, etc. are at skeleton-crew staffing levels. It'll probably get worse with vibe coding. Though they'll probably launch Claude Ops, etc., now that I think about it.

Comment by kordlessagain 9 hours ago

Who is Tindie?

Comment by dlgeek 9 hours ago

It's like Etsy for small-scale electronics - if you build a cool, niche electronic device as an individual, Tindie is a marketplace to sell in low volume (possibly as a kit).

Comment by dust42 5 hours ago

Tinder for indie (hardware) devs and their customers. I.e. a webshop for indie devs who sell small series of niche hardware.

Comment by ZephyrBlu 7 hours ago

Scheduled maintenance in 2026 is insane

Comment by ThatMedicIsASpy 4 hours ago

The biggest example that comes to mind would be Steam.

Comment by nozzlegear 3 hours ago

Blizzard still brings World of Warcraft down every Tuesday for maintenance. It's down right now to apply a new content patch, which they estimated would take 8 hours.

https://us.support.blizzard.com/en/help/article/358479

Comment by jasonjmcghee 5 hours ago

I wonder if someone found an exploit of some sort and they are figuring out how to prevent it?

Either that or catastrophic data issues?

Otherwise, so much downtime at once is pretty crazy.

Comment by leros 8 hours ago

They must have really bungled something if they can't roll back and get the site operational again.

Comment by systems_glitch 7 hours ago

Yeah this sucks, I have a bunch of hobbyist orders stuck in limbo since last week -- customers have paid, but I can't pull the orders down even through the API.

I really like Tindie as a platform and have been using it since nearly the beginning...but I'd have lost the contract if I pulled this level of nonsense on a customer's production application.

Comment by colechristensen 9 hours ago

:( I really like Tindie and what they're doing

Comment by fortyseven 6 hours ago

Glad I used a privacy.com burner when I bought from them. Quite a while later, I found a declined purchase for pizza on the now long-deactivated burner card I had used to purchase through them.
