Tindie store under "scheduled maintenance" for days
Posted by somemisopaste 10 hours ago
Comments
Comment by sd9 7 hours ago
If I were a developer there, I would not be feeling good. Just minutes of downtime on the systems I’ve worked on gets my heart rate going.
It also feels like there’s a lot being left unsaid in this statement. Normally you would work on these things in parallel to production… so something is seriously wrong.
Comment by mey 5 hours ago
In all those cases there is serious planning done before the migration: checklists, trial runs/validations, and validation procedures for the day of. If something isn't working, the leadership group evaluates the issue and determines rollback vs. go-forward. Rollback also needs to be planned for, and your planned downtime window should be considered.
I agree with you, this wording implies they are making changes after this change. This could've been bad planning, a bad call on the day of, etc.
In one scenario, we _had_ to go forward while resolving several blockers on the fly. We had planned developer rotation shifts ahead of time, pulling people off the line after 8-12 hrs. At some point, you aren't thinking clearly under stress. I don't know how big the team over there is, but I hope they are pacing themselves during what I am sure is a horrible moment of crisis for them.
My advice to them is: consider a rollback if needed/possible. Split responsibility between whoever is managing the process and those dealing with specific problems. Focus on MVP. Don't try to _fix_ and replace at the same time; if something was broken before, business-wise, log it in your bug tracker and deal with it later. Pull people away if needed to get rest. Get upper management away from the people doing the work; have them talk only to the group handling the process management.
Edit: I am also making a good faith assumption that this is planned and not an emergency response; either way, it doesn't change my general advice.
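The go/no-go process described above can be sketched in code. This is a minimal, hypothetical illustration (the step names, function signatures, and timeout handling are all made up for the example): run each migration step, validate it, and unwind everything in reverse order if validation fails or the planned downtime window is exhausted.

```python
import time

def run_migration(steps, window_seconds, now=time.monotonic):
    """Run (name, apply, validate, rollback) steps within a downtime window.

    Returns ("go-forward", applied) if every step applies and validates in
    time, otherwise rolls back completed steps in reverse order and returns
    ("rolled-back", applied).
    """
    deadline = now() + window_seconds
    done = []  # (name, rollback) for each step already applied
    for name, apply, validate, rollback in steps:
        if now() >= deadline:      # out of planned downtime: abort
            break
        apply()
        done.append((name, rollback))
        if not validate():         # validation failed: abort
            break
    else:
        return "go-forward", [n for n, _ in done]
    for name, rollback in reversed(done):  # unwind in reverse order
        rollback()
    return "rolled-back", [n for n, _ in done]
```

The point of structuring it this way is the one mey makes: rollback is a first-class, pre-planned path, not an improvisation, and the downtime window is an explicit input to the go/no-go decision rather than something discovered after the fact.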
Comment by 0xfaded 4 hours ago
Comment by sd9 3 hours ago
> I'm about to loose thousands of dollars by the end of Monday 20th because of the automatic shipping deadline on Tindie and it currently being down. I've tried contacting support multiple times but they are not helping. Please respond before my business fails!
Comment by mikestorrent 4 hours ago
Comment by expedition32 3 hours ago
Comment by Retr0id 9 hours ago
Comment by Aurornis 7 hours ago
I really like the idea of Tindie so I hope they can succeed. I don’t understand what sequence of events led to this being such a large problem that they can’t even keep their site online. The post says something vague about the engineering team hoping the migration work is close to finished, but I can’t remember the last time an engineering team knocked out an entire site for days without being able to restore it during a failed migration. Are they outsourcing dev work to the type of agency that bills by the hour and perpetually churns out low-cost work, making their money in volume by fixing their own code?
Comment by starkparker 7 hours ago
To what? The only alternative I know of is Lectronz.
Comment by kennywinker 6 hours ago
Comment by serf 5 hours ago
It's like saying people are fleeing ebay for Shopify. Yeah, I guess -- but that only really solves the merchant sales problem.
I buy from indie elec shops directly when I can, but the problem is that I commonly discover those shops through Tindie. Word of mouth/discord/etc isn't nearly as great a tool as a searchable, regularly refreshed index.
Comment by sixothree 3 hours ago
Comment by JohnMakin 7 hours ago
Comment by the_biot 9 hours ago
Comment by JohnMakin 5 hours ago
Example of a response I see:
< x-cache: Error from cloudfront
< via: 1.1 bdf85d6d4811ab08c57841855a848f8a.cloudfront.net (CloudFront)
< x-amz-cf-pop: LAX54-P11
< x-amz-cf-id: nTQ-y1Ut3F-04jUCDM09ordCtj0CMkVmmtZTe__BtzEr1sMJu7rKaw==
< age: 76773
Comment by JohnMakin 5 hours ago
>The goal of the current maintenance is to fix a lot of long-standing issues with the site. The underlying infrastructure was getting very fragile as technical debt accumulated over time. A team is working very hard right now to make sure that once the site is back up, it's on much better footing and will be solid and reliable for the long term. Despite the unfortunate amount of time this is taking, it will be a major benefit to the site in the long run.
They are saying it was "spring cleaning" or a migration that took out the site for days. "Infrastructure getting very fragile" reeks of bad or nonexistent ops practices, probably very little or unreliable IaC, if any (I've seen shops get by for 10+ years by just clicking things in the console, until unfortunately it gets to this point).
This though, rubs me the wrong way:
> We want to offer a much better quality of service going forward. We understand that the lack of communication has been frustrating, and I have been closely watching social media and reporting the community's feelings up the chain, so your voices are being heard. The plan was not to have a long outage like this, but due to factors beyond the dev team's control, things have taken much longer than anticipated. Please be patient with us - I will keep updating here and on our other social media.
"Factors beyond the dev teams control." Sorry, no. If you have an ops team, you don't get to toss blame over the wall like that, and if you don't, you have no one to blame but yourselves. I feel bad for whoever the unofficial official ops dude is right now. These kind of infrastructure "tech debt" woopsies come from years of people just not giving a crap to doing things properly, it's never seen as important until it suddenly is. Hope they learn a lesson and hire an infrastructure guy properly. There's long been a persistent delusion in the pure dev world that they should be able to be completely agnostic to the hardware lying underneath their beautiful code - ideally yes, in practice almost never, unless you come from a place that has the significant resources to make something nice like that, or are willing to pay out the azz for managed cloud services or licenses.
Comment by svnt 5 hours ago
Comment by JohnMakin 4 hours ago
Comment by ImPostingOnHN 4 hours ago
Comment by JohnMakin 4 hours ago
Most ops guys can do dev, the inverse is absolutely not true IME.
Comment by Jblx2 4 hours ago
Comment by chromacity 9 hours ago
Before the tariffs, I noticed that Chinese companies were trying to undercut them. I've gotten multiple mails asking me to start selling my designs with China-based outlets: they would make the PCBs, assemble them, and pay me some money for every item sold.
Comment by dbl000 8 hours ago
Comment by chromacity 8 hours ago
Comment by the_axiom 8 hours ago
Comment by Permik 7 hours ago
Comment by EtienneDeLyon 5 hours ago
It's AM radio that gets interfered with.
Comment by kube-system 4 hours ago
Comment by chromacity 4 hours ago
Comment by kayson 4 hours ago
Comment by fluoridation 3 hours ago
Comment by jdiff 5 hours ago
Comment by croes 6 hours ago
Comment by dbl000 8 hours ago
The fact that it wasn't communicated at all beforehand, and that there's no timeframe, makes me think this was probably an ops screw-up.
Comment by ottah 7 hours ago
Comment by iamnothere 9 hours ago
Comment by sixothree 2 hours ago
Comment by NDlurker 9 hours ago
Comment by rozab 7 hours ago
Comment by shrubble 5 hours ago
Comment by kibwen 5 hours ago
Comment by luma 5 hours ago
Tindie was a great place for a hacker to sell a few widgets back in the day, but legal requirements have changed since then, and Tindie has not changed a line of code in at least 10 years.
Comment by ottah 7 hours ago
Comment by eterm 7 hours ago
However, any downtime over an hour or two screams "migration gone wrong" to me.
Otherwise wouldn't you just roll back to get the site up, then come back at it and try again later?
Comment by mattmanser 6 hours ago
That means they've got zero people who know what they're doing.
Comment by gedy 6 hours ago
Comment by kordlessagain 9 hours ago
Comment by dlgeek 9 hours ago
Comment by dust42 5 hours ago
Comment by ZephyrBlu 7 hours ago
Comment by ThatMedicIsASpy 4 hours ago
Comment by nozzlegear 3 hours ago
Comment by jasonjmcghee 5 hours ago
Either that or catastrophic data issues?
Otherwise so much downtime at once is pretty crazy
Comment by leros 8 hours ago
Comment by systems_glitch 7 hours ago
I really like Tindie as a platform and have been using it since nearly the beginning...but I'd have lost the contract if I pulled this level of nonsense on a customer's production application.
Comment by colechristensen 9 hours ago
Comment by fortyseven 6 hours ago
Comment by draw_down 9 hours ago