Laws of Software Engineering
Posted by milanm081 12 hours ago
Comments
Comment by GuB-42 8 hours ago
There are few principles of software engineering that I hate more than this one, though SOLID is close.
It is important to understand that it comes from a 1974 paper; computing was very different back then, and so was the idea of optimization. Back then, optimizing meant writing assembly code and counting cycles. That is still done today in very specific applications, but today performance is mostly about architectural choices, and it has to be given consideration right from the start. In 1974, these architectural choices weren't choices; the hardware didn't let you do it differently.
Focusing on the "critical 3%" (which implies profiling) is still good advice, but it will mostly help you fix "performance bugs", like an accidentally quadratic algorithm, stuff that is done in a loop but doesn't need to be, etc... But once you have dealt with those, that's when you notice that you spend 90% of the time in abstractions, and it is too late to change that now, so you add caching, parallelism, etc... making your code more complicated and still slower than if you had thought about performance at the start.
Today, late optimization is just as bad as premature optimization, if not more so.
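A minimal sketch (names are illustrative, not from any particular codebase) of the kind of "performance bug" profiling does catch well — an accidentally quadratic lookup fixed by picking the right data structure:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

public class Dedup {
    // Accidentally quadratic: List.contains is O(n), so the loop is O(n^2).
    static List<String> dedupSlow(List<String> items) {
        List<String> seen = new ArrayList<>();
        for (String item : items) {
            if (!seen.contains(item)) seen.add(item);
        }
        return seen;
    }

    // Same result in O(n): a LinkedHashSet keeps insertion order with O(1) lookups.
    static List<String> dedupFast(List<String> items) {
        return new ArrayList<>(new LinkedHashSet<>(items));
    }
}
```

The fix is local and cheap — which is exactly why this class of problem, unlike an architecture soaked in abstraction overhead, is fine to defer until a profiler points at it.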
Comment by austin-cheney 8 hours ago
I really encourage people to read the Donald Knuth essay that features this sentiment. Pro tip: You can skip to the very end of the article to get to this sentiment without losing context.
Here ya go: https://dl.acm.org/doi/10.1145/356635.356640
Basically: don't spend unnecessary effort increasing performance in an unmeasured way before it's necessary, except for those 10% of situations where you know in advance that crucial performance is absolutely necessary. That is the sentiment. I have seen people take this to some bizarre alternate insanity of their own creation, as a law to never measure anything, typically because the given developer cannot measure things.
Comment by iamflimflam1 8 hours ago
Similar to the "code should be self documenting - ergo: We don't write any comments, ever"
Comment by f1shy 7 hours ago
Comment by afpx 6 hours ago
(Then, shortly afterward I also tried to find a new job, realized the entire industry had changed, and was fortunate enough to decide it wasn't worth the trouble.)
Comment by WalterBright 6 hours ago
That's likely thanks to C, which takes great pains not to specify the size of the basic types. For example, on 64-bit architectures, "long" is 32 bits on Windows and 64 bits everywhere else.
The net result of that is I never use C "long", instead using "int" and "long long".
This mess is why D has 32-bit ints and 64-bit longs, whether it's a 32-bit machine or a 64-bit machine. The result is that we haven't had porting problems with integer sizes.
Comment by switchbak 6 hours ago
I've met very few folks who understand the overheads involved, and how extreme the benefits can be from avoiding those.
Comment by Quarrelsome 5 hours ago
The sort of insane stuff I've seen on the dotnet repo where people are trying to tear apart the entire type system just because they think they've cracked some secret performance code.
Comment by awesome_dude 32 minutes ago
If you ask a typical grad the size of a bool, they will inevitably say one bit. But CPUs, RAM, etc. don't work like that; they typically expect WORD-sized chunks of memory, meaning that the boolean size of one bit becomes a WORD-sized chunk, assuming it hasn't been packed.
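The packed-vs-unpacked distinction shows up even above the C level; a hedged Java analog (exact sizes are JVM-dependent, the numbers in comments are typical, not guaranteed):

```java
import java.util.BitSet;

public class PackedFlags {
    public static void main(String[] args) {
        int n = 1_000_000;
        // A boolean[] typically stores one byte per element on common JVMs,
        // not one bit -- roughly 1 MB of flags here.
        boolean[] unpacked = new boolean[n];
        // A BitSet actually packs one bit per flag into an array of longs,
        // roughly 125 KB for the same n.
        BitSet packed = new BitSet(n);
        unpacked[42] = true;
        packed.set(42);
        System.out.println(packed.get(42) == unpacked[42]); // prints "true"
    }
}
```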
Comment by afpx 6 hours ago
Comment by WalterBright 6 hours ago
To be fair, though, I come up short on a lot of things comp sci graduates know.
It's why Andrei Alexandrescu and I made a good team. I was the engineer, and he the scientist. The yin and the yang, so to speak.
Comment by sas224dbm 1 hour ago
Comment by SAI_Peregrinus 5 hours ago
Comment by bluGill 5 hours ago
Comment by ekidd 6 hours ago
If the number of bits isn't actually included right in the type name, then be very sure you know what you're doing.
The senior engineer answer to "How many bits are there in an int?" is "No, stop, put that down before you put your eye out!" Which, to be fair, is the senior engineer answer to a lot of things.
Comment by estimator7292 5 hours ago
On the other, the right answer is 16 or 32. It's not the correct answer, strictly speaking, but it is the right one.
Comment by jandrewrogers 5 hours ago
Comment by fragmede 4 hours ago
Comment by i_am_a_peasant 2 hours ago
Comment by didgetmaster 4 hours ago
He stopped me and said he was just looking to see if I knew what an INT 3 was. He said few engineers he interviewed had any idea.
Comment by alexjplant 5 hours ago
It should be to the greatest extent possible. Strive to write literate code before writing a comment. Comments should be how and why, not what.
> - ergo: We don't write any comments, ever"
Indeed this does not logically follow. Writing fluent, idiomatic code with real names for symbols and obvious control flow beats writing brain teasers riddled with comments that are necessary because of the difficulty in parsing a 15-line statement with triply-nested closures and single-letter variable names. There's a wide middle ground where comments are leveraged, not made out of necessity.
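A contrived illustration of that middle ground (both functions are hypothetical, written just to contrast the styles):

```java
import java.util.List;

public class Readability {
    // The brain-teaser style: needs a comment just to be readable.
    // (sums the lengths of the non-empty strings)
    static int f(List<String> l) {
        return l.stream().filter(s -> !s.isEmpty()).mapToInt(String::length).sum();
    }

    // The literate style: real names and obvious flow, no comment required.
    static int totalLengthOfNonEmptyStrings(List<String> strings) {
        return strings.stream()
                .filter(s -> !s.isEmpty())
                .mapToInt(String::length)
                .sum();
    }
}
```

Neither version needs a "what" comment once it's named well; a "why" comment (why empty strings are excluded, say) would still earn its place in either.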
Comment by Sharlin 4 hours ago
Comment by alexjplant 4 hours ago
Comment by wombatpm 4 hours ago
Comment by p0nce 7 hours ago
Comment by msla 8 hours ago
My counterpoint: Code can be self-documenting, reality isn't. You can have a perfectly clear method that does something nobody will ever understand unless you have plenty of documentation about why that specific thing needs to be done, and why it can't be simpler. Like having special-casing for DST in Arizona, which no other state seems to need:
Comment by pc86 6 hours ago
Comment by msla 6 hours ago
Comment by switchbak 6 hours ago
Comment by rkaregaran 7 hours ago
Comment by sandeepkd 6 hours ago
Comment by Sammi 8 hours ago
I'm still salty about that time a colleague suggested adding a 500 kb general purpose js library to a webapp that was already taking 12 seconds on initial load, in order to fix a tiny corner case, when we could have written our own micro utility in 20 lines. I had to spend so much time advocating to management for my choice to spend time writing that utility myself, because of that kind of garbage opinion that is way too acceptable in our industry today. The insufferable bastard kept saying I had to do measurements in order to make sure I wasn't prematurely optimizing. Guy adding 500 kb of js when you need 1 kb of it is obviously a horrible idea, especially when you're already way over the performance budget. Asshat. I'm still salty he got so much airtime for that shitty opinion of his and that I had to spend so much energy defending myself.
Comment by jcgrillo 7 hours ago
Comment by Shorel 5 hours ago
Comment by fragmede 4 hours ago
Comment by Quarrelsome 5 hours ago
Comment by jcgrillo 5 hours ago
Comment by Quarrelsome 4 hours ago
OR, perhaps it's the case that different contexts call for different levels of effort. Running a spike can be an important way to promote new ideas across an org and show how things can be done differently. It can be a political tool with positive impact, because there's a lot more to a business than simply writing good code. However, if your org is horrible, then it can backfire in the way that was described. Maybe the business is too aggressive and tramples on dev, maybe dev doesn't have a spine, maybe nobody spoke up about what a fucking disaster it was going to be, maybe they did and nobody listened. Those are all organisational issues, akin to an exploitable code base but embedded into the org instead of the code.
These issues are not the direct fault of the spike; it's the fault of the org, just like the idiot that took your poorly formatted comment and put it on the front page of Vogue.
Comment by jcgrillo 4 hours ago
Comment by Quarrelsome 4 hours ago
I mean, I could take a toddler's tricycle and try to take it onto the motorway. Can we blame the toy company for that? It has wheels, it goes forward, it's basically a car, right? In the same way, a spike is basically something we can ship right now.
Comment by f1shy 7 hours ago
Comment by dimitrios1 6 hours ago
"You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is."
What's more, in my personal experience, I've seen a few speed hacks cause incorrect behavior on more than one occasion.
Comment by tshaddox 3 hours ago
Which is pretty close to just saying "don't do anything unless you have a good reason for doing it."
Comment by giancarlostoro 5 hours ago
Yeah like, NOT indexing any fields in a database, that'll become a problem very quickly. ;)
Comment by ElectricalUnion 1 hour ago
Comment by austin-cheney 2 hours ago
Comment by tombert 4 hours ago
For example, in Java I usually use ConcurrentHashMap, even in contexts that a regular HashMap might be ok. My reasoning for this is simple: I might want to use it in a multithreaded context eventually and the performance differences really aren't that much for most things; uncontested locks in Java are nearly free.
I've gotten pull requests rejected because regular HashMaps are "faster", and then the comments on the PR end up with people bickering about when to use it.
In that case, does it actually matter? Even if HashMap is technically "faster", it's not much faster, and maybe instead we should focus on the thing that's likely to actually make a noticeable difference like the forty extra separate blocking calls to PostgreSQL or web requests?
So that's the premature optimization that I think is evil. I think it's perfectly fine at the algorithm level to optimize early.
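A sketch of why the swap is so cheap (the counter example is mine, not from the PRs in question): both classes satisfy the same `Map` interface, so the choice is a one-line change invisible to callers, and atomic operations like `merge`/`compute` come along for free on the concurrent version:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class Counters {
    // Callers only ever see Map<String, Integer>; which implementation
    // backs it is a one-line decision, invisible to outside code.
    static Map<String, Integer> newCounter(boolean threadSafe) {
        return threadSafe ? new ConcurrentHashMap<>() : new HashMap<>();
    }

    static void increment(Map<String, Integer> counts, String key) {
        // merge is atomic on ConcurrentHashMap; on a plain HashMap it is not,
        // which is exactly the bug that surfaces later under threads.
        counts.merge(key, 1, Integer::sum);
    }
}
```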
Comment by xorcist 1 hour ago
Make a (very) good argument, and suggest a realistic path to change the whole codebase, but don't create inconsistency just because it is "better". It is not.
Comment by tombert 26 minutes ago
It makes no difference to the outside code.
Comment by xorcist 21 minutes ago
Comment by tombert 19 minutes ago
Comment by hunterpayne 1 hour ago
2) Locks are cheap
3) I seriously doubt that the difference between a Map and a ConcurrentHashMap is measurable in your app
Which means both that the comments on your PRs are irrelevant and that you are still going too far in your thread-safety. So you are both wrong.
What you are right about is to focus on network calls.
Comment by tombert 20 minutes ago
ConcurrentHashMap has the advantage of hiding the locking from me and, more importantly, the advantage of being correct. It can still use the same Map interface, so if it's eventually used downstream somewhere, stuff like `compute` will work and be thread safe without my having to work with mutexes.
The argument I am making is that it is literally no extra work to use the ConcurrentHashMap, and in my benchmarks with JMH, it doesn’t perform significantly worse in a single-threaded context. It seems silly for anyone to try and save a nanosecond to use a regular HashMap in most cases.
Comment by toast0 6 hours ago
Thinking about the overall design, how it's likely to be used, and what the performance and other requirements are before aggregating the frameworks of the day is mature optimization.
Then you build things in a reasonable way and see if you need to do more for performance. It's fun to do more, but most of the time, building things with a thought about performance gets you where you need to be.
The "I don't need to think about performance at all" camp has a real hard time making things better later. For most things, cycle counting up front isn't useful, but thinking about how data will be accessed and such can easily make a huge difference. Things like bulk load versus one-at-a-time load are enormous if you're loading lots of things, but if you'll never load lots of things, either works.
Thinking about concurrency, parallelism, and distributed systems stuff before you build is also pretty mature. It's hard to change some of that after you've started.
Comment by Shorel 5 hours ago
I want it in a t-shirt. On billboards. Everywhere :)
Comment by SJC_Hacker 34 minutes ago
Comment by NikolaosC 7 hours ago
Comment by tombert 5 hours ago
I also find it a bit annoying is that most people just make shit up about stuff that is "faster". Instead of measuring and/or looking at the compiled bytecode/assembly, people just repeat tribal knowledge about stuff that is "faster" with no justification. I find that this is common amongst senior-level people at BigCos especially.
When I was working in .NET land, someone kept telling me that "switch statements are faster" than their equivalent "if" statements, so I wrote a very straightforward test comparing both, and used dotpeek to show that they compile to the exact same thing. The person still insisted that switch is "faster", I guess because he had a professor tell him this one time (probably with more appropriate context) and took whatever the professor said as gospel.
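The same experiment is easy to rerun in any language rather than argue from folklore; a hypothetical Java analog of the two forms (whether one is "faster" is a question for `javap` or a JMH benchmark, not for repeating what a professor once said):

```java
public class Dispatch {
    // Two semantically identical ways to map a code to a label.
    static String viaSwitch(int code) {
        switch (code) {
            case 1: return "one";
            case 2: return "two";
            default: return "other";
        }
    }

    static String viaIf(int code) {
        if (code == 1) return "one";
        if (code == 2) return "two";
        return "other";
    }
}
```

The point isn't which form wins; it's that for a dispute like this, disassembling or benchmarking both takes minutes and ends the argument.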
Comment by bluGill 4 hours ago
Comment by tombert 4 hours ago
Generally I've found that the penalty, even without contention, is pretty minimal, and it almost always wins under contention.
Comment by bluGill 4 hours ago
Comment by tananaev 8 hours ago
Comment by Sammi 8 hours ago
It's particularly the kind of people who like to say "hur hur don't prematurely optimize" that don't bother writing decent software to begin with and use the term as an excuse to write poor performing code.
Instead of optimizing their code, these people end up making excuses so they can pessimize it instead.
Comment by Shorel 5 hours ago
Comment by bartread 50 minutes ago
I'm actually considering, for the first time since 2013/14 when I worked on a Visual Studio extension, creating a piece of desktop software - and a piece of cross-platform desktop software at that. Given that Microsoft's desktop story has descended into a chaotic mishmash of somewhat conflicting stories, and given it will be a cold day in hell before I choose Electron as the solution to any problem I might have, most likely I will roll with Qt + Rust, or at least Qt + something.
20-odd years ago I might have opted for Java + Swing because I'd done a lot of it and, in fairness to Swing, it's not a bad UI toolkit and widget set. These days I simply prefer the svelte footprint and lower resource requirements of a native binary - ideally statically linked too, but I'll live with the dynamic linking Qt's licensing necessitates.
Comment by Sammi 45 minutes ago
Comment by pydry 6 hours ago
Usually those people also have a good old whinge about the premature optimization quote being wrong or misinterpreted and general attitudes to software efficiency.
Not once have I ever seen somebody try to derail a process of "ascertain speed is an issue that should be tackled" -> "profile" -> fix the hot path.
Comment by hunterpayne 1 hour ago
That's because your boss will never in a 1000 years hire the type of dev who can do that. And even if you did, there will be team members who will fight those fixes tooth and nail. And yes, I have a very cynical view of some devs but they earned that through some of the pettiest behavior I have ever seen.
Comment by Jensson 2 hours ago
Many things need to be optimized before you can easily profile them, so at that stage it's already too late and your software will forever be slow.
Comment by cstoner 7 hours ago
Your users are not going to notice. Sure, it's faster but it's not focused on the problem.
Comment by davedx 7 hours ago
This doesn't make sense. Why is performance (via architectural choices) more important today than then?
You can build a snappy app today by using boring technology and following some sensible best practices. You have to work pretty hard to need PREMATURE OPTIMIZATION on a project -- note the premature there
Comment by jandrewrogers 6 hours ago
Optimization of bandwidth-bound code is almost purely architectural in nature. Most of our software best practices date from a time when everything was computation-bound such that architecture could be ignored with few bad effects.
Comment by f1shy 7 hours ago
Comment by hunterpayne 55 minutes ago
Comment by Nevermark 6 hours ago
If you are building something with similar practical constraints for the Nth time this is definitely true.
You are inheriting “architecture” from your own memory and/or tools/dependencies that are already well fit to the problem area. The architectural performance/model problem already got a lot of thought.
Lots of problems are like that.
But if you are solving a problem where existing tools do a poor job, you better be thinking about performance with any new architecture.
Comment by kqr 7 hours ago
Comment by paulddraper 6 hours ago
There were fewer available layers of abstraction.
Whether you wrote in ASM, C, or Pascal, there was a lot less variance than writing in Rust, JavaScript, Python.
Comment by ghosty141 8 hours ago
Comment by GuB-42 7 hours ago
SOLID isn't bad, but like the idea of premature optimization, it can easily lead you into the wrong direction. You know how people make fun of enterprise code all the time, that's what you get when you take SOLID too far.
In practice, it tends to lead to a proliferation of interfaces, which is not only bad for performance but also results in code that is hard to follow. When you see a call through an interface, you don't know what code will be run unless you know how the object is initialized.
Comment by dzjkb 8 hours ago
Comment by newsoftheday 7 hours ago
Comment by segmondy 7 hours ago
Comment by tracker1 7 hours ago
SOLID approaches aren't free... beyond that keeping code closer together by task/area is another approach. I'm not a fan of premature abstraction, and definitely prefer that code that relates to a feature live closer together as opposed to by the type of class or functional domain space.
For that matter, I think it's perfectly fine for a web endpoint handler to make and return a simple database query directly without 8 layers of interfaces/classes in between.
Beyond that, there are other approaches to software development that go beyond typical OOP practices. Something, something, everything looks like a nail.
The issues that I have with SOLID/CLEAN/ONION is that they tend to lead to inscrutable code bases that take an exponentially long amount of time for anyone to come close to learning and understanding... Let alone the decades of cruft and dead code paths that nobody bothered to clean up along the way.
The longest lived applications I've ever experienced tend to be either the simplest, easiest to replace or the most byzantine complex monstrosities... and I know which I'd rather work on and support. After three decades I tend to prioritize KISS/YAGNI over anything else... not that there aren't times where certain patterns are needed, so much as that there are more times where they aren't.
I've worked on one, singular, application in three decades where the abstractions that tend to proliferate in SOLID/CLEAN/ONION actually made sense: a commercial application deployed to various govt agencies that had to support MS-SQL, Oracle and DB2 backends. Every, other, time, the excess of database and interface abstractions I've seen would have been better handled in other, less performance-impacting ways. If you only have a single concrete implementation of an interface, you probably don't need that interface... you can inherit/override the class directly for testing.
And don't get me started on keeping unit tests in a completely separate project... .Net actually makes it painful to put your tests with your implementation code. It's one of my few actual critiques about the framework itself, not just how it's used/abused.
Comment by f1shy 6 hours ago
[1] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
Comment by ghosty141 6 hours ago
This should be the header of the website. I think the core of all these arguments is people thinking they ARE laws that must be followed no matter what. And in that case, yeah that won't work.
Comment by gavmor 6 hours ago
Even his "critique" of Demeter is, essentially, that it focuses on an inconsequential aspect of dysfunction—method chaining—which I consider to be just one smell that leads to the larger principle which—and we, apparently, both agree on this—is interface design.
Comment by sroussey 6 hours ago
Comment by ghosty141 6 hours ago
I think the most important principle above all is knowing when not to stick to them.
For example if I know a piece of code is just some "dead end" in the application that almost nothing depends on then there is little point optimizing it (in an architectural and performance sense). But if I'm writing a core part of an application that will have lots of ties to the rest, it totally does make sense keeping an eye on SOLID for example.
I think the real error is taking these at face value and not factoring in the rest of your problem domain. It's way too simple to think SOLID = good, else bad.
Comment by someguyiguess 8 hours ago
Comment by mrkeen 6 hours ago
The only part of SOLID that is perhaps OO-only is Liskov Substitution.
L is still a good idea, but without object-inheritance, there's less chance of shooting yourself in the foot.
Comment by marcosdumay 7 hours ago
If you follow SOLID, you'll write OOP only, with always present inheritance chains, factories for everything, and no clear relation between parameters and the procedures that use them.
Comment by Exoristos 7 hours ago
Comment by paulddraper 6 hours ago
L and I are both pretty reasonable.
But S and D can easily be taken to excess.
And O seems to suggest OO-style polymorphism instead of ADTs.
Comment by ghosty141 6 hours ago
That's how I view it. You should design your application such that extension involves little modifying of existing code as long as it's not necessary from a behavior or architectural standpoint.
Comment by SAI_Peregrinus 4 hours ago
Comment by jnpnj 7 hours ago
Comment by xnx 8 hours ago
Not if your optimization for performance is some Rube Goldberg assemblage of microservices and a laundry list of AWS services.
Comment by ozim 7 hours ago
A bunch of stuff is done for us. Using Postgres and having indexes correct is not premature optimization, just basic stuff to be covered.
Having a double loop is quadratic, though. Parallelism is super fun because it actually might make everything slower instead of faster.
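A small sketch of that parallelism footgun (my example, not the commenter's): both functions compute the same result, but for small inputs the parallel version pays fork-join and splitting overhead that can easily exceed the work itself — which is why this is something to measure, not assume.

```java
import java.util.stream.IntStream;

public class ParallelOverhead {
    // Identical results; only the execution strategy differs.
    static long sumSequential(int n) {
        return IntStream.rangeClosed(1, n).asLongStream().sum();
    }

    // For tiny n, spinning up the common fork-join pool and splitting the
    // range can cost more than just adding the numbers in one thread.
    static long sumParallel(int n) {
        return IntStream.rangeClosed(1, n).asLongStream().parallel().sum();
    }
}
```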
Comment by cogman10 8 hours ago
And as I point out, what Knuth was talking about in terms of optimization was things like loop unrolling and function inlining. Not picking the right datastructure or algorithm for the problem.
I mean, FFS, his entire book was about exploring and picking the right datastructures and algorithms for problems.
Comment by vanguardanon 5 hours ago
Comment by throwaway5752 7 hours ago
Decades in, this is the worst of all of them. Misused by laziness or malice, and nowhere near specific enough.
The graveyard of companies boxed in by past poor decisions is sprawling. And the people that made those early poor decisions bounce around field talking about their "successful track record" of globally poor and locally good architectural decisions that others have had to clean up.
It touches on a real problem, though, but it should be stricken from the record and replaced with a much better principle. "Design to the problem you have today and the problems you have in 6 months if you succeed. Don't design to the problems you'll have next year if it means you won't succeed in 6 months" doesn't roll off the tongue.
Comment by tracker1 7 hours ago
One thing that came out of the no-sql/new-sql trends in the past decade and a half is that joins are the enemy of performance at scale. It really helps to know and compromise on db normalization in ways such as leaning on JSON/XML for non-critical column data as opposed to 1:1/children/joins a lot of the time. For that matter, pure performance and vertical scale have shifted a lot of options back from the brink of micro service death by a million paper cuts processes.
Comment by theLiminator 2 hours ago
I don't blame Knuth, he's talking about focusing on micro-optimizations, but a lot of devs nowadays don't even care to get basic performance right.
Comment by dec0dedab0de 6 hours ago
Comment by kgwxd 4 hours ago
Comment by dorkitude 5 hours ago
Comment by tonymet 8 hours ago
Comment by enraged_camel 8 hours ago
You are right about the origin of and the circumstances surrounding the quote, but I disagree with the conclusion you've drawn.
I've seen engineers waste days, even weeks, reaching for microservices before product-market fit is even found, adding caching layers without measuring and validating bottlenecks, adding sharding pre-emptively, adding materialized views when regular tables suffice, paying for edge-rendering for a dashboard used almost entirely by users in a single state, standing up Kubernetes for an internal application used by just two departments, or building custom in-house rate limiters and job queues when Sidekiq or similar solutions would cover the next two years.
One company I consulted for designed and optimized for an order of magnitude more users than were in the total addressable market for their industry! Of that, they ultimately managed to hit only 3.5%.
All of this was driven by imagined scale rather than real measurements. And every one of those choices carried a long tail: cache invalidation bugs, distributed transactions, deployment orchestration, hydration mismatches, dependency array footguns, and a codebase that became permanently harder to change. Meanwhile the actual bottlenecks were things like N+1 queries or missing indexes that nobody looked at because attention went elsewhere.
Comment by cstoner 7 hours ago
I was quite literally asked to implement an in-memory cache to avoid a "full table scan" caused by a join to a small DB table recently. Our architect saw "full table scans" in our database stats and assumed that must mean a performance problem. I feel like he thought he was making a data-driven profiling decision, but seemed to misunderstand that a full-table scan is faster for a small table than a lookup. That whole table is in RAM in the DB already.
So now we have a complex Redis PubSub cache invalidation strategy to save maybe a ms or two.
I would believe that we have performance problems in this chunk of code, and it's possible an in-memory cache may "fix" the issue, but if it does, then the root of the problem was more likely an N+1 query (that an in-memory cache bandaids over). But by focusing on this cache, suddenly we have a much more complex chunk of code that needs to be maintained than if we had just tracked down the N+1 query and fixed _that_
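The shape of that N+1 fix can be sketched without a database at all; here the "queries" are simulated with an in-memory map (all names hypothetical), but the structure — collect ids, fetch once, join in memory — is the same one that beats both the per-row lookups and the Redis cache:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class NPlusOne {
    // Stand-in for a customers table: id -> name.
    static final Map<Integer, String> CUSTOMERS = Map.of(1, "Ada", 2, "Grace");

    // The N+1 pattern: one lookup per order. Imagine a network
    // round-trip to the database inside this loop.
    static List<String> namesPerOrder(List<Integer> orderCustomerIds) {
        List<String> names = new ArrayList<>();
        for (int id : orderCustomerIds) {
            names.add(CUSTOMERS.get(id));
        }
        return names;
    }

    // The batched fix: dedupe the ids, fetch once (imagine a single
    // WHERE id IN (...) query), then join in memory.
    static List<String> namesBatched(List<Integer> orderCustomerIds) {
        Set<Integer> ids = new HashSet<>(orderCustomerIds);
        Map<Integer, String> fetched = ids.stream()
                .collect(Collectors.toMap(id -> id, CUSTOMERS::get));
        return orderCustomerIds.stream().map(fetched::get).collect(Collectors.toList());
    }
}
```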
Comment by Esophagus4 6 hours ago
Yes. When I was a young engineer, I was asked to design something for a scale we didn’t even get close to achieving. Eventual consistency this, event driven conflict resolution that… The service never even went live because by the time we designed it, everyone realized it was a waste of time.
I learned it makes no sense to waste time designing for zillions of users that might never come. It’s more important to have an architecture that can evolve as needs change rather than one that can see years into the future (that may never come).
Comment by jollyllama 8 hours ago
Comment by dartharva 5 hours ago
Comment by m3kw9 5 hours ago
Comment by tehjoker 8 hours ago
In these domains, algorithm selection, and fine tuning hot spots pays off significantly. You must hit minimum speeds to make your application viable.
Comment by EGreg 8 hours ago
Comment by CyberDildonics 6 hours ago
Anyone who has done optimization even a little knows that it isn't very difficult, but you do need to plan and architect for it so you don't have to restructure your whole program to get it to run well.
Mostly it's just rationalization, people don't know the skill so they pretend it's not worth doing and their users suffer for it.
If software and websites were even reasonably optimized, people could just use a computer as powerful as a Raspberry Pi 5 (except for high-res video) for most of what they do day to day.
Comment by snarfy 7 hours ago
Comment by Aaargh20318 10 hours ago
“A variable should mean one thing, and one thing only. It should not mean one thing in one circumstance, and carry a different value from a different domain some other time. It should not mean two things at once. It must not be both a floor polish and a dessert topping. It should mean One Thing, and should mean it all of the time.”
Comment by inetknght 10 hours ago
I worked as a janitor for four years near a restaurant, so I know a little bit about floor polishing and dessert toppings. This law might be a little less universal than you think. There are plenty of people who would happily try out floor polish as a dessert topping if they're told it'll get them high.
Comment by otterley 9 hours ago
It probably won’t be up very long but it’s a classic.
Comment by dhosek 7 hours ago
I’m still waiting for the moment in the ice cream shop when I can ask them, “sugar or plain?” https://mediaburn.org/videos/sugar-or-plain/
Comment by gpderetta 7 hours ago
Comment by inetknght 8 hours ago
Comment by rapnie 9 hours ago
Comment by aworks 9 hours ago
Comment by inetknght 8 hours ago
It definitely revealed a lot of falsehoods and stereotypes.
Comment by rapnie 9 hours ago
Comment by js8 8 hours ago
Comment by inetknght 8 hours ago
Comment by CyberDildonics 6 hours ago
I think that would be called a drug, not a dessert topping.
Comment by shermantanktop 8 hours ago
Comment by pc86 6 hours ago
Comment by sdeiley 55 minutes ago
Comment by estimator7292 5 hours ago
Used to be, anyway. Modern alternatives are much better. It's still used as glue in wind instruments though.
Comment by ipnon 10 hours ago
Comment by huflungdung 10 hours ago
Comment by galaxyLogic 6 minutes ago
I wonder if it should be called "Law of Leaky Metaphors" instead. Metaphor is not the same thing as Abstraction. I can understand a "leaky metaphor" as something that does not quite make it, at least not in all aspects. But what would be a good EXAMPLE of a Leaky Abstraction?
Comment by conartist6 11 hours ago
Comment by jimmypk 10 hours ago
The resolution I've landed on: be strict in what you accept at boundaries you control (internal APIs, config parsing) and liberal only at external boundaries where you can't enforce client upgrades. But that heuristic requires knowing which category you're in, which is often the hard part.
Comment by physicles 6 hours ago
If I accidentally accept bad input and later want to fix that, I could break long-time API users and cause a lot of human suffering. If my input parsing is too strict, someone who wants more liberal parsing will complain, and I can choose to add it before that interaction becomes load-bearing (or update my docs and convince them they are wrong).
The stark asymmetry says it all.
Of course, old clients that can’t be upgraded have veto power over any changes that could break them. But that’s just backwards compatibility, not Postel’s Law.
Source: I’m on a team that maintains a public API used by thousands of people for nearly 10 years. Small potatoes in internet land but big enough that if you cause your users pain, you feel it.
Comment by zffr 5 hours ago
Over time the paths may change, and this can break existing links. IMO websites should continue to accept old paths and redirect to the new equivalents. Eventually the redirects can be removed when their usage drops low enough.
Comment by zaphar 5 hours ago
Comment by ragnese 1 hour ago
So, I think not crashing because of invalid input is probably too obvious to be a "law" bearing someone's name. IMO, it must be asserting that we should try our best to do what the user/client means so that they aren't frustrated by having to be perfect.
Comment by ryandrake 3 hours ago
Comment by zaphar 2 hours ago
Comment by nothrabannosir 8 hours ago
Comment by zahlman 10 hours ago
Comment by dmoy 5 hours ago
Comment by throwaway173738 10 hours ago
Comment by ragnese 1 hour ago
Hyrum's Law is pointing out that sometimes the new field is a breaking change in the liberal scenario as well, because if you used to just ignore the field before and now you don't, your client that was including it before will see a change in behavior now. At least by being strict, (not accepting empty arrays, extra fields, empty strings, incorrect types that can be coerced, etc), you know that expanding the domain of valid inputs won't conflict with some unexpected-but-previously-papered-over stuff that current clients are sending.
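A sketch of what "strict at boundaries you control" can look like, with a made-up payload schema: unknown fields, wrong types, and empty values are rejected up front, so widening the schema later cannot collide with junk that clients were already sending.

```python
def parse_request(payload: dict) -> dict:
    """Strictly validate a hypothetical API payload.

    Unknown fields, missing fields, wrong types, and empty strings
    are rejected rather than silently ignored or coerced.
    """
    allowed = {"name": str, "count": int}
    extra = set(payload) - set(allowed)
    if extra:
        raise ValueError(f"unknown fields: {sorted(extra)}")
    for field, typ in allowed.items():
        if field not in payload:
            raise ValueError(f"missing field: {field}")
        if not isinstance(payload[field], typ):
            raise ValueError(f"{field} must be {typ.__name__}")
    if payload["name"] == "":
        raise ValueError("name must be non-empty")
    return payload
```

Because an extra field fails today, adding a real `allowed` entry for it tomorrow is a clean, observable change rather than a silent behavior shift.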
Comment by astrobe_ 6 hours ago
Bottom line: it's all a matter of balance of powers. If you're the smaller guy in the equation, you'll be "Postel'ed" anyway.
Yet Postel's law is still in the "the road to hell is paved with good intentions" category, for the reason you explain very well (AKA XKCD #1172 "Workflow"). Wikipedia even lists a couple of major critics about it [1].
Comment by jimbokun 4 hours ago
Comment by someguyiguess 8 hours ago
Comment by AussieWog93 10 hours ago
I've seen CompSci guys especially (I'm EEE background, we have our own problems but this ain't one of them) launch conceptual complexity into the stratosphere just so that they could avoid writing two separate functions that do similar things.
Comment by busfahrer 10 hours ago
Comment by michaelcampbell 9 hours ago
Comment by mcv 10 hours ago
Comment by whattheheckheck 10 hours ago
Take the 5 Rings approach.
The purpose of the blade is to cut down your opponent.
The purpose of software is to provide value to the customer.
It's the only thing that matters.
You can also philosophize why people with blades needed to cut down their opponents along with why we have to provide value to the customer but thats beyond the scope of this comment
Comment by marcosdumay 7 hours ago
If you write a lot of code, the odds of something repeating in another place just by coincidence are quite large. But the odds of the specific code that repeated once repeating again are almost nil.
That's a basic rule from probability that appears in all kinds of contexts.
Anyway, both DRY and WET assume the developers are some kind of ignorant automaton that can't ever know the goal of their code. You should know whether things are repeating by coincidence or not.
Comment by jimbokun 4 hours ago
Comment by ta20240528 10 hours ago
Partially correct. The purpose of your software to its owners is also to provide future value to customers competitively.
What we have learnt is that software needs to be engineered: designed and structured.
Comment by nradov 8 hours ago
Comment by shermantanktop 8 hours ago
Making software is a back-of-house function, in restaurant terms. Nobody out there sees it happen, nobody knows what good looks like, but when a kitchen goes badly wrong, the restaurant eventually closes.
Comment by galbar 8 hours ago
This is a very costly way of developing software.
Comment by nradov 8 hours ago
Comment by datadrivenangel 6 hours ago
I've been at organizations that don't think engineers should write tests because it takes too much time and slows them down...
Comment by lamasery 8 hours ago
The "who gives a shit, we'll just rewrite it at 100x the cost" approach to quality is very particular to the software startup business model, and doesn't work elsewhere.
Comment by jimbokun 4 hours ago
Comment by ericmcer 6 hours ago
Comment by aworks 9 hours ago
Comment by zahlman 10 hours ago
Comment by mosburger 9 hours ago
The key is to avoid the temptation to DRY when things are only slightly different and find a balance between reuse and "one function/class should only do one thing."
Comment by physicles 6 hours ago
One of my favorite things as a software engineer is when you see the third example of a thing, it shows you the problem from a different angle, and you can finally see the perfect abstraction that was hiding there the whole time.
Comment by dasil003 9 hours ago
My view is over-engineering comes from the innate desire of engineers to understand and master complexity. But all software is a liability, every decision a tradeoff that prunes future possibilities. So really you want to make things as simple as possible to solve the problem at hand as that will give you more optionality on how to evolve later.
Comment by onionisafruit 8 hours ago
Comment by caminante 9 hours ago
The spectrum is [YAGNI ---- DRY]
A little less abstract: designing a UX comes to mind. It's one thing to make something workable for you, but to make it for others is way harder.
Comment by markburns 8 hours ago
Yes the initial HTML looked similar in these few places, and the resultant usage of the abstraction did not look similar.
But it took a very long time reading each place a table existed and quite a bit longer working out how to get it to generate the small amount of HTML you wanted to generate for a new case.
Definitely would have opted for repetition in this particular scenario.
Comment by iwontberude 10 hours ago
Comment by pydry 10 hours ago
The goal ought to be to aim for a local minimum of all of these qualities.
Some people just want to toss DRY away entirely, though, or be uselessly vague about when to apply it ("use it when it makes sense"), and that's not really much better than being a DRY fundamentalist.
Comment by layer8 10 hours ago
Comment by xnorswap 10 hours ago
A common "failure" of DRY is coupling together two things that only happened to bear similarity while they were both new, and then being unable to pick them apart properly later.
Comment by CodesInChaos 9 hours ago
Which is often caused by the "midlayer mistake" https://lwn.net/Articles/336262/
Comment by mosburger 9 hours ago
Yeah there are ways to avoid this and you need to strike balances, but sometimes you have to be careful and resist the temptation to DRY everything up 'cuz you might just make it brittler (pun intended).
Comment by Silamoth 9 hours ago
Comment by gavmor 7 hours ago
Comment by mcv 10 hours ago
Comment by mjr00 9 hours ago
The tricky part is that sometimes "a new thing" is really "four new things" disguised as one. A database table is a great example because it's a failure mode I've seen many times. A developer has to do it once and they have to add what they perceive as the same thing four times: the database table itself, the internal DB->code translation e.g. ORM mapping, the API definition, and maybe a CRUD UI widget. The developer thinks, "oh, this isn't DRY" and looks to tools like Alembic and PostGREST or Postgraphile to handle this end-to-end; now you only need to write to one place when adding a database table, great!
It works great at first, then more complex requirements come down: the database gets some virtual generated columns which shouldn't be exposed in code, the API shouldn't return certain fields, the UI needs to work off denormalized views. Suddenly what appeared to be the same thing four times is now four different things, except there's a framework in place which treats these four things as one, and the challenge is now decoupling them.
Thankfully most good modern frameworks have escape valves for when your requirements get more complicated, but a lot of older ones[0] really locked you in and it became a nightmare to deal with.
[0] really old versions of Entity Framework being the best/worst example.
Comment by mcv 9 hours ago
But the code I'm talking about is really adding the same thing in 4 different places: the constant itself, adding it to a type, adding it to a list, and there was something else. It made it very easy to forget one step.
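One way to collapse those places into a single point of truth (an illustrative Python sketch, not the poster's actual code): define the members once and derive the list and the membership check from that one definition, so adding a member is a single edit that cannot be half-forgotten.

```python
from enum import Enum

class Color(Enum):
    # The one and only place a member is declared.
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

# The "list" and the "type check" are derived, not maintained by hand.
ALL_COLORS = [c.value for c in Color]

def is_color(s: str) -> bool:
    return s in ALL_COLORS
```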
Comment by pydry 10 hours ago
There should often be two points of truth because having one would increase the coupling cost more than the benefits that would be derived from deduplication.
Comment by davedx 7 hours ago
So much SWE is overengineering. Just like this website to be honest. You don't get away with all that bullshit in other eng professions where your BoM and labour costs are material.
Comment by ryandrake 3 hours ago
Comment by blandflakes 10 hours ago
Which maybe is also fine, I dunno :)
Comment by rustyhancock 9 hours ago
It can be quite hard to explain when a student asks why you did something a particular way. The truthful answer is that it felt like the right way to go about it.
With some thought you can explain it partly - really justify the decision subconsciously made.
If they're asking about a conscious decision that's rarely much more helpful that you having to say that's what the regulations, or guidelines say.
Where they really learn is seeing those edge cases and gray areas
Comment by rapnie 9 hours ago
Comment by httpz 2 hours ago
Comment by mday27 2 hours ago
Comment by jolt42 7 hours ago
Comment by alok-g 4 hours ago
Saying this is like saying 'pick the optimum point' without saying anything about how to find the optimum point. This cannot be a law, it is the definition of optimum.
Note that the optimum point need not be somewhere in the middle or 'inside', like a maximum. The optimum point could very well be on an extreme of the domain (the input variable space).
Comment by zdc1 6 hours ago
Comment by ghm2180 11 hours ago
Comment by Silamoth 9 hours ago
Comment by diehunde 9 hours ago
Comment by ericmcer 6 hours ago
Reading through the list mostly made me feel sad. You can't help but interpret these through the modern lens of AI assisted coding. Then you wonder if learning and following (some) of these for the last 20 years is going to make you a janitor for a bunch of AI slop, or force you into a coding style where these rules are meaningless, or make you entirely irrelevant.
Comment by ChrisMarshallNY 8 hours ago
Sort of like a real code of law.
Comment by deaux 7 hours ago
- Every website will be vibecoded using Claude Opus
This will result in the following:
- The background color will be a shade of cream, to properly represent Anthropic
- There will be excessive use of different fonts and weights on the same page, as if done by a freshman design student who just learned about typography
- There will be an excess of cards in different styles, a noteworthy number of which have a colored, round border either on hover or by default on exactly one side of the card
Comment by shimman 4 hours ago
Comment by dabedee 6 hours ago
Comment by hunterpayne 41 minutes ago
"The meta-law of software engineering: All laws of software engineering will be immediately misinterpreted and mindlessly applied in a way that would horrify their originators. Now that we can observe the behaviour of LLMs that are missing key context, we can understand why."
Or: you can't boil down decades of wisdom and experience into a pithy, one-sentence quote.
Comment by dataviz1000 11 hours ago
"In analyzing complexity, fast iteration almost always produces better results than in-depth analysis."
Boyd invented the OODA loop.
Comment by computerdork 3 hours ago
And what a great and very subtle example with the fighter jet control sticks. This reminds me of a build-time issue I once had. Way back in college, I did really poorly on a final programming project because I didn't realize you were supposed to swap out a component they had you write with a mock component that was provided for you. Hard to explain, but they wanted you to write this component to show you could, but once you did, you weren't supposed to use it, because it was extremely slow to build. So they also gave you a mock version to use when working on the code of your main system.
Using my full component killed my build time, as it took 10 minutes to build instead of a few seconds, and it was the one school programming project I couldn't finish before the deadline, which was super stressful. It was a very painful lesson, but ever since I have always found ways to shorten my build times.
Comment by Silamoth 9 hours ago
Comment by devsda 7 hours ago
Comment by lqstuart 7 hours ago
Comment by Bratmon 2 hours ago
(Wikipedia nerds often say "No, anyone can create a page as long as they follow the 137 guidelines!" This is a prank- Wikipedia admins will delete your article no matter how many guidelines it follows)
Comment by tjohnell 5 hours ago
If it can be slopped, it will be slopped.
Comment by niccl 1 hour ago
Comment by t43562 3 hours ago
"Every application has an inherent amount of irreducible complexity that can only be shifted, not eliminated."
But then the explanation seems to me to devolve into a trite suggestion not to burden your users. This doesn't interest me, because users need the level of complexity they need and no more, whatever you're doing, and making it less turns your application into an inflexible toy. So this is all, to a degree, obvious. I think it's more useful to remember when you're refactoring that if you try to make one bit of a system simpler, you often just make another part more complex. Why write something twice to end up with it being just as bad the other way round?
Comment by hatsix 7 hours ago
Comment by ericmcer 6 hours ago
I always liked the fence story better though.
Comment by computerdork 7 hours ago
Comment by meken 8 hours ago
> "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it"
Comment by pkasting 7 hours ago
Asking "who wrote this stupid code?" will retroactively travel back in time and cause it to have been you.
Comment by omoikane 6 hours ago
- Jeremy Clarkson (Top Gear, series 14 episode 5)
Comment by RivieraKid 10 hours ago
Structure code so that in an ideal case, removing a functionality should be as simple as deleting a directory or file.
Comment by layer8 10 hours ago
Comment by danparsonson 10 hours ago
Comment by RivieraKid 8 hours ago
Imagine the code as a graph with nodes and edges. The nodes should be grouped in a way that when you display the graph with grouped nodes, you see few edges between groups. Removing a group means that you need to cut maybe 3 edges, not 30. I.e. you don't want something where every component has a line to every other component.
Also when working on a feature - modifying / adding / removing, ideally you want to only look at an isolated group, with minimal links to the rest of the code.
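The "few edges between groups" idea can be made concrete with a toy measure (module names invented): count the edges that cross group boundaries, since those are exactly the ones you would have to cut to delete a group.

```python
def cross_group_edges(edges, group_of):
    """Count dependency edges whose endpoints live in different groups.

    A rough proxy for how hard a group is to remove: fewer crossing
    edges means deleting the group cuts fewer wires.
    """
    return sum(1 for a, b in edges if group_of[a] != group_of[b])

# Illustrative module graph: two "auth" modules and one "billing" module.
group_of = {"auth_core": "auth", "auth_tokens": "auth", "billing": "billing"}
edges = [("auth_core", "auth_tokens"), ("auth_tokens", "billing")]
```

Here only one edge crosses the boundary, so removing the whole "auth" group means cutting one wire, not rewiring everything.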
Comment by ActivePattern 5 hours ago
Comment by MarkLowenstein 5 hours ago
Comment by voiceofunreason 6 hours ago
Comment by kijin 10 hours ago
For example, each comment on HN has a line on top that contains buttons like "parent", "prev", "next", "flag", "favorite", etc. depending on context. Suppose I might one day want to remove the "flag" functionality. Should each button be its own file? What about the "comment header" template file that references each of those button files?
Comment by jpitz 10 hours ago
Comment by sverhagen 10 hours ago
Comment by skydhash 8 hours ago
Comment by dhosek 7 hours ago
This in itself might not be enough to justify this, but the fewer files will lead to more challenges in a collaborative environment (I’d also note that more small files will speed up incremental compilations since unchanged code is less likely to get recompiled which is one reason why when I do JVM dev, I never really think about compilation time—my IDE can recompile everything quickly in the background without my noticing).
Comment by skydhash 5 hours ago
You've got a point about incremental compilation. But fewer files (done well) are not really a challenge, as everything is self-contained. It makes it easier to discern orthogonal features, as the dependency graph is clearer. With multiple files you often find that similar things are assumed to be identical and used as such. Then it's a big refactor when trying to split them, especially if they are foundational.
Comment by ryanshrott 2 hours ago
People bring it up to argue for never thinking about performance, which flips the intent on its head. The real takeaway is that you need to spot that critical 3% early enough to build around it, and that means doing some optimization thinking up front, not none at all.
Comment by fenomas 10 hours ago
> Fen's law: copy-paste is free; abstractions are expensive.
edit: I should add, this is aimed at situations like when you need a new function that's very similar to one you already have, and juniors often assume it's bad to copy-paste so they add a parameter to the existing function so it abstracts both cases. And my point is: wait, consider the cost of the abstraction, are the two use cases likely to diverge later, do they have the same business owner, etc.
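A toy illustration of that trade-off (function names invented): the flag-parameter "abstraction" welds two use cases together, while the copy-paste version leaves them free to diverge when their requirements do.

```python
# The tempting "abstraction": one function with a mode flag.
# Every future change must now consider both callers.
def format_name_flag(first: str, last: str, formal: bool) -> str:
    if formal:
        return f"{last}, {first}"
    return f"{first} {last}"

# The copy-paste alternative: two small functions that can
# diverge independently later (different owners, different rules).
def format_name_formal(first: str, last: str) -> str:
    return f"{last}, {first}"

def format_name_casual(first: str, last: str) -> str:
    return f"{first} {last}"
```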
Comment by ndr 10 hours ago
> 11. Abstractions don’t remove complexity. They move it to the day you’re on call.
Comment by Symmetry 6 hours ago
Comment by Xiaoher-C 10 hours ago
Comment by Kinrany 8 hours ago
Comment by newsoftheday 7 hours ago
Comment by sov 5 hours ago
Comment by detectivestory 8 hours ago
Comment by causal 8 hours ago
Comment by AtNightWeCode 4 hours ago
I think it is better to have real requirements like: The code needs to be testable in a simple way.
Comment by heap_perms 8 hours ago
Comment by ozgrakkurt 11 hours ago
Comment by ghm2180 11 hours ago
Comment by pratikdeoghare 54 minutes ago
Really great book even if don’t care about lisp or ai.
Comment by devgoncalo 10 hours ago
Comment by WillAdams 10 hours ago
Comment by ozgrakkurt 11 hours ago
Comment by azath92 11 hours ago
Comment by newsoftheday 7 hours ago
Comment by davery22 8 hours ago
- Shirky Principle: Institutions will try to preserve the problem to which they are the solution
- Chesterton's Fence: Changes should not be made until the reasoning behind the current state of affairs is understood
- Rule of Three: Refactoring given only two instances of similar code risks selecting a poor abstraction that becomes harder to maintain than the initial duplication
Comment by asmodeuslucifer 5 hours ago
(~150) is the size of a community in which everyone knows each other’s identities and roles.
In anthropology class we tried this: ask someone to write down the name of everyone they can think of, real or fictional, living or dead, and most people will not make it to 250.
Some individuals like professional gossip columnists or some politicians can remember as many as 1,000 people.
Comment by OkayPhysicist 1 hour ago
Comment by austin-cheney 9 hours ago
When it comes to frameworks (any framework) any jargon not explicitly pointing to numbers always eventually reduces down to some highly personalized interpretation of easy.
It is more impactful than it sounds because it implicitly points to the distinction of ultimate goal: the selfish developer or the product they are developing. It is also important to point out that before software frameworks were a thing, the term framework just identified a defined set of overlapping abstract business principles to achieve a desired state. Software frameworks, on the other hand, provide a library to determine a design convention rather than the desired operating state.
Comment by nashashmi 8 hours ago
I had a hard time learning the whole mvc concept
Comment by 4dregress 4 hours ago
I actually had a colleague run over by a bus on the way to work in London; they were very lucky and made a full recovery.
Head poking out under the main exit of the bus.
Comment by nopointttt 1 hour ago
Comment by dassh 10 hours ago
Comment by someguyiguess 8 hours ago
Comment by traderj0e 32 minutes ago
YAGNI and "you will ship the org chart" are the two most commonly useful things to remember, but they aren't laws.
Comment by regular_trash 6 hours ago
Sure, don't add hooks for things you don't immediately need. But if you are reasonably sure a feature is going to be required at some point, it doesn't hurt to organize and structure your code in a way that makes those hooks easy to add later on.
Worst case scenario, you are wrong and have to refactor significantly to accommodate some other feature you didn't envision. But odds are you have to do that anyway if you abide by YAGNI as dogma.
The number of times I've heard YAGNI as reasoning not to modularize code is insane. There needs to be a law that well-intentioned developers will constantly misuse and misunderstand the ideas behind these heuristics in surprising ways.
Comment by traderj0e 30 minutes ago
Comment by AtNightWeCode 4 hours ago
Comment by traderj0e 21 minutes ago
Comment by mojuba 11 hours ago
Or develop a skill to make it correct, fast and pretty in one or two approaches.
Comment by AussieWog93 11 hours ago
- Write a correct, pretty implementation
- Beat Claude Code with a stick for 20 minutes until it generated a fragile, unmaintainable mess that still happened to produce the same result but in 300ms rather than 2500ms. (In this step, explicitly prompting it to test rather than just philosophising gets you really far)
- Pull across the concepts and timesaves from Claude's mess into the pretty code.
Seriously, these new models are actually really good at reasoning about performance and knowing alternative solutions or libraries that you might have only just discovered yourself.
Comment by mojuba 10 hours ago
But yes, the scope and breadth of their knowledge goes far beyond what a human brain can handle. How many relevant facts can you hold in your mind when solving a problem? 5? 12? An LLM can take thousands of relevant facts into account at the same time, and that's their superhuman ability.
Comment by theandrewbailey 10 hours ago
Comment by tmoertel 11 hours ago
complexity(system) =
sum(complexity(component) * time_spent_working_in(component)
for component in system).
The rule suggests that encapsulating complexity (e.g., in stable libraries that you never have to revisit) is equivalent to eliminating that complexity.
Comment by stingraycharles 11 hours ago
Comment by tmoertel 11 hours ago
> complexity is not a function of time spent working on something.
But the complexity you observe is a function of your exposure to that complexity.
The notion of complexity exists to quantify the degree of struggle required to achieve some end. Ousterhout’s observation is that if you can move complexity into components far away from where you must do your work to achieve your ends, you no longer need to struggle with that complexity, and thus it effectively is not there anymore.
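The formula above can be sketched directly, with illustrative numbers: the same gnarly component contributes almost nothing to overall complexity once it is sealed off behind a stable interface nobody has to open.

```python
def system_complexity(components) -> float:
    """Ousterhout-style weighting: each component's complexity counts
    in proportion to the fraction of time developers spend in it."""
    return sum(c["complexity"] * c["time_fraction"] for c in components)

# Illustrative numbers only. Same raw complexity in both systems;
# only where developers spend their time differs.
encapsulated = [
    {"complexity": 9.0, "time_fraction": 0.05},  # hairy but sealed-off library
    {"complexity": 2.0, "time_fraction": 0.95},  # simple code touched daily
]
exposed = [
    {"complexity": 9.0, "time_fraction": 0.95},  # hairy code touched daily
    {"complexity": 2.0, "time_fraction": 0.05},
]
```

With these weights the encapsulated system scores 2.35 against 8.65 for the exposed one, which is the "effectively not there anymore" claim in numbers.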
Comment by wduquette 10 hours ago
Comment by CuriouslyC 9 hours ago
Comment by Brian_K_White 10 hours ago
Comment by skydhash 8 hours ago
That’s pretty much what good design is about. Your solve a foundational problems and now no one else needs to think about it (including you when working on some other parts).
Comment by someguyiguess 8 hours ago
Comment by tmoertel 4 hours ago
No, it’s a win, even then.
Say you are writing an operating system, and one of the fundamental data structures you use all over the place is a concurrency-safe linked list.
Option 1 is to manipulate the relevant instances of the linked list directly—whenever you need to insert, append, iterate over, or delete from any list from any subsystem in your operating system. So you’ll have low-level list-related lock and pointer operations spread throughout the entire code base. Each one of these operations requires you to struggle with the list at the abstraction level and at the implementation level.
Option 2 is to factor out the linked-list operations and isolate them in a small library. Yes, you must still struggle with the list at the abstraction and implementation levels in this one small library, but everywhere else the complexity has been reduced to having to struggle with the abstraction level only, and the abstraction is a small set of straightforward operations that is easy to wrap your head around.
The sole difference between the options, as you wrote, is that “the complexity just migrates to another location.” But which would you rather maintain?
That was Ousterhout's point.
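Option 2 in miniature, as an illustrative Python sketch rather than real kernel code: the locking lives in one small class, and every caller deals only with the abstraction.

```python
import threading

class SafeList:
    """All lock and storage details are confined here; callers only
    ever see append/remove/snapshot."""

    def __init__(self):
        self._lock = threading.Lock()
        self._items = []

    def append(self, item):
        with self._lock:
            self._items.append(item)

    def remove(self, item):
        with self._lock:
            self._items.remove(item)

    def snapshot(self):
        # Return a copy so callers never iterate shared state unlocked.
        with self._lock:
            return list(self._items)
```

Every call site that would otherwise juggle locks and pointers now reads as a one-line operation, which is the whole point of the small library.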
Comment by serious_angel 10 hours ago
Comment by biscuits1 7 hours ago
Then I committed the code and let the second AI review it. It too had no problem with goto's.
Claude's Law: The code that is written by the agent is the most correct way to write it.
Comment by voiceofunreason 6 hours ago
Comment by r0ze-at-hn 11 hours ago
9. Most software will get at most one major rewrite in its lifetime.
Comment by xorcist 2 hours ago
Comment by TheGRS 6 hours ago
Comment by WillAdams 11 hours ago
A couple are well-described/covered in books, e.g., Tesler's Law (Conservation of Complexity) is at the core of _A Philosophy of Software Design_ by John Ousterhout
https://www.goodreads.com/en/book/show/39996759-a-philosophy...
(and of course Brook's Law is from _The Mythical Man Month_)
Curious if folks have recommendations for books which are not as well-known which cover these, other than the _Laws of Software Engineering_ book which the site is an advertisement for.....
Comment by netdevphoenix 10 hours ago
I wish AWS/Azure had this functionality.
Comment by milanm081 9 hours ago
Comment by ChrisMarshallNY 8 hours ago
Where's Chesterton's Fence?
https://en.wiktionary.org/wiki/Chesterton%27s_fence
[EDIT: Ninja'd a couple of times. +1 for Shirky's principle]
Comment by macintux 9 hours ago
* https://martynassubonis.substack.com/p/5-empirical-laws-of-s...
* https://newsletter.manager.dev/p/the-unwritten-laws-of-softw..., which linked to:
* https://newsletter.manager.dev/p/the-13-software-engineering...
Comment by Symmetry 10 hours ago
Comment by someguyiguess 8 hours ago
Comment by Sergey777 10 hours ago
Especially things like “every system grows more complex over time” — you can see it in almost any project after a few iterations.
I think the real challenge isn’t knowing these laws, but designing systems that remain usable despite them.
Comment by darccio 4 hours ago
Comment by sigma5 8 hours ago
Comment by renticulous 8 hours ago
Applies to open source. But it also means that code reviews are a good thing: seniors can guide juniors and coax them into writing better code.
Comment by tfrancisl 11 hours ago
Comment by toolslive 2 hours ago
Comment by lifeisstillgood 5 hours ago
As JFK never said:
“””We do these things, not because they are easy,
But because we thought they would be easy”””
Comment by noduerme 9 hours ago
My bet is on the long arc of the universe trending toward complexity... but in spite of all this, I don't think all this complexity arises from a simple set of rules, and I don't think Gall's law holds true. The further we look at the rule-set for the universe, the less it appears to be reducible to three or four predictable mechanics.
Comment by dgb23 10 hours ago
While browsing it, I of course found one that I disagree with:
Testing Pyramid: https://lawsofsoftwareengineering.com/laws/testing-pyramid/
I think this is backwards.
Another commenter WillAdams has mentioned A Philosophy of Software Design (which should really be called A Set of Heuristics for Software Design) and one of the key concepts there are small (general) interfaces and deep implementations.
A similar heuristic also comes up in Elements of Clojure (Zachary Tellman) as well, where he talks about "principled components and adaptive systems".
The general idea: You should greatly care about the interfaces, where your stuff connects together and is used by others. The leverage of a component is inversely proportional to the size of that interface and proportional to the size of its implementation.
I think the way that connects to testing is that architecturally granular tests (down the stack) are a bit like pouring molasses into the implementation, rather than focusing on what actually matters, which is what users care about: the interface.
Now of course we as developers are the users of our own code, and we produce building blocks that we then use to compose entire programs. Having example tests for those building blocks is convenient and necessary to some degree.
However, what I want to push back on is the implied idea of having to hack apart or keep apart pieces so we can test them with small tests (per method, function etc.) instead of taking the time to figure out what the surface areas should be and then testing those.
If you need hyper granular tests while you're assembling pieces, then write them (or better: use a REPL if you can), but you don't need to keep them around once your code comes together and you start to design contracts and surface areas that can be used by you or others.
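A small illustration of testing the surface area rather than the internals (names invented): the public function is asserted against its contract, and the private helper is exercised only through it, so the helper can be rewritten or inlined without breaking any test.

```python
def _normalize(word: str) -> str:
    # Private helper: an implementation detail, free to change.
    return word.strip().lower()

def word_counts(text: str) -> dict:
    """The public surface: this contract is what gets tested."""
    counts = {}
    for raw in text.split():
        w = _normalize(raw)
        counts[w] = counts.get(w, 0) + 1
    return counts
```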
Comment by nazgul17 9 hours ago
Comment by wesselbindt 10 hours ago
- Not realizing it's a very concrete theorem applicable in a very narrow theoretical situation, and that its value lies not in the statement itself but in the way of thinking that goes into the proof.
- Stating it as "pick any two". You cannot pick CA. Under the conditions of the CAP theorem it is immediately obvious that CA implies you have exactly one node. And guess what, then you have P too, because there's no way to partition a single node.
A much more usable statement (which is not a theorem but a rule of thumb) is: there is often a tradeoff between consistency and availability.
Comment by traderj0e 26 minutes ago
Comment by urxvtcd 9 hours ago
Comment by cientifico 7 hours ago
The UX pyramid but applied to DX.
It basically states that you should not focus on making something significantly enjoyable or convenient if you don't have something that is usable, reliable, or even remotely functional.
Comment by hintymad 6 hours ago
Comment by superxpro12 9 hours ago
Comment by compiler-guy 8 hours ago
Comment by bpavuk 10 hours ago
ha, someone needs to email Netlify...
Comment by arnorhs 9 hours ago
https://web.archive.org/web/20260421113202/https://lawsofsof...
Comment by milanm081 9 hours ago
Comment by alsetmusic 2 hours ago
> The less you know about something, the more confident you tend to be.
From the first line on the wiki article:
> systematic tendency of people with low ability in a specific area to give overly positive assessments of this ability.
Or, said another way, the more you know about something the more complexities you're aware of and the better assessment you can make about topics involving such. At least, that's how I understand it in a nutshell without explaining the experiments run and the observations that led to the findings.
Comment by matt765 1 hour ago
Comment by computerdork 6 hours ago
Because rewriting old complex code is way more time consuming than you think it'll be. You have to add back not only the same features, but all the corner cases that your system ran into in the past.
Have seen this myself. A large team spent an entire year of wasted effort on a clean rewrite of a key system (the shopping cart at a high-volume website) that never worked... ...although, in the age of AI, I wonder if a rewrite would be easier than in the past. Still, guessing even then, it'd be better if the AI refactored the existing code first as a basis for reworking it, as opposed to the AI doing a clean rewrite from scratch.
Comment by namenotrequired 6 hours ago
Comment by computerdork 5 hours ago
As you probably know, there is a tendency when new developers join a team to hate the old legacy code - one of the toughest skills is being able to read someone else's code - so they ask their managers to throw it away and rewrite it. This is rarely worth it and often results in a lot of time being spent recreating fixes for old bugs and corner cases. Much better use of time to try refactoring the existing code first.
Although, can see why you mentioned it from the initial example that I gave (on that rewrite of the shopping cart) which is also covered by the "second system effect." Yeah, thinking back, have seen this too. Overdesign can get really out of hand and becomes really annoying to wade through all that unnecessary complexity whenever you need to make a change.
Comment by ebonnafoux 9 hours ago
> The first 90% of the code accounts for the first 90% of development time; the remaining 10% accounts for the other 90%.
It should be 90% code - 10% time / 10% code - 90% time
Comment by Edman274 9 hours ago
Comment by ebonnafoux 8 hours ago
Comment by amelius 2 hours ago
Comment by pcblues 7 hours ago
Relax. You will make all the mistakes because the laws don't make sense until you trip over them :)
Comment your code? Yep. Helped me ten years later working on the same codebase.
You can't read a book about best practices and then apply them as if wisdom is something you can be told :)
It is like telling kids, "If you do this you will hurt yourself" YMMV but it won't :)
Comment by herodotus 9 hours ago
Comment by jaggederest 6 hours ago
Comment by HoldOnAMinute 5 hours ago
Comment by bofia 3 hours ago
Comment by vpol 11 hours ago
Comment by JensRantil 9 hours ago
Comment by AtNightWeCode 1 hour ago
Comment by d--b 11 hours ago
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Comment by tgv 10 hours ago
Anyway, the list seems like something AI scraped and has a strong bias towards "gotcha" comments from the likes of reddit.
Comment by smikhanov 8 hours ago
This one belongs to history books, not to the list of contemporary best practices.
Comment by Antibabelic 11 hours ago
Comment by horsawlarway 10 hours ago
Look, I understand the intent you have, and I also understand the frustration at the lack of care with which many companies have acted with regards to personal data. I get it, I'm also frustrated.
But (it's a big but)...
Your suggestion is that we hold people legally responsible and culpable for losing a confrontation against another motivated, capable, and malicious party.
That's... a seriously, seriously, different standard than holding someone responsible for something like not following best practices, or good policy.
It's the equivalent of killing your general when he loses a battle.
And the problem is that sometimes even good generals lose battles, not because they weren't making an honest effort to win, or being careless, but because they were simply outmatched.
So to be really, really blunt - your proposal basically says that any software company should be legally responsible for not being able to match the resources of a nation-state that might want to compromise their data. That's not good policy, period.
Comment by Antibabelic 10 hours ago
Comment by horsawlarway 9 hours ago
What we don't do in engineering is hold the engineer responsible when Russia bombs the bridge.
What you're suggesting is that we hold the software engineer responsible when Russia bombs their software stack (or more realistically, just plants an engineer on the team and leaks security info, like NK has been doing).
Basically - I'm saying you're both wrong about lacking standards, and also suggesting a policy that punishes without regard for circumstance. I'm not saying you're wrong to be mad about general disregard for user data, but I'm saying your "simple and clear" solution is bad.
... something something... for every complex problem there is an answer that is clear, simple, and wrong.
France killed their generals for losing. It was terrible policy then and it's terrible policy now.
Comment by fineIllregister 9 hours ago
Comment by horsawlarway 9 hours ago
Ex - MMG for 2026 was prosecuted because:
- They failed to notify in response to a breach.
- They failed to complete proper risk analysis as required by HIPAA
They paid 10k in fines.
It wasn't just "they had a data breach" (the OP's proposal...), it was "they failed to follow standards, which led to a data breach, and then they acted negligently."
In the same way that we don't punish an architect if their building falls over. We punish them if the building falls over because they failed to follow expected standards.
Comment by jcgrillo 7 hours ago
Comment by datadrivenangel 5 hours ago
Comment by jcgrillo 9 hours ago
No. Not the company, holding companies responsible doesn't do much. The engineer who signed off on the system needs to be held personally liable for its safety. If you're a licensed civil engineer and you sign off on a bridge that collapses, you're liable. That's how the real world works, it should be the same for software.
Comment by horsawlarway 9 hours ago
Comment by jcgrillo 9 hours ago
These kinds of failures are not inevitable. We can build sociotechnical systems and practices that prevent them, but until we're held liable--until there's sufficient selection pressure to erode the "move fast and break shit" culture--we'll continue to act negligently.
Comment by horsawlarway 8 hours ago
It seems like your issue is that we don't hold all companies to those standards. But I'm personally ok with that. In the same way I don't think residential homes should be following commercial construction standards.
Comment by jcgrillo 8 hours ago
That doesn't worry me overly much.
> What do you think SOC 2 type 2 and ISO 27001 are?
They're compliance frameworks that have little to no consequences when they're violated, except for some nebulous "loss of trust" or maybe in extreme cases some financial penalties. The problem is the expectation value of the violation penalty isn't sufficient to change behavior. Companies still ship code which violates these things all the time.
> It seems like your issue is that we don't hold all companies to those standards.
Yes, and my issue is that we don't hold engineers personally liable for negligent work.
> I don't think residential homes should be following commercial construction standards.
Sure, there are different gradations of safety standards, but often residential construction plans require sign-off by a professional engineer. In the case when an engineer negligently signs off on an unsafe plan, that engineer is liable. Should be exactly the same situation in software.
Comment by yesitcan 7 hours ago
Comment by serious_angel 6 hours ago
Comment by datadrivenangel 5 hours ago
Comment by Divergence42 5 hours ago
Comment by grahar64 11 hours ago
Comment by stingraycharles 11 hours ago
It’s not a great list. The good old c2.com has many more, better ones.
Comment by layer8 10 hours ago
Comment by Waterluvian 9 hours ago
Comment by Divergence42 5 hours ago
Comment by bronlund 10 hours ago
Comment by samuelknight 6 hours ago
Comment by exiguus 6 hours ago
Comment by serious_angel 6 hours ago
Comment by cogman10 9 hours ago
> Premature Optimization (Knuth's Optimization Principle)
> Another example is prematurely choosing a complex data structure for theoretical efficiency (say, a custom tree for log(N) lookups) when the simpler approach (like a linear search) would have been acceptable for the data sizes involved.
This example is the exact example I'd choose where people wrongly and almost obstinately apply the "premature optimization" principles.
I'm not saying that you should write a custom hash table whenever you need to search. However, I am saying that there's a 99% chance your language has a built-in data structure in its standard library for doing hash table lookups.
The code to use that data structure vs using an array is nearly identical and not the least bit hard to read or understand.
And the reason you should just do the optimization is because when I've had to fix performance problems, it's almost always been because people put in nested linear searches turning what could have been O(n) into O(n^3).
But further, when Knuth was talking about actual premature optimization, he was not talking about algorithmic complexity. In fact, that would have been exactly the sort of thing he wrapped into "good design".
When Knuth wrote about not doing premature optimizations, he was living in an era where compilers were incredibly dumb. A premature optimization would be, for example, hand-unrolling a loop to avoid a branch instruction, or hand-inlining functions to avoid method call overhead. That does make code nastier and harder to deal with. That is to say, the specific optimizations Knuth was talking about are the optimizations compilers today do by default.
I really hate that people have taken this to mean "Never consider algorithmic complexity". It's a big reason so much software is so slow and kludgy.
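A minimal sketch of the pattern the comment describes, using hypothetical order/customer records: the nested linear search is O(n·m), while building a dict index first (the standard library's hash table) makes it O(n + m) with code that is just as readable.

```python
def match_naive(orders, customers):
    """Nested linear search: scans all customers for each order, O(n * m)."""
    matched = []
    for order in orders:
        for customer in customers:  # linear scan per order
            if customer["id"] == order["customer_id"]:
                matched.append((order, customer))
                break
    return matched


def match_indexed(orders, customers):
    """Same result using the language's built-in hash table, O(n + m)."""
    by_id = {c["id"]: c for c in customers}  # built once, O(m)
    return [(o, by_id[o["customer_id"]])     # O(1) lookup per order
            for o in orders
            if o["customer_id"] in by_id]


orders = [{"customer_id": i % 3, "item": "x"} for i in range(6)]
customers = [{"id": i, "name": f"c{i}"} for i in range(3)]
assert match_naive(orders, customers) == match_indexed(orders, customers)
```

The indexed version is no harder to read than the nested loops, which is the comment's point: reaching for the standard hash table here isn't "premature optimization," it's just good design.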
Comment by krust 6 hours ago
To be fair, a linear search through an array is, most of the time, faster than a hash table for sufficiently small data sizes.
Comment by cogman10 5 hours ago
It doesn't take long for hash or tree lookups to start outperforming linear search and, for small datasets, it's not frequently the case that the search itself is a performance bottleneck.
Comment by bigfishrunning 8 hours ago
Comment by asdfman123 4 hours ago
In most places, people don't follow this rule, because it means either working an extra 10-20 hours a week to keep things clean, or staying stuck at mid-level for not making enough impact.
I choose the second option. But I see people who utterly trash the codebase get ahead.
Comment by blauditore 5 hours ago
Comment by 0xbadcafebee 8 hours ago
Here's another law: the law of Vibe Engineering. Whatever you feel like, as long as you vibe with it, is software engineering.
Comment by lenerdenator 8 hours ago
That one's free.
Comment by contingencies 4 hours ago
Comment by eranation 4 hours ago
- NIH
- GIGO
- Rule of 3
Comment by kittikitti 4 hours ago
Comment by James_K 10 hours ago
Comment by duc_minh 10 hours ago
Site not available This site was paused as it reached its usage limits. Please contact the site owner for more information.
Comment by rtrigoso 10 hours ago
Comment by IshKebab 11 hours ago
Comment by _dain_ 11 hours ago
https://lawsofsoftwareengineering.com/laws/premature-optimiz...
It leaves out this part from Knuth:
>The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.
Knuth thought an easy 12% was worth it, but most people who quote him would scoff at such efforts.
Moreover:
>Knuth’s Optimization Principle captures a fundamental trade-off in software engineering: performance improvements often increase complexity. Applying that trade-off before understanding where performance actually matters leads to unreadable systems.
I suppose there is a fundamental tradeoff somewhere, but that doesn't mean you're actually at the Pareto frontier, or anywhere close to it. In many cases, simpler code is faster, and fast code makes for simpler systems.
For example, you might write a slow program, so you buy a bunch more machines and scale horizontally. Now you have distributed systems problems, cache problems, lots more orchestration complexity. If you'd written it to be fast to begin with, you could have done it all on one box and had a much simpler architecture.
Most times I hear people say the "premature optimization" quote, it's just a thought-terminating cliche.
Comment by hliyan 9 hours ago
Comment by randusername 5 hours ago
- The customer is always right in matters of taste
- Jack of all trades, master of none, but oftentimes better than a master of one
- Curiosity killed the cat, but satisfaction brought it back
- A few bad apples spoil the barrel
- Great minds think alike, though fools seldom differ
Even "pull yourself up by your bootstraps" was originally meant to highlight the absurd futility of a situation.
Comment by dgb23 9 hours ago
I wholeheartedly agree with you here. You mentioned a few architectural/backend issues that emerge from bad performance and introduce unnecessary complexity.
But this also happens in UI: Optimistic updates, client side caching, bundling/transpiling, codesplitting etc.
This is what happens when people always answer performance problems by adding stuff rather than removing stuff.
Comment by cogman10 8 hours ago
Just a little historic context will tell you what Knuth was talking about.
Compilers in the era of Knuth were extremely dumb. You didn't get things like automatic method inlining or loop unrolling, you had to do that stuff by hand. And yes, it would give you faster code, but it also made that code uglier.
The modern equivalent would be seeing code working with floating points and jumping to SIMD intrinsics or inline assembly because the compiler did a bad job (or you presume it did) with the floating point math.
That is such a rare case that I find the premature optimization quote to almost always be wrong when deployed. It always seems to be an excuse to deploy linear searches and to avoid using (or learning?) language data structures which solve problems very cleanly in less code and much less time (and sometimes with less memory).
Comment by Xiaoher-C 10 hours ago
Comment by rapatel0 8 hours ago
"Before SpaceX, launching rockets was costly because industry practice used expensive materials and discarded rockets after one use. Elon Musk applied first-principles thinking: What is a rocket made of? Mainly aluminum, titanium, copper, and carbon fiber. Raw material costs were a fraction of finished rocket prices. From that insight, SpaceX decided to build rockets from scratch and make them reusable."
Everything, including humans, is made of cheap materials, but that doesn't convey the value. The AI got close to the answer with its first sentence (reusability), but it clearly missed the mark.
Comment by andreygrehov 9 hours ago
Comment by Lapsa 7 hours ago
Comment by garff 6 hours ago
Comment by bakkerinho 10 hours ago
Law 0: Fix infra.
Comment by andrerpena 10 hours ago
Comment by mghackerlady 9 hours ago
Comment by asdfasgasdgasdg 9 hours ago
Comment by arnorhs 9 hours ago
If you are saying you _can_ fix 90-99% of performance bottlenecks eventually with caching, that may be true, but it doesn't sound as nice.
Comment by hermaine 9 hours ago
Comment by jvanderbot 9 hours ago
Posterior probability of a prompt-created website: 99%.
Comment by the_arun 9 hours ago
Comment by kurnik 9 hours ago
Comment by milanm081 9 hours ago
Comment by esafak 9 hours ago
Comment by Steinmark 2 hours ago
Comment by milanm081 12 hours ago
Comment by jdw64 8 hours ago
Comment by threepts 10 hours ago