I’m spending months coding the old way
Posted by evakhoury 3 days ago
Comments
Comment by apricot 3 days ago
Now, they are programming a chip from the seventies using an editor/assembler that was written in 1983 and has a line editor, not a full-screen one.
We had a total of 10 hours of class + lab where I taught them about assembly language and told them about the registers, instructions, and addressing modes of the chip, memory map and monitor routines of the Apple, and after that we went and wrote a few programs together, mostly using the low-resolution graphics mode (40x40): a drawing program, a bouncing ball, culminating in hand-rolled sprites with simple collision detection.
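For readers following along, the simple collision detection the lab builds up to can be sketched like this (in Python rather than 6502 assembly, purely as an illustration; the function name and coordinates are invented):

```python
# Axis-aligned bounding-box test: the usual "simple collision detection"
# for rectangular sprites, whether written in Python or 6502 assembly.
def sprites_collide(ax, ay, aw, ah, bx, by, bw, bh):
    """True if sprite A (x, y, width, height) overlaps sprite B."""
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```

On a 40x40 low-res screen every coordinate fits in a single byte, which is part of what makes the exercise tractable in assembly.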
Their assignment is to write a simple program (I suggested a low-res game like Snake or Tetris but they can do whatever they want provided they tell me about it and I okay it), demo their program, and then explain to the class how it works.
At first they hated the line editor. But then a very interesting thing happened. They started thinking about their code before writing it. Planning. Discussing things in advance. Everything we told them they should do before coding in previous classes, but they didn't do because a powerful editor was right there so why not use it?...
And then they started to get used to the line editor. They told me they didn't need to really see the code on the screen, it was in their head.
They will of course go back to modern tools after class is finished, but I think it's good for them to have this kind of experience.
Comment by zrobotics 3 days ago
I've had other people look askance at me, but on greenfield work I tend to start with pen and graph paper. I'm not even writing pseudocode, but diagramming a loose graph of potential functions or classes with arrows interconnecting them. Obviously this can be taken too far; full waterfall planning would be a different exercise in frustration.
I find spending a few hours planning out ahead of time before opening an editor saves me tons of time actually coding. I've never had a project even loosely resemble the paper diagram, but the exercise of thinking through the general structure ahead of time makes me way more productive when it comes time to start writing code. I've tried diagramming and scaffolding in my editor, but then I end up actually writing code instead of big picture diagramming. Writing it on paper where I know I'll have to retype everything anyway removes the distractions of what method to use or what to name a variable.
The few times I've vibe-coded something this was super helpful, since then I can give much more concrete and focused prompts.
Comment by jimbokun 2 days ago
Doing this exact same process interactively with other people, with a note to NOT ERASE, or later taking a picture of the whiteboard with your phone.
Comment by bsaul 2 days ago
Comment by wrs 2 days ago
Comment by josh_s 2 days ago
Perfect for a distributed team to replace the DO NOT ERASE white boards of yore.
Comment by bsaul 1 day ago
Comment by jimbokun 2 days ago
Comment by BrandoElFollito 2 days ago
Comment by hackable_sand 2 days ago
Same with notes that you will never see again. Done in pen, on random pages.
That process is bulletproof, for me.
Comment by bdangubic 2 days ago
Comment by chrisweekly 2 days ago
Comment by spockz 3 days ago
All this to say that it is extremely useful to have the program and the problem space in your head and to be able to reason about it beforehand. It makes it clearer what you expect and easier to catch when something unexpected happens.
Comment by econ 2 days ago
Then with each year you grow more paranoid when there are no bugs or typos.
Comment by dehrmann 2 days ago
Comment by hackable_sand 2 days ago
That includes experimentation.
Comment by pipes 2 days ago
As a sort of adjacent point, I worked through the book used in a course often called "From Nand to Tetris". It is probably the best thing I've done in terms of understanding how computers, assemblers, and compilers work.
Comment by shevy-java 2 days ago
I am not sure whether the statement is correct; I am not sure whether the statement is incorrect either. But I tested many editors and IDEs over the years.
IDEs can be useful, but they also hide abstractions. I noticed this with IntelliJ IDEA in particular; before I used it I was using my old, simple editor, and ruby as the glue for numerous actions. So when I want to compile something, I just do, say:
run FooBar.java
And this can do many things for me, including generating a binary via GraalVM and taking care of options. "run" is an alias for run.rb, which in turn handles running anything on my computer. In the IDE, I would have to add some config options, and finding them is annoying; often I can't do things I do via the command line. So when I went to use the IDE, I felt limited and crippled in what I could do. My whole computer is actually an IDE already - not as convenient as a good GUI, of course, but I have all the options I want or need, and I can change and improve on each of them. Ruby acts as generic glue towards everything else on Linux here. It's perhaps not as sophisticated as a good IDE, but I can think in terms of what I want to do, without having to adjust to an IDE. This was also one reason I abandoned vim - I no longer wanted to have my brain adjust to vim. I am too used to adjusting the language to how I think; in Ruby this is easily possible. (In Java not so much, but one kind of has to combine Ruby with a faster language too, be it C, C++, Go, Rust ... or Java. Ruby could also be replaced, e.g. with Python, so I feel that discussion is very similar; they are in a similar niche of usage too.)
Comment by bccdee 2 days ago
Conveniences sometimes make things more complicated in the long run, and I worry that code agents (the ultimate convenience) will lead to a sort of ultimate carelessness that makes our jobs harder.
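As a concrete illustration of the glue-script approach described above (a `run` alias dispatching on file type), here is a minimal sketch in Python, which that commenter notes is interchangeable with Ruby here. The handler table is invented, not the actual run.rb:

```python
import subprocess
import sys
from pathlib import Path

# Hypothetical dispatch table; a real glue script would do far more
# (GraalVM binaries, compiler options, etc.).
HANDLERS = {
    ".java": ["java"],     # since Java 11, `java Foo.java` compiles and runs
    ".rb":   ["ruby"],
    ".py":   ["python3"],
}

def command_for(path: str) -> list[str]:
    """Map a source file to the command line that runs it."""
    ext = Path(path).suffix
    if ext not in HANDLERS:
        raise ValueError(f"no handler registered for {ext!r}")
    return HANDLERS[ext] + [path]

if __name__ == "__main__" and len(sys.argv) > 1:
    subprocess.run(command_for(sys.argv[1]), check=True)
```

The point is that the dispatch logic lives in a file you can read and extend, rather than in an IDE's configuration dialogs.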
Comment by dijksterhuis 2 days ago
i was working in a place that had a real tech debt laden system. it was an absolute horror show. an offshore dev, the “manager” guy and i were sitting in a zoom call and i was ranting about how over complicated and horrific the codebase was, using one component as a specific example.
the offshore dev proceeded to use the JetBrains Ctrl + B keybind (jump to usages/definitions) to try and walk through how it all worked — “it’s really simple!” he said.
after a while i got frustrated, and interrupted him to point out that he’d had to navigate across something like 4 different files, multiple different levels of class inheritance and i don’t know how many different methods on those classes just to explain one component of a system used by maybe 5 people.
i used nano for a lot of that job. it forced me to be smarter by doing things simpler.
Comment by jimbokun 2 days ago
Comment by SoftTalker 2 days ago
Comment by drzaiusx11 3 days ago
Today I program 6502/7 asm for my Atari to help me unwind and it grounds me and gives me joy, while in my day job I'm easily 10 levels of abstractions higher.
Comment by ggerules 2 days ago
Comment by drzaiusx11 2 days ago
Comment by AlotOfReading 2 days ago
Comment by drzaiusx11 1 day ago
Comment by tikotus 3 days ago
But a few hours (or days) in, I forget what the problem was. A part of my brain wakes up. I start thinking about what I'm passing around, I start recognizing the types from the context and names...
It's just a different way of thinking.
I recognized the same feeling after vibe coding for too long and taking back the steering wheel. I decided I'd never let go again.
Comment by genxy 2 days ago
My best LLM written code is where I did a prototype of the overall structure of the program and fed that structure along with the spec and the goal. It is kind of the cognitive bitter lesson, the more you think the better the outcome. Always bet on thinking.
Comment by okeuro49 2 days ago
Refactoring is a nightmare, as types don't exist, so the compiler can't help you if you try to access a property that doesn't exist.
I think generally people have realised this, and there are attempts to retrofit types onto dynamically typed languages.
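Python is one example of such a retrofit: optional annotations checked by external tools. A minimal illustration (the class and the typo are invented):

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    email: str

def greeting(u: User) -> str:
    # With annotations, a static checker such as mypy flags a reference
    # like u.nmae as a nonexistent attribute before the program runs.
    # Without them, the typo only surfaces as a runtime AttributeError
    # on whatever code path happens to execute it.
    return f"Hello, {u.name}"

u = User("Ada", "ada@example.com")
try:
    u.nmae  # typo: at runtime this is just an AttributeError
except AttributeError:
    pass
```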
Comment by zahlman 2 days ago
Comment by hyperhello 2 days ago
Comment by zahlman 2 days ago
Comment by BoingBoomTschak 2 days ago
CL has a pretty anemic type system, but at least it does gradual typing without having to resort to this.
Comment by hyperhello 2 days ago
Comment by HappMacDonald 2 days ago
Comment by hyperhello 2 days ago
Comment by BoingBoomTschak 2 days ago
Same reason my views about GC evolved from "it's for people lacking rigour" to "that's true, but there's a second benefit: no interleaving of memory handling and business logic to hurt clarity".
Comment by wffurr 3 days ago
https://www.gnu.org/fun/jokes/ed-msg.html
My first job out of university I was taught how to use a line editor in IBM UniData. It was interesting getting used to writing code that way.
But it was an amazing day when I discovered that the "program table" was just a directory on the server I could mount over FTP and use Notepad++.
Comment by andersmurphy 2 days ago
If you're prepared to forgo some portability and pick an architecture, assembly opens up a lot of options. Things like coroutines and automatic SIMD become easier to implement. It's also got amazing zero-cost C FFI (and I'm only half joking). Booting the Linux kernel into a minimal STC Forth is a lot of fun.
Not to mention you can run your code on android without SDK or NDK over ADB (in the case of aarch64).
Comment by hiroboto 2 days ago
Comment by andersmurphy 2 days ago
Comment by cobbzilla 3 days ago
Comment by sagacity 3 days ago
Comment by assimpleaspossi 2 days ago
Comment by fuzztester 2 days ago
Comment by s1mplicissimus 3 days ago
Comment by TeMPOraL 2 days ago
Comment by cobbzilla 2 days ago
Comment by ggerules 2 days ago
Or....
Ctrl-Z
Comment by ggerules 2 days ago
One of the continuous battles I kept losing was introducing an assembly language undergraduate course. Higher-up colleagues and deans would say... too hard... nobody uses that anymore... and shut the course down. But I would always sneak it into other courses I taught: systems programming, computer languages, computer architecture. Still, I've always felt there was a hole in my students' understanding of computers.
I grew up in a time when assembly language was part of the curriculum. It helped bridge the gap to higher-level languages like C/C++, and explained why certain language features exist and how many language constructs work. More importantly, as pointed out by the two posters above, it gives you a way to think, one asm line at a time, about what is going on in the CPU ecosystem. That is fantastic training!
Even though I kept losing the assembly language course battles, I hope I planted enough seeds that students will take it up on their own at some point. Everyone should learn to program in at least one assembly language.
Comment by p2detar 2 days ago
As someone who used to write C and assembly programs on a sheet of paper for university exams, I chuckled a bit. I finished university in a post-Soviet country twenty years ago or so, and this was the norm. I used to hate it so much.
Comment by FrankRay78 2 days ago
Comment by fouc 1 day ago
Comment by neocron 2 days ago
Comment by YZF 2 days ago
When I built my first guitar I had very few tools so I used what I had since I'm cheap ;) Then I bought better tools and it made my life a lot easier. But I got some lessons from the experience. Mostly though it was a pain that's solved by better tooling.
Comment by deepsun 2 days ago
Spaces are sometimes mandatory sometimes not. Something I didn't even think might be confusing, for me it's like breathing.
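A concrete Python example of the distinction (snippet invented for illustration): some whitespace is pure style, some is syntax.

```python
# Optional: spaces around operators are style, not syntax.
x=1+2          # legal, if ugly
y = 1 + 2      # the same thing

# Mandatory: indentation *is* the flow control.
if x > 2:
    x += 1     # un-indenting this line raises IndentationError
```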
Comment by Neywiny 2 days ago
Contrasting that to helping a college roommate with Arduino code he said he didn't understand what it was doing: he had 0 indentation. Braces everywhere. He didn't understand what it was doing not because it was complex logic (it was only maybe 30 simple lines) but because his flow control was visually incomprehensible. It's pretty hard to do that in Python.
But that's why I believe in being a polyglot. Best of multiple worlds.
Comment by ChrisMarshallNY 3 days ago
I am currently working in Swift, with an LLM, on a fairly good-sized app, in Xcode, for a device that probably has a minimum of 64 GB of storage, and 8 GB of RAM.
I don’t really miss the good ol’ days, to be honest. I’m having a blast.
Comment by flawn 3 days ago
Comment by philipnee 3 days ago
Comment by MattBearman 3 days ago
Comment by mchaver 2 days ago
Comment by ssgodderidge 3 days ago
Comment by apricot 3 days ago
Comment by sitzkrieg 3 days ago
scaring people away w x86 cruft right out the gate is no good for anyone :-)
Comment by MaxBarraclough 2 days ago
Comment by tremon 2 days ago
Comment by MaxBarraclough 2 days ago
Looks like I'm mistaken on terminology though. x86 includes the 16-bit, 32-bit, and 64-bit ISAs of that family, and doesn't refer specifically to the 32-bit generation.
Comment by juliendorra 2 days ago
Comment by sixtyj 3 days ago
Comment by senordevnyc 2 days ago
For example, I suspect more startups die from over-analysis than from acting too quickly and breaking things beyond repair.
That said, I think LLMs can be a mixed bag here. I find that they can really help my analysis phase, by suggesting architectures, finding places where future abstractions will leak, reminding me of how a complex project works, etc. I’ve found it invaluable to go back and forth in a planning phase with an agent before even deciding what exactly I want to build, or how.
And on the implementation side, they make code attempts very cheap, so I can try multiple things and just throw them away if I don’t like the result.
But that said, I do find that it requires discipline, because it's very easy to get into a groove where I don't do any of that, and instead just toss half-baked ideas over the wall and let the agent figure out the details. And it will, and it'll be pretty decent usually, but not as good as if I pair-program with it fully.
Comment by roughly 2 days ago
My understanding is that was the actual point of "move fast and break things" - gain knowledge by trying stuff to help you make better decisions, even if you make a mistake and need to roll back or fix it. The art to this is figuring out how to contain the negative consequences of whatever you're testing, but by all means, experiment early to gather information.
I've stated it to mentees as "don't be afraid to start a fire as long as you know where the fire extinguishers are" - it's OK to fail in the service of learning so long as you fail in a contained way.
Comment by sixtyj 2 days ago
Comment by roughly 2 days ago
1. There's a right answer to every problem in school
2. If you got it wrong, that's bad, and you did bad.
The pattern I've seen from younger people these days is a learned helplessness, where there's no room for them to be creative in school, and any attempt to do so runs the risk of failing an assignment, getting a B, missing out on Harvard, and spending the rest of their lives poor in a ditch, or so they're told.
Comment by IshKebab 2 days ago
I'm pretty skeptical that using a line editor will have helped them learn. It probably helped them memorise their code but is that really learning? Dubious.
Comment by AstroBen 3 days ago
But yeah, my hunch is "the old way" - although not sure we can even call it that - is likely still on par with an "agentic" workflow if you view it through a wider lens. You retain much better knowledge of the codebase. You improve your understanding of coding concepts (active recall is far stronger than passive recognition).
Comment by fouronnes3 3 days ago
Comment by estetlinus 3 days ago
Comment by bdangubic 3 days ago
Comment by lionkor 3 days ago
This is bogus. If you think LLMs write less buggy software, you haven't worked with seriously capable engineers. And now, of course, everyone can become such an engineer if they put in the effort to learn.
But why not just use the AI? Because you can still use the AI once you're seriously good.
Comment by roganartu 2 days ago
Perhaps because the jury is still out on whether one can become “seriously good” by using AI if they weren’t before.
Comment by lionkor 2 days ago
Comment by vaginaphobic 2 days ago
Comment by jb1991 3 days ago
Comment by bdangubic 2 days ago
Comment by saulpw 3 days ago
Comment by vips7L 2 days ago
Comment by cppluajs 3 days ago
Comment by ksymph 3 days ago
I hope if/when diffusion models get a little more traction down the line it'll put some new life into autocomplete(-adjacent) workflows. The virtually instantaneous responses of Inception's Mercury models [0] still feel a little like magic; all it's missing is the refinement and deep editor integration of Cursor.
On the subject of diffusion models, it's a shame there aren't any significant open-weight models out there, because it seems like such a perfect fit for local use.
Comment by ZihangZ 2 days ago
When I let an agent write too much of the structure, the code may work, but a week later every small change starts with "where did it put that?"
Comment by heyalexhsu 3 days ago
Comment by bluefirebrand 3 days ago
Comment by justapassenger 3 days ago
Comment by latexr 2 days ago
Comment by heyalexhsu 1 day ago
I have ADHD, and just brainstorming with AI helps me initiate.
Of course, you need to be the ultimate gatekeeper or else there will be quality issues. But isn't that the same when we write manual code? AI is just another tool in your toolkit.
Comment by orphea 2 days ago
Comment by oneeyedpigeon 3 days ago
Comment by heyalexhsu 1 day ago
I agree with the premise of the article but I just don't think going back to manual coding is the solution.
Here's my new attempt using puzzle as an analogy which I wrote yesterday:
Starting last year, I noticed coding was getting less fun. It’s like buying a puzzle set and finding out there’s an auto-complete button. Press it and the puzzle solves itself. Faster than me, better than me, prettier than me. It’s like playing a game with cheats on.
I don’t even have to touch the pieces anymore. I just tell the auto-solver what I want. Tell it I want a bird, it gives me a bird. A pirate ship? Here’s a pirate ship. At first I never imagined it could do a rocket, but with its help, that went from fantasy to reality fast.
Sometimes it doesn’t quite match what I wanted, but usually just telling it what’s wrong fixes things. The whole process is so fast that, if nothing’s broken, I don’t even bother looking at how it actually solved it. That would just waste time.
But coding felt less fun with this new assist mode.
The fun of puzzle-solving is gone. That feeling of trekking through the hard parts and finally reaching the summit is gone. Now it’s like taking a cable car up.
Before, I had to think alone for a long time, try things, experiment, until I finally cracked the problem. Now with the assist mode, it’s like doing college homework where the teacher already has the answer key. I just ask and I get a standard answer.
Coding went from craft to management. “I” went from a craftsman with standards to a foreman watching workers do the job. It’s just not the same. And “foreman” sounds kind of weak.
Comment by AstroBen 3 days ago
Comment by FrankRay78 2 days ago
Comment by otabdeveloper4 3 days ago
Yes, AI unlocks coding for people who fail FizzBuzz. This isn't really relevant to making software though.
Comment by resonancel 3 days ago
Comment by armchairhacker 2 days ago
I usually code faster with good (next-edit) autocomplete than by writing a prompt and waiting for the agent.
Comment by theshrike79 1 day ago
This is like saying "EV charging is soooo slooooow" and thinking you need to stand next to the car holding the nozzle in the charging port like with a petrol car.
Of course you go do something else unless it's a literal 30 second operation.
Comment by HDThoreaun 3 days ago
Comment by 59nadir 2 days ago
I think most people "moved on" because they both thought the agent workflow is cooler and were told by other people that it works. The latter was false for quite some time, and is only correct now insofar as you can probably get something that does what you asked for, but executed exceedingly poorly no matter how much SpecLang you layer on top of the prompting problem.
Comment by wavemode 3 days ago
> Everyone moved on
> it is not a useful interface
You've made three claims in your brief comment and all appear to be false. Can you elaborate on what you mean by any of this?
Comment by fg137 2 days ago
In some codebases, autocomplete is the most accurate and efficient way to get things done, because "agentic" workflows only produce unmaintainable mess there.
I know that because there are several times where I completely removed generated code and instead coded by hand.
Comment by lkirkwood 3 days ago
I just wish I knew of a good Emacs AI auto complete solution.
Comment by allthetime 3 days ago
Comment by temporallobe 3 days ago
Comment by andsoitis 3 days ago
For me it was GW-BASIC and no editor as we know them today.
That was instant gratification, rapid development, no silly layers. It was pretty pure. It is what hooked me.
In a sense, agentic coding has brought back the excitement of building software for me, because I don't have to wrangle all the crazy enterprise or other modern development considerations directly. There's a closer connection between thought and result, which is the magic that captured my imagination.
Comment by abcde666777 3 days ago
But when it comes to the final act I find myself unwilling to let an LLM write the actual code - I still do it myself.
Perhaps because my main project at the moment is a game I've been working on for four years, so the codebase is sizable, non-trivial, and all written by me. My strong sense even since coding LLMs showed up has been that continuing to write the code is important for keeping it coherent and manageable as a whole, including my mental model of it.
And also: for keeping myself happy working on it. The enjoyment would be gone if I leaned that far into LLMs.
Comment by bschwindHN 3 days ago
Despite what some might say, there isn't a big moat between those who use LLMs for programming and those who don't. So if I ever truly need to use LLMs to survive, I'll just have to start paying for a subscription.
In the meantime, I'll be keeping my own skills sharp and see how that turns out in a few years. I'm afraid software quality is going to take a nosedive in the near future, it was already on a downward trend.
Comment by phaser 3 days ago
Then, when credits run out, it's show time! The code is neatly organized, the abstractions make sense, and the comments are helpful, so I have solid ground for some good old organic human coding. When I'm approaching limits, I make sure to ask the AI to set the stage.
I used to get frustrated when credits ran out, because the AI was making something I would need to study to comprehend. Now I'm eager for the next "brain time hand-out".
It sounds weird but it’s a form of teamwork. I have the means to pay for a larger plan but i’d rather keep my brain active.
Comment by brianush1 3 days ago
That's an interesting thing to include. I agree with this point in principle, but I've found that Claude, at least, duplicates logic FAR too often and needs nudging in the other direction.
Comment by deaux 3 days ago
You hit on a very important point here. The linked AGENTS.md is a bad idea for general purpose use because the things it's meant to tackle, including an inherent bias towards or against DRY, is one of the big differences between model families. GPT 5.4 Codex has a very different "coding personality" from Claude Opus.
It's a product of whatever model it was tested on.
Comment by neonstatic 3 days ago
I can't do it. If I let an LLM write code for me, that code is untouchable. I see it as a black box, that I will categorically refuse to open. If it works, I use it, but don't trust it. If it breaks, I get frustrated. The only way that works for me is me behind the driving wheel at all times and an LLM as an assistant that answers my questions. We either brainstorm something or it helps me express things I know in languages syntax. Somehow that step has always been a bit of a burden for me - I understood the concepts well, but expressing them in syntax was a bit of a difficulty.
Comment by stringfood 3 days ago
Comment by hgomersall 3 days ago
Comment by LeCompteSftware 2 days ago
People should have read to the end of "Building a C compiler with a team of parallel Claudes"[1]:
The resulting compiler has nearly reached the limits of Opus [4.6]’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
"tried (hard!)" is very ominous. I wonder how Mythos would fare. Presumably it would get further, maybe much further. But I strongly doubt the "frequently broke existing functionality" problem was solved. Eventually humans have to understand the most difficult parts of the code. Good luck with that!
[1] https://www.anthropic.com/engineering/building-c-compiler
Comment by neonstatic 3 days ago
Comment by Moonye666 3 days ago
Comment by Moonye666 3 days ago
Comment by flawn 3 days ago
Comment by ironman1478 2 days ago
I appreciate that the author understands why doing everything "the old way" is good. AI is a tool, it can't be a replacement for how you think and it can't be a replacement for the actual work.
I wish more people had a desire for the inner workings of things because it makes you better at actually using tools. Implementing compilers, databases, OSes, control systems, etc. is like practicing swimming. Yeah, you might not ever swim again but when you need to the muscle memory will be there when you need to get out of the ocean (I know this is a strained metaphor).
Knowing more can only be a boon to using LLMs for coding and it's really a general problem in ML. I work in a science field as hw / sw engineer and I've seen so many pure data science people say they can replace all our work with a model, flail for 2 years and then their whole org gets canned. If they just read a textbook or collaborated (which they never do, no matter how polite you are), they'd have been able to leverage their data science skills to build something great and instead they just toil away never making it past step 0.
Comment by sph 3 days ago
Comment by mchaver 2 days ago
Comment by andersmurphy 2 days ago
The shark is being jumped.
Comment by Uptrenda 2 days ago
Comment by zombot 2 days ago
Comment by pydry 1 day ago
When Jensen Huang says you need to spend $500k on tokens per developer per year, he knows it will be perceived as bullshit, but by setting such a high anchor he's subtly making spending $0 seem abnormal and irrational.
This article does the same thing.
It's the same reason most companies have an ultra-deluxe $150/month plan nobody buys.
Comment by tikotus 3 days ago
Comment by Unsponsoredio 1 day ago
Comment by Remdo 2 days ago
Comment by orphea 2 days ago
Comment by 0123456789ABCDE 2 days ago
Comment by eventualcomp 2 days ago
Anyway, I got value out of it; comments don't have to increase net factual information to be meaningful, because we are all capable of reflection.
Comment by mindcrime 3 days ago
Comment by scarface_74 3 days ago
Someone thought I was naive when I said my vibe coded internal web admin site met the security requirements without looking at a line of code.
I knew that because the requirements were that anyone who had access to the site could do anything on the site and the site was secured with Amazon Cognito credentials and the Lambda that served it had a least privileged role attached.
If either of those invariants were broken, Claude has found a major AWS vulnerability.
Comment by Terr_ 3 days ago
Suppose that in normal use a user can visit a certain URL which triggers a dangerous effect. An attacker could trick the user into performing the action by presenting a link to them titled "click here for free stuff."
There are various ways to protect against that (e.g. CSRF tokens, SameSite cookies, not using GET for state-changing actions), but backend cloud credential management does not give you any of them for free.
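One common combination of defenses can be sketched as follows (Python; the function name, token storage, and status codes are illustrative, not tied to any framework): state-changing actions refuse GET, and must echo a per-session token the attacker's page cannot read.

```python
import hmac
import secrets

# Illustrative per-session secret; a real app would store this server-side
# per session and embed it in its own forms.
SESSION_CSRF_TOKEN = secrets.token_hex(16)

def handle_delete_account(method: str, submitted_token: str) -> int:
    """Return an HTTP status code for a state-changing request."""
    if method != "POST":
        return 405  # a bare link can only issue GET, so it can't trigger this
    if not hmac.compare_digest(submitted_token, SESSION_CSRF_TOKEN):
        return 403  # the attacker's page cannot read the session token
    return 200
```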
Comment by scarface_74 3 days ago
The lambda itself only has limited permissions to the backend. The user can’t do anything if the lambda only has permission to one database and certain rights to those tables, one S3 bucket, etc.
Heck with Postgres on AWS you can even restrict a Cognito user to only have access to rows based on the logged in user.
And the database user it’s using only has the minimum access to just do certain permissions.
Comment by mindcrime 3 days ago
Do they? Did you write them? If not, how do you know they confirm the desired behavior? If your tests are AI generated (and not human reviewed) then even if you're doing spec-driven development and provide a comprehensive spec, how can you be sure the tests actually test the desired behavior?
Now if you're either writing or reviewing the tests, then sure.
Also, for what it's worth, when I talk about my "responsibility" I'm speaking more from a self-imposed sense of... um, almost a moral responsibility I feel, not something involving a 3rd party like a customer or employer.
Comment by scarface_74 2 days ago
There is no “morality” when it comes to my job. Outside of my feeling morally obligated to give my employer the benefit of all my accumulated skills for 40-45 hours a week in exchange for the money (and in a previous life RSUs) in my account.
I feel accountable to my coworkers and customers to deal with them fairly and honestly.
What other moral obligation should I have besides my employer, coworkers and customers?
Comment by mindcrime 13 hours ago
Cool. Then what you're doing seems totally reasonable to me, for what that's worth. My skepticism would be directed towards people who have AI write the code and the tests, and then don't do any further review. That, to me, is a sure path to "AI slop". But if you're specifying the desired behavior and reviewing the tests, then I don't see any problem with it.
> What other moral obligation should I have besides my employer, coworkers and customers?
No idea. That's up to you. Note that my comment above was intended to be descriptive, not prescriptive. Like I said, I'm talking about something that's purely a self-imposed thing. If you don't feel that same thing, that's totally fine.
Comment by sgarland 3 days ago
Comment by imtringued 2 days ago
When I use SAML, I still have to check that the user has some sort of attribute indicating that access was granted to the application. If this access rule is defined outside the application, then why bring up Claude? If it isn't, then Claude is responsible for implementing the access rule, which means the comment is 100% wrong.
Comment by guzfip 2 days ago
Comment by scarface_74 2 days ago
https://www.reddit.com/user/Scarface_74/
And I have no idea what TREZOR is…
Comment by Refreeze5224 3 days ago
Comment by losvedir 3 days ago
Comment by scarface_74 3 days ago
Comment by dinkumthinkum 3 days ago
Comment by bitwize 3 days ago
I still keep hoping there'll be a glut of demand for traditional software engineers once the bibbi in the babka goes boom in production systems in a big way:
https://m.youtube.com/watch?v=J1W1CHhxDSk
But agentic workflows are so good now—and bound to get better with things like Claude Mythos—that programming without LLMs looks more and more cooked as a professional technique (rather than a curiosity or exercise) with each passing day. Human software engineers may well end up out of the loop completely except for the endpoints in a few years.
Comment by elAhmo 2 days ago
Comment by stingraycharles 2 days ago
I’m very concerned about how the next generation of software engineers will pick up deep knowledge about this stuff, if at all.
Comment by andersmurphy 2 days ago
I also expect a bunch of open source/open core projects to go closed source in the next few months.
Comment by leg100 2 days ago
Comment by armchairhacker 2 days ago
Comment by birdfood 3 days ago
Comment by ACS_Solver 3 days ago
Once upon a time we wrote code in assembly language. Then we moved to C or other compiled languages. Assembly programming remained a very useful but niche skill. You compile your code and trust the compiler. You can examine the compiler output and that is at times necessary, but that's not something most developers know how to do.
We may be looking at something similar. Most development work moving to the LLM abstraction level, with the skills being writing good prompts, managing the context window, agents, memories and so on. Some developers will be able to examine LLM generated code and spot problems there, but most will not have that skill.
I'm not sure how to feel about it. Since ChatGPT showed up and until a couple months ago, I was firmly skeptical of LLM programming. We had new models every few weeks and I felt like each new model is just a different twist on the same low quality slop output. But recently the models seem to have crossed some threshold where their capabilities really improved and I have now used Claude - still using it sparingly - to implement features in much less time than I'd need myself or to locate a bug based on just log output. I don't yet buy the "coding is solved" hype but we're at least looking at the biggest change to programming since the adoption of high-level programming languages.
Comment by octagons 2 days ago
I’ve spent a lifetime teaching myself programming, computers, and engineering. I have no formal education in these disciplines and find that I excel due to the self-taught nature of my background.
I take a very metered approach to AI and use it for autocomplete while still scrutinizing every token (not the AI kind) as well as an augment to my self-pedagogy. It’s great to be able to “query” or get a summary from a set of technical documents on demand.
However, I don’t understand the desire to remove oneself from the process with AI. I simply don’t do anything that won’t teach me something new or improve my existing skills.
There’s more to engineering than simply programming. Both the engineer and the intended user base must also understand the system. The value lost is greater than the sum of all the parts when an LLM produces most or all of the code.
Comment by pretzel5297 2 days ago
Not trying to be rude but you either must not be a professional software engineer or your skill level isn't that high yet. You simply cannot always do things that teach you new skills or improve existing ones. In any sufficiently complex project, even the most novel ones, you'll do things you've done many times before.
Comment by t43562 2 days ago
My last job started with "here's a book about Go programming." Two years later I was learning FastAPI. Now I'm programming in C again, but I have spent most of my time learning about GitHub Actions and writing SCCS->git conversion software. I've never used SCCS before.
Comment by skydhash 2 days ago
Comment by octagons 2 days ago
Comment by fouronnes3 3 days ago
Comment by culi 3 days ago
Comment by gregsadetsky 3 days ago
Comment by brianjlogan 3 days ago
I am seeing non technical people getting involved building apps with Claude. After the Openclaw and other Agentic obsession trends I just don't see it pragmatic to continue down the road of AI obsession.
In most other aspects of life my skills were valuated because of my ability to care about details under the hood and the ability to get my hands dirty on new problems.
Curious to see how the market adapts and how people find ways to communicate this ability for nuance.
Comment by derangedHorse 3 days ago
I saw this quote when looking at the Recurse Center website. How does one usually go about something like this if they work full time? Does this mainly target those who are just entering the industry or between jobs?
I know the article is mostly about what the author built at the coding retreat, but now he has me interested in trying to attend one!
Comment by nicholasjbs 3 days ago
Most folks do RC between jobs, either because they quit their job specifically to do RC or because they lost their job and then decide to apply. Other common ways are as part of a formal sabbatical (returning either to an industry job or to academia), as part of garden leave, or while on summer break (for college and grad students). We also get a fair number of freelancers/independent contractors (who stop doing their normal work during their batches), as well as some retirees.
Some folks use RC as a way to enter the industry (both new grads and folks switching careers), though the majority of people who attend have already worked professionally as programmers.
We've had people aged 12 to early 70s attend, though most Recursers are in their 20s, 30s, and 40s.
Comment by wonger_ 3 days ago
Unless you can swing a six week sabbatical and return to your current job
Comment by beej71 3 days ago
Comment by bendmorris 2 days ago
Comment by beej71 2 days ago
Comment by linzhangrun 3 days ago
Comment by andai 2 days ago
> One solution to this constant companion problem: Spend more time with your phone out of easy reach. If it’s not nearby, it won’t be as likely to trigger your motivational neurons, helping clear your brain to focus on other activities with less distraction.
Reminds me of this study: "The mere presence of a smartphone reduces basal attentional performance"
The effect persisted even when the phone was switched off. It only went away when the phone was moved to a different part of the building.
Comment by tossandthrow 3 days ago
Comment by kolleykibber 2 days ago
Comment by Unsponsoredio 1 day ago
fine, but the gym analogy breaks down somewhere. in a gym, the person who actually lifts heavier gets noticed. in software, the person with the right bio and the right network gets noticed, regardless of whether they've ever lifted anything real.
you can spend three years learning compilers properly and have a handful of readers. someone else ships a wrapper on a saturday and lands a pmarca quote tweet by monday.
coding the old way is good for you. i'm not convinced it's what gets you noticed. the strain was never really what got rewarded in the first place.
Comment by LittleBox 1 day ago
> coding the old way is good for you. i'm not convinced it's what gets you noticed.
You won’t go completely unnoticed if you’re good at your job but you can only be noticed from your deliverables, which is a slow process. You can buff up your presence by talking about it a lot, yes, but you won’t get 0 attention for hard work.
Comment by Unsponsoredio 1 day ago
Comment by fallingfrog 3 days ago
Comment by wulfstan 3 days ago
My ex business partner said “AI won’t take your job, but the person who uses it will”. I don’t agree. The person who isn’t reliant on AI is the one you should really be afraid of.
Comment by bigfishrunning 3 days ago
Comment by lrvick 3 days ago
What scares the shit out of me are all these new CS grads that admit they have never coded anything more complex than basic class assignments by hand, and just let LLMs push straight to main for everything and they get hired as senior engineers.
It is like hiring an army of accountants that have never done math on paper and exclusively let turbotax do all the work.
If you have never written and maintained a complex project by hand, you should not be allowed to be involved in the development of production bound code.
But also, I feel this way about the industry long before LLMs. If you are not confident enough to run Linux on the computer in front of you, no senior sysadmin will hire you to go near their production systems.
Job one of everyone I mentor is to build Linux from scratch, and if you want an LLM build all the tools to run one locally for yourself. You will be way more capable and employable if you do not skip straight to using magic you do not understand.
Comment by adamddev1 3 days ago
It's not though. It's fundamentally different because TurboTax will still work with clear deterministic algorithms. We need to see that the jump to AI is not a jump from hand written math to calculators. It's a jump from understanding how the math works to another world of depending on magic machines that spit out numbers that sort of work 90% of the time.
Comment by bluefirebrand 3 days ago
They probably wouldn't think that the calculator makes them faster either
Comment by layer8 3 days ago
Comment by thesz 3 days ago
If we assume 50 weeks per year, this gives about 400-500 lines of code per week. Even at a generous average of 65 characters per line, that comes to no more than 33K bytes per week. Your comment is about 1,250 bytes long; if you wrote four such comments per day, every day of the week, you would exceed that 33K limit.
I find this amusing.
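The arithmetic is easy to check; a quick sketch using the comment's own figures (estimates, not measurements):

```python
# Back-of-the-envelope check of the lines-of-code claim above.
lines_per_week = 500                  # upper end of the 400-500 estimate
chars_per_line = 65                   # the "long average" line length
weekly_code_bytes = lines_per_week * chars_per_line   # 32,500, i.e. ~33K

comment_bytes = 1250                  # approximate size of the comment
weekly_comment_bytes = comment_bytes * 4 * 7          # four per day, all week

print(weekly_code_bytes, weekly_comment_bytes)  # 32500 35000
```

So four HN comments a day does indeed edge past a week's worth of code output.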
Comment by octagons 2 days ago
Comment by thesz 12 hours ago
Comment by raincole 3 days ago
In what way? You're either very young or very old, right? Voice-to-text has been a common way to input text online since the iPhone. Someone commented on HN != they typed that many words with their fingers.
Comment by thesz 2 days ago
If the person I replied to does use voice-to-text, their mention of carpal tunnel syndrome is moot, and that is amusing. If they do not use voice-to-text, it is still amusing in the sense of my previous comment.
Comment by raincole 2 days ago
Nah, impossible. They must be making up their carpal syndrome because nothing is ever real.
Comment by slopinthebag 3 days ago
Comment by thesz 3 days ago
My software engineering experience now spans almost 37 years (December will be the anniversary), six to seven years more than the median age of Earth's human population. I have had two burnouts in that time, but no carpal tunnel syndrome symptoms at all. When I code, I prefer to factor subproblems out; it reduces typing and support costs.
Comment by lrvick 3 days ago
That said, I am also actively experimenting with VTT solutions which are getting quite good.
Comment by slopinthebag 3 days ago
Comment by sho_hn 3 days ago
So only the old hands allowed from now on, or how are we going to provide these learning opportunities at scale for new developers?
Serious question.
Comment by hallway_monitor 3 days ago
Comment by SlinkyOnStairs 3 days ago
Employers were already refusing to hire juniors, even when 0.5-1 years' salary for a junior would be cheaper than spending the same on hiring a senior.
They'll never accept intentionally "slower" development for the greater good.
Comment by jacobsenscott 3 days ago
That comes post Chernobyl.
Comment by 8note 3 days ago
my last summer intern did everything the manual way, except for a chunk where I wanted him to get something done fast without having to learn all the underlying chunks
Comment by lrvick 3 days ago
Always happy to mentor people at stagex and hashbang (orgs I founded).
Also being a maintainer of an influential open source project goes on a resume, and helps you get seen in a crowded market while boosting your skills and making the world better. Win/win all around.
Comment by sho_hn 3 days ago
Comment by rafaelmn 3 days ago
I don't think SWE is a promising career to get started in today.
Comment by mwwaters 3 days ago
But pro-AI posts never seem to pin themselves down on whether code checked in will be read and understood by a human. Perhaps a lot of engineers work in “vibe-codeable” domains, but a huge amount of domains deal with money, health, financial reporting, etc. Then there are domains those domains use as infrastructure (OS, cloud, databases, networking, etc.)
Even where it is non-critical, such as a social media site, whether that site runs and serves ads (and bills for them correctly) is critical for that company.
Comment by 8note 3 days ago
You don't notice it when you are only looking at your own harness results, but the LLM bakes so very much of your own skills and opinions into what it does.
LLMs still regurgitate a ton.
Comment by rafaelmn 2 days ago
And insufficient talent because of retirement becomes an issue in like 30 years even with current developer demand, and I expect that demand to go down significantly over time, even with current level of capabilities.
Comment by lrvick 3 days ago
We have a completely broken internet with almost nothing using memory encryption, deterministic builds, full source bootstrapping, secure enclaves, end to end encryption, remote attestation, hardware security auth, or proper code review.
Decades of human cognitive work to be done here even with LLM help because the LLMs were trained to keep doing things the old way unless we direct them to do otherwise from our own base of experience on cutting edge security research no models are trained on sufficiently.
Comment by weakfish 2 days ago
Comment by jazz9k 3 days ago
I suppose it's like bandwidth cost in the 90s. At some point, it becomes a commodity.
Comment by teruakohatu 3 days ago
That has been exactly the situation for years. Once graduated, accountants are not doing maths. They are using software (Excel, Xero, etc.). They do need to know some basic formulas, e.g. NPV.
What they need to know is the law, current business practices etc.
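NPV, mentioned above, is exactly that kind of formula: a few lines once you see it. A minimal sketch (the cash flows and discount rate are made up for illustration):

```python
def npv(rate, cashflows):
    """Net present value: discount each period's cash flow back to period 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Invest 1000 now, receive 500 at the end of each of three years,
# discounted at 10% per year.
flows = [-1000, 500, 500, 500]
print(round(npv(0.10, flows), 2))  # 243.43 -> positive, so the investment clears the hurdle rate
```

Which is the point: the formula is trivial to compute in software; the judgment is in choosing the rate and knowing when NPV is the right tool.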
Comment by einpoklum 3 days ago
If that's true, then you likely used to produce slop for code. :-(
> I did things the old way for 25 years and my carpal tunnels are wearing out.
You wrote so much code as to wear out your carpal tunnels? Are you sure it isn't the documentation and the online chatter with your peers? :-(
... anyway, I know it's corny to say, but you should have improved, and should now improve, the ergonomics of your setup. Play with things like the depth of the keyboard on your desk, the height of the chair and the desk, with/without chair armrests, keyboard angle, etc.
> Job one of everyone I mentor is to build Linux from scratch
"from scratch" can mean any number of things.
Comment by lrvick 3 days ago
Local models are quite good now, and can jump right in to projects I coded by hand, and add new features to them in my voice and style exactly the way I would have, and with more tests than I probably would have had time to write by hand.
Three months ago I thought this was not possible, but local models are getting shockingly good now. Even the best rust programmers I know look at output now and go "well, shit, that is how I would have written it too"
That is a hard thing to admit, but at some point one must accept reality.
> anyway, I know it's corny to say, but you should have improved, and should now improve, the ergonomics of your setup. Play with things like the depth of the keyboard on your desk, the height of the chair and the desk, with/without chair armrests, keyboard angle, etc.
I already type with colemak on a split keyboard with each half separated and tented 45 degrees on a saddle stool, with sit/stand desk I alternate. I have read all the research and applied all of it that I can. Without having done all that I probably would have had to change careers.
> "from scratch" can mean any number of things.
As far as I know I was the first person alive to deterministically build linux from 180 bytes of machine code, up to tinycc, to gcc, to a complete llvm native linux distribution.
When I say from scratch, I mean from scratch. Also, all of this before AI without any help from AI, but I sure do appreciate it to help with package maintenance and debugging while I am sleeping.
Comment by einpoklum 2 days ago
If you mean this kind of from scratch:
https://www.bootstrappable.org/
then - that's both impressive and important, and I salute you!
Comment by lrvick 2 days ago
I founded https://stagex.tools
Comment by mistrial9 2 days ago
Sadly the discussion diverges steeply, IMHO. Rote work to a defined spec in a highly defined API environment is absolutely a thing, while abstract and/or original ideas implemented as substantial new or custom work on a platform of your choice is something different.
Comment by wkjagt 2 days ago
Comment by assimpleaspossi 2 days ago
The old way?! So not using AI is already being called "the old way"?!!
That statement sets off alarm bells about writing on the internet and how much trust to put in it, as if I'm the first one to notice.
Comment by bertil 2 days ago
Comment by ofjcihen 2 days ago
Recently I’ve been trying to combat this by learning things “deeper”, i.e., yes, I can secure and respond to container-based threats, but how do containers actually work deep down?
So far I think it’s working well and as an odd plus it’s actually helping me use AI more efficiently when I need to.
Comment by fouronnes3 3 days ago
This would probably require cooperation during model training, but now that I think of it, is there adversarial research on LLM? Can you design text data specifically to mess with LLM training? Like what is the 1MB of text data that if I insert it into the training set harms LLM training performance the most?
Comment by dougiejones 3 days ago
Comment by andsoitis 3 days ago
Maybe there’s another way…
Comment by inerte 3 days ago
Maybe text that costs a LOT of tokens. Very, very verbose. I think if there are rules and they're on the internet, LLMs can eventually figure them out, so you have to make it expensive.
Another way would be to go offline. Never write it down, only talk about it at least 50 meters away from your phone. Transmitted through memory and whisper.
Comment by mswphd 3 days ago
Comment by imtringued 2 days ago
Comment by SoftTalker 2 days ago
Comment by ButlerianJihad 3 days ago
Comment by yodsanklai 2 days ago
the old way which is about one year ago?
Comment by derwiki 2 days ago
Comment by 6031769 2 days ago
Comment by teaearlgraycold 2 days ago
Comment by linkregister 3 days ago
I remember writing BASIC on the Apple II back when it wasn't retro to do so!
Comment by ludr 3 days ago
Comment by serbrech 3 days ago
Comment by whateveracct 2 days ago
Typing and thinking in English is demonstrably slower than in code/the abstract (Haskell for me.)
And no, I didn't write English plans before AI. Or have a stream of English thought in my head. Or even pronounce code as I read and wrote it. That's low-skill stuff.
Comment by booleandilemma 2 days ago
Comment by whateveracct 2 days ago
I tried LLMs because of AI fomo by my CEO. "Opus 4.whatever is a stepwise improvement - I am convinced all coders who don't go all-in on AI will be obsolete soon." Multiple times I tried. Every time Claude creates crap and I have to spend a bunch of time correcting it in a loop. Or basically scripting it. And it's like..I never spent this much time thinking or actively working before.
So I'm back to being natty and I am delivering more and have more time during the workday to spend with my wife and kid and video games etc.
Comment by myst 2 days ago
Comment by moomin 3 days ago
> 15 years of Clojure experience
My God I’m old.
Comment by mattdecker100 3 days ago
Comment by CivBase 1 day ago
1. It increases the chances of any bugs being found and resolved.
2. It encourages the author to be more careful with their code to avoid long reviews with a lot of findings.
3. It ensures at least two people - the author and approvers - have familiarity with the code.
4. It spreads responsibility for the code across at least two people - the author and approvers.
It's clear this article's author does not review their own code. I sure hope that code is not used for anything important.
Comment by epx 3 days ago
Comment by bschwindHN 3 days ago
Comment by fsckboy 3 days ago
Comment by LeCompteSftware 3 days ago
> There were 2 or 3 bugs that stumped me, and after 20 min or so of debugging I asked Claude for some advice. But most of the debugging was by hand!
Twenty whole minutes. Us old-timers (I am 39) are chortling.
I am not trying to knock the author specifically. But he was doing this for education, not for work. He should have spent more like 6 hours before desperately reaching for the LLM. I imagine after 1 hour he would have figured it out on his own.
Comment by Gigachad 3 days ago
Though a lot of the time this is more an inefficiency of the documentation and Google rather than something only LLMs could do.
Comment by skydhash 2 days ago
Comment by nyarlathotep_ 3 days ago
Comment by j1elo 3 days ago
This can be set as far as 1h of being stuck. Can also be 5 minutes. But by default it is 30 seconds.
My inner kid was screaming "that's cheating!" :-D but on second thought it is a very cool feature for us busy adults. However, it's sad the extremes that gamedevs have to go to in order to appease the short-term, mindless consumers of today's tik-toks.
But more seriously, where's the joy of generating long-standing memories of being stuck for a while on a puzzle that will make you remember that scene for 30 years? An iconic experience that separates this genre from just being an animated movie with more steps.
I couldn't imagine "Monkey Island II but every 30 seconds we push you forward". Gimme that monkey wrench.
TFA and this comment just made me have this thought about today's pace of consumption, work, and even gaming.
Comment by alemwjsl 3 days ago
* Ask someone to come over and look
* Come back the next day, work on something else
* Add comment # KNOWN-ISSUE: ...., and move on and forget about it.
But yeah, I've spent days on a bug at work before, ha ha!
Comment by moregrist 3 days ago
This is a tried and true way of working on puzzles and other hard problems.
I generally have 2-4 important things in flight, so I find myself doing this a lot when I get stuck.
Comment by ignoramous 3 days ago
Just a note that, for chronic procrastinators, having 2 to 4 important things going on is a trigger & they'd rather not complete anything.
I wonder, for such folks, if SoTA LLMs help with procrastination?
Comment by justonceokay 3 days ago
Comment by calvinmorrison 3 days ago
Comment by usernametaken29 3 days ago
Comment by BodyCulture 3 days ago
Comment by usernametaken29 2 days ago
Comment by Tanoc 3 days ago
If anyone remembers middle school mathematics, this is the coding equivalent of the teacher making you write out the equations in their longest form instead of shortcutting. It's done this way because it shows you your exact train of thought and where you went wrong. That sticks in your head. You understand the problem by understanding yourself. Giving up after twenty minutes, instead of stopping, clearing your active cognitive load, and then coming back, erases your ability to understand that train of thought.
For a comparison it's like being in first person view in a videogame, and the only thing you have is the ability to look behind you, versus being able to bring up a map that has an overhead view. In first person you're likely to lose where exactly you went to get where you are, while with the overhead view map you can orient your traveled route according to landmarks and distance.
Comment by sho_hn 3 days ago
Comment by Trasmatta 3 days ago
Comment by encrux 3 days ago
Having a tool that instantly searches through the first 50 pages of google and comes up with a reasonable solution is just speeding up what I would have done manually anyways.
Would I have learned more about (and around) the system I‘m building? Absolutely. I just prefer making my system work over anything else, so I don’t mind losing that.
Comment by Trasmatta 3 days ago
Comment by LeCompteSftware 3 days ago
Just so many confusing things go wrong in real-world software, and it is asinine to think that Mythos finding a ton of convoluted memory errors in legacy native code means we've solved debugging. People should pay more attention to the conclusion of "Claude builds a C compiler" - eventually it wasn't able to make further progress, the code was too convoluted and the AI wasn't smart enough. What if that happens at your company in 2027, and all the devs are too atrophied to solve the problem themselves?
I don't think we're "doomed" like some anti-AI folks. But I think a lot of companies - potentially even Anthropic! - are going to collapse very quickly under LLM-assisted technical debt.
Comment by glhaynes 3 days ago
Comment by chasd00 3 days ago
Comment by jjice 3 days ago
The euphoria I felt after fixing bugs that I stayed up late working on is like nothing else.
Comment by mapontosevenths 3 days ago
Comment by voidfunc 3 days ago
If you can't fix the bug, just slop some code over it so it's more hidden.
This is all gonna be fascinating in 5-10 years.
Comment by seanw444 3 days ago
Comment by lstodd 2 days ago
Many minor ones happened along the way, like crypto/NFT stuff or renaming master branches and adding codes of conduct. I think it's just human nature. Fascinating nevertheless.
Comment by SlinkyOnStairs 3 days ago
But for juniors, it's invaluable experience. And as a field we're already seeing problems resulting from the new generations of juniors being taught with modern web development, whose complexity is very obstructing of debugging.
Comment by badc0ffee 3 days ago
I worked on a project that depended on an open source but deprecated/unmaintained Linux kernel module that we used for customers running RHEL[1]. There were a number of serious bugs causing panics that we encountered, but only for certain customers with high VFS workloads. I spent days to a week+ on each one, reading kernel code, writing userland utilities to repro the problem, and finally committing fixes to the module. I was the only one on the team up to the task.
We couldn't tell the customers to upgrade, we couldn't write an alternative module in a reasonable timeframe, and they paid us a lot of money, so I did what I had to do.
I'm sure there are lots of other examples like this out there.
[1] Known for its use of ancient kernels with 10000 patches hand-picked by Red Hat. At least at the time (5-10 years ago).
Comment by t43562 2 days ago
Comment by badc0ffee 2 days ago
Comment by z500 3 days ago
Comment by t43562 2 days ago
On the other hand, while I notice people not being impressed, they are careful to shift difficult things off onto others if at all possible.
Comment by dinkumthinkum 3 days ago
Comment by raw_anon_1111 3 days ago
Comment by oasisaimlessly 2 days ago
Comment by raw_anon_1111 2 days ago
There is a direct, easy-to-measure line to the revenue that anyone below me makes the company. My revenue per hour isn't as exact since I support pre-sales and follow-on work.
Comment by echelon 3 days ago
The time wasted thinking our craft matters more than solving real world problems?
The amount of ceremony we're giving bugs here is insane.
Paraphrasing some of y'all,
> "I don't have to spend a day stepping through with a debugger hoping to repro"
THAT IS NOT A PROBLEM!
We're turning sand into magic, making the universe come alive. It's as if we just got electricity and the internet and some of us are still reminiscing about whale blubber smells and chemical extraction of kerosene.
The job is to deliver value. Not miss how hard it used to be and how much time we wasted finding obscure cache invalidation bugs.
Only algorithms and data structures are pure. Your business logic does not deserve the same reverence. It will not live forever - it's ephemeral, to solve a problem for now. In a hundred years, we'll have all new code. So stop worrying and embrace the tools and the speed up.
Comment by dinkumthinkum 3 days ago
Comment by Trasmatta 3 days ago
This is both a strawman and a false dichotomy.
Comment by echelon 3 days ago
Too many of our engineering conversations are dominated by veneration of the old. Let me be hyperbolic so that I can interrupt your train of thought and say this:
We're starting to live in the future.
Let go of your old assumptions. Maybe they still matter, but it's also likely some of them will change.
The old ways of doing things should be put under scrutiny.
In ten years we might be writing in new languages that are better suited for LLMs to manipulate. Frameworks and libraries and languages we use today might get tossed out the door.
All energy devoted to the old way of doing things is perhaps malinvested into a temporary state of affairs. Don't over-index on that.
Comment by JuniperMesos 3 days ago
Comment by demorro 3 days ago
Comment by YesBox 3 days ago
So, the short of it is that this is a great insightful comment that I can back up with my own experience in making a game from scratch over the last 4+ years.
Comment by thrance 3 days ago
Comment by SoftTalker 2 days ago
Comment by Jtarii 3 days ago
If you want to solve the problem quickly, then just use the resources you have; if you want to become someone who can solve problems quickly, then you need to spend hundreds of hours banging your head against a wall.
Comment by dinkumthinkum 3 days ago
Comment by bsder 3 days ago
2) There are different levels of debugging. Are your eyes going to glaze over searching volumes of logs for the needle in a haystack with awk/grep/find? Fire up the LLM immediately; don't wait at all. Do the fixes seem to just be bouncing the bugs around your codebase? There is probably a conceptual fault and you should be thinking and talking to other people rather than an AI.
3) Debugging requires you to do a brain inload of a model of what you are trying to fix and then correct that model gradually with experiments until you isolate the bug. That takes time, discipline and practice. If you never practice, you won't be able to fix the problem when the LLM can't.
4) The LLM will often give you a very, very suboptimal solution when a really good one is right around the corner. However, you have to have the technical knowledge to identify that what the LLM handed you was suboptimal AND know the right magic technical words to push it down the right path. "Bad AI. No biscuit." on every response is NOT enough to make an LLM correct itself properly; it will always try to "correct" itself even if it makes things worse.
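The awk/grep/find triage in point 2 is still worth knowing by hand, even if you hand the haystack to an LLM first. A typical pass (the paths, log format, and messages here are made up for the demo):

```shell
# Classic needle-in-a-haystack triage: find recent logs, pull the
# error lines, and count them by message so the most frequent one
# rises to the top.
mkdir -p /tmp/demo-logs
printf 'INFO started\nERROR disk full\nERROR disk full\nERROR timeout\n' \
  > /tmp/demo-logs/app.log

find /tmp/demo-logs -name '*.log' -mtime -1 -print0 \
  | xargs -0 grep -h 'ERROR' \
  | sort | uniq -c | sort -rn | head
# "2 ERROR disk full" appears first, "1 ERROR timeout" second.
```

Knowing this pipeline is also what lets you sanity-check whatever needle the LLM claims to have found.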
Comment by agdexai 2 days ago
For haystack-style debugging (searching logs, grepping stack traces), a fast cheap model with large context (Gemini Flash, Claude Haiku) is more cost-effective than a frontier model. For the conceptual fault category you mention — where you actually need to reason about system design — that's when it might be worth paying for o3/Claude Opus class models.
The friction is that most people default to whatever chatbot they have open, rather than routing to the right tool. The agent/LLM tooling space has gotten good enough that this routing is automatable, but most devs haven't set it up yet.
Comment by bhelkey 3 days ago
Comment by noosphr 3 days ago
Comment by genewitch 1 day ago
Comment by bigfishrunning 3 days ago
Comment by th0ma5 3 days ago
Comment by raw_anon_1111 3 days ago
But just today a bug was reported by a customer (we are still in testing, not a production bug). I implemented this project myself from an empty git repo and an empty AWS account, including 3 weeks of pre-implementation discovery.
I reproduced the issue and threw the problem at Claude with nothing but two pieces of information: the ID of the event showing the bug and the description.
It worked backwards looking at the event stream in the database, looking at the code that stored the event stream, looking at the code that generated the event stream (separate Lambda), looking at the actual config table and found the root cause in 3 minutes.
After looking at the code locally, it even looked at the cached artifacts of my build and verified that what was deployed was the same thing that I had locally (same lambda deployment version in AWS as my artifacts). I had it document the debug steps it took in an md file.
Why make life harder on myself? Even if it were something I was doing as a hobby, I have a wife who I want to spend time with, I’m a gym rat and I’m learning Spanish. Why would I waste 6 hours doing something that a computer could do for me in 5 minutes?
Assuming he has a day job and gets off at 6, he would be spending all of his off time chasing down a bug that he could be using doing something else.
Comment by grebc 3 days ago
If you’re experienced as you are, you’re not learning the same way a junior assigned this might learn from it.
Comment by raw_anon_1111 3 days ago
I also used Codex and asked questions about how the codebase worked to refresh my own memory. Why wouldn’t a junior developer do the same?
I mentioned that I had Codex describe in detail how it debugged it. It walked through each query it ran, the lines of code it looked at, and the IaC. It jogged my memory about code I wrote a year ago, after I'd been on other projects since.
Comment by t43562 2 days ago
They start to make questionable decisions based on how they think things are. I have done this. Getting back into development let me see what was going wrong, why changes were difficult, and what we needed to do to test properly.
Hurray, you're an AI manager now. But be careful how much you decide not to look "in the box", especially if you're trying to come up with release dates and so on.
Comment by raw_anon_1111 2 days ago
I treat AI just like a mid-level, ticket-taking developer.
To a first approximation, no one gets ahead in corporate America or BigTech (been there done that) because they “codez real gud” and pull tickets off of a Jira board.
In the last decade-plus, I've been an early technical hire brought in to lead a major initiative by a new manager, director, and CTO respectively, and none of them were interested in asking me questions about my coding ability. We spoke like seasoned professionals.
Even at my job in BigTech in the cloud consulting department (full-time, RSU-earning blue-badge employee specializing in cloud + app dev), the interview was behavioral: they wanted to determine whether I was "smart and got things done" [1]. At my job after that, where I am now as a staff consultant (full-time employee) leading projects, the interview was concerned with getting work done on time, on budget, meeting requirements, and whether the customer was happy with my results. Absolutely no one in the value chain cares about hand-crafted bespoke code as long as it meets functional and non-functional (security, scalability, usability, etc.) requirements.
Comment by t43562 2 days ago
...and you want to get ahead.
But you've made a trade off and to think otherwise would be a mistake. Someone else has to straddle the line that you're floating above and obviously part of your job is to get hold of such people.
Comment by grebc 3 days ago
Just because it worked this time doesn’t mean it always will.
If you need further explanation of why you might want to spend more time resolving a bug to learn about the systems you’re tasked with maintaining then I’m at a loss sorry.
Comment by scarface_74 3 days ago
Comment by grebc 2 days ago
Comment by icedchai 2 days ago
I work with some junior-level, outsourced developers who write prompts like "fix the tests." The result is, of course, bad. The consulting company charges $200+/hour for them. Garbage in, garbage out. Good thing I hit my retirement number. I can bail out anytime.
Comment by raw_anon_1111 2 days ago
From the contracting side: I took a six-week staff-aug contract when I was between jobs in 2012, and it was so bad I walked off after four weeks with no job lined up.
For context, staff aug vs. consulting is about who owns the project:
Consulting = the customer gives you high-level requirements and a statement of work, and you (or your company) control the project.
Staff aug = the client controls the project and you are a warm, disposable body.
Comment by bdangubic 2 days ago
Comment by icedchai 1 day ago
Comment by LeCompteSftware 3 days ago
But he was doing this for education, not for work.
That's why he should spend 6 hours on it, and not give up and run to the gym. That's like saying "I shouldn't spend an hour at the gym this week, lifting weights is hard and I want to watch TV. I'll just get my forklift to lift the weights for me!"
Comment by raw_anon_1111 3 days ago
Comment by derangedHorse 3 days ago
Comment by delbronski 3 days ago
Comment by AstroBen 3 days ago
This is exactly how you learn to create better abstractions and write clear code that future you will understand.
Comment by delbronski 2 days ago
Comment by bluefirebrand 3 days ago
Comment by civvv 2 days ago
Comment by delbronski 2 days ago
But in my long career, even the smartest, most experienced software engineers I've met write their share of crazy abstractions from hell.
Comment by pizzafeelsright 3 days ago
I do the former for fun. The latter to provide for my family.
There is a reason old men take on hobbies like woodworking and fixing old cars and other stuff that has been replaced by technology.
Comment by KaiShips 2 days ago
Comment by bustah 2 days ago
Comment by JAG_Ecalona 2 days ago
Comment by joewongg 3 days ago
Comment by dang 3 days ago
(I swapped the title for the subtitle earlier because I thought it was more informative. What I missed was the flamebaity effect that "the old way" would have. Obvious in hindsight!)
Comment by sho_hn 3 days ago
Comment by justonceokay 3 days ago
Comment by phoronixrly 3 days ago
Comment by tayo42 3 days ago
Comment by SrslyJosh 3 days ago
You mean the way that the majority of code is still written by professionals?
Comment by sayYayToLife 3 days ago
Comment by biglio23 3 days ago
Comment by edjgeek 3 days ago
Comment by daneel_w 3 days ago
Comment by Izkata 2 days ago
Comment by sergiopreira 2 days ago
Comment by animanoir 2 days ago
Comment by huflungdung 2 days ago
Comment by mchusma 3 days ago
Comment by idle_zealot 3 days ago
Why would you think that? The landscape is fast-moving. Prompting tricks and "AI skills" of yesterday are already dated and sometimes actively counterproductive. The explicit goal of the companies working on the tech is to lower the barriers to entry and make it easier to use, building harnesses and doing refinement that align LLMs to an intuitive mode of interaction.
Do you think they'll fail? Do you think we've plateaued in terms of what using a computer looks like and your learnings for wrangling the agents of this year will be relevant for whatever the new hotness is next year? It's a strong claim that demands similarly strong argument to support.
Comment by aerhardt 3 days ago
How? I just open multiple terminal panes, use git worktrees, and then basically it's good old software dev practices. What am I missing?
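For anyone unfamiliar with the worktree part, a minimal sketch (done in a throwaway repo here; paths and branch names are illustrative): each pane or agent gets its own checkout of the same repository on its own branch, so parallel work never stomps on a shared working directory.

```shell
# Demo in a throwaway repo: one worktree per branch, so each terminal
# pane (or agent) gets its own working directory on the same repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Second checkout of the same repo, on its own new branch:
git worktree add -q "$repo-feature-x" -b feature-x

git worktree list    # lists the main checkout plus the feature-x worktree
```

`git worktree remove <path>` cleans a checkout up when that line of work is done.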
Comment by bensyverson 3 days ago
Comment by LeCompteSftware 3 days ago
Comment by aerhardt 2 days ago
I was asking if there was something about the “agentic” part in particular that was difficult.
Comment by baq 3 days ago
Comment by onair4you 3 days ago
Claude Opus is going to give zero fucks about your attempts to manage it.
Comment by bdangubic 3 days ago
Comment by sd9 3 days ago
It is hard indeed. I find it really quite exhausting.
Personally, I feel like I have always been a very competent programmer. I'm embracing the new way of working, but it seems like quite a different skillset. I somewhat believe that it will be relevant for a long time, because there is an incredibly large gap in outcomes between members of my team using AI. I've had good results so far, but I'm keen to improve.
Comment by sdevonoes 3 days ago
For the good stuff, there's no alternative but to know and to have taste. LLMs change nothing.
Comment by the_gipsy 3 days ago
Comment by dyauspitr 3 days ago
Comment by zingababba 3 days ago
Comment by slopinthebag 3 days ago
Comment by jodrellblank 2 days ago
Shouldn't they be the people making the most substantial, artisanal, sweat-of-the-brow deep-thought comments?
Comment by dinkumthinkum 3 days ago
Comment by Marazan 3 days ago
Citation needed.