There's yet another study about how bad AI is for our brains
Posted by speckx 1 day ago
Comments
Comment by fumar 1 day ago
Comment by palmotea 1 day ago
And arguably, our society has made a lot of bad choices about many "convenience[s] in modern life." For instance, cities should probably be designed to make you walk more by default, so healthy physical activity isn't turned into a chore you then have to have the discipline to do consistently.
Basically, collectively, we're stupid and unwise, picking short term convenience and neglecting the medium and long term, and we need to get better at that.
Comment by raxxorraxor 17 hours ago
Funny you should mention that. There was an HN post about a prompt similar to this: "I want to wash my car. The car wash is 100m away. Should I drive or walk?" - It was quite difficult for even frontier models. Surely they do better now, but it was quite entertaining reading the answers.
Comment by brazukadev 11 hours ago
Surely not https://claude.ai/share/b657c13d-0aed-4bb3-8250-c5ca4853dc42
Comment by pizza234 1 day ago
Comment by bayarearefugee 1 day ago
Comment by loremium 1 day ago
Not only do students now graduate only because of ChatGPT, but 10-year-old kids also never build up an education while using AI to do their homework.
Comment by rbtms 1 day ago
Comment by red-iron-pine 1 day ago
the AI at home:
Comment by voidUpdate 1 day ago
Comment by econ 1 day ago
Or like the devs still on IRC.
Of course they will take the www and Google away from us by replacing everything with AI slop.
SO answers will be like, did you ask Macro Banana 42?
Comment by m_w_ 1 day ago
During university, for an exam in a graduate databases course, I had to manually calculate the number of operations for a query, down to the ones place. We were given an E-R diagram, the schema, and the query. So we had to act as the query planner - build out the B+ tree, check what was most efficient, and do it.
This is, by any measure, a pointless endeavor - no one has had to do this by hand in literally decades. It was also among the hardest cognitive tasks I've ever done. After being one of two people to complete the exam in the three allotted hours, I sat outside the lecture hall on a bench for a while because I thought I might faint if I went any further.
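For context, the flavor of cost arithmetic such an exam asks for can be roughly sketched in a few lines (a deliberate simplification that ignores buffering and leaf-chain scans; the function names here are illustrative, not from any textbook):

```python
def bptree_height(n_keys: int, fanout: int) -> int:
    """Smallest number of levels such that fanout**height >= n_keys."""
    height = 1
    while fanout ** height < n_keys:
        height += 1
    return height

def point_lookup_cost(n_keys: int, fanout: int) -> int:
    """Estimated page reads for one point lookup: one per level, root to leaf."""
    return bptree_height(n_keys, fanout)

# e.g. a million keys at fanout 100 costs only ~3 page reads per lookup
```

The interesting part of the exam, of course, was doing this kind of arithmetic across alternative plans and picking the cheapest, exactly what a query planner automates.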
I’m beginning to feel the same about writing code by hand. If I can design systems that are useful, performant, and largely maintainable, but the code is written by an LLM, is this harmful? It feels like I spend more time thinking about what problems need to be solved and how best to solve them, instead of writing idiomatic TypeScript. It’d be hard to convince me that’s a bad thing.
Comment by keysersoze33 1 day ago
Worth reading the conclusion - it makes a good point or two about the cumulative effect of using AI: not only the loss of learning through struggle/time, but also the loss of a reference point for how long tasks should take without AI (e.g. we are no longer willing to afford the time to learn the hard way, which will notably impact the younger generation).
Comment by abnercoimbre 1 day ago
Do we have well-informed suggestions as to why?
Comment by karmakurtisaani 1 day ago
Comment by abnercoimbre 1 day ago
"Everyone is moving fast, and here I am slow as a turtle."
Comment by aqme28 1 day ago
Is this the same effect that causes managers and people in power to sometimes become... (for lack of a better phrase) stupid and crazy?
edit: Everyone is responding to the "junior" part of my comment without addressing the actual question I'm asking. I should have just said "employees" -- Sorry.
Comment by avgDev 1 day ago
I think LLMs are a big problem for the development of junior devs. (Pun intended.)
Comment by coffeefirst 1 day ago
Comment by rcore 1 day ago
Comment by floren 1 day ago
If anything, it's more like an overenthusiastic intern who'll go way down a rabbit hole of self-doubt and overengineering while you're away at a conference for 3 days.
Comment by aqme28 1 day ago
Comment by boogieknite 1 day ago
Comment by kerblang 1 day ago
Comment by econ 1 day ago
Comment by fragmede 1 day ago
So what if they lie every once in a while?
Comment by boplicity 1 day ago
Has anyone else noticed this, as they've scaled up their AI coding use? I've found it harder to stay on task, and it's affected a broad range of my personal activities. I'm able to make incredible things happen with AI tools, but do worry about the personal costs.
Comment by bluGill 1 day ago
Comment by fragmede 1 day ago
Comment by free-nachos 1 day ago
In that style of working, spinning up multiple parallel workstreams appears to be the highest-output strategy. So now I'm practicing rapid context switching, jumping from virtual desktop to virtual desktop, and even adding monitors to my desk to keep tabs on more workstreams.
In my home life, I've observed myself wandering off mid-task (reminder to self: the eggs on the stove DO NOT have the ability to wait idly for your next input), or pausing to make an unrelated voice note mid-conversation with a loved one (which does NOT feel good to anyone involved...)
I suspect I can get better as I learn more skills and practice. For example, there are people great at both the hours long tournament chess format, and the 2 minute bullet chess format.
But the fact that I went so quickly from being top tier at long-term focus to not very good at focusing on anything gives me real pause...
Comment by boplicity 1 day ago
Comment by desecratedbody 1 day ago
Comment by voidUpdate 1 day ago
Comment by avgDev 1 day ago
I love coding for problem solving, and can do problems in my spare time. However, lately code is just work for me. It pays the bills.
Comment by 440bx 1 day ago
LLMs have yet to feature.
Comment by goalieca 1 day ago
Comment by MrDrMcCoy 1 day ago
Comment by zeroonetwothree 1 day ago
Comment by 440bx 1 day ago
Comment by zeroonetwothree 1 day ago
Comment by OnionBlender 1 day ago
Comment by forinti 1 day ago
With AI I've also witnessed people go crazy going back and forth without even looking carefully at the code (or the compile messages) to figure out what was missing.
I'm pretty sure nobody will read the docs now.
Comment by austin-cheney 1 day ago
This sounds like something I have seen before: jQuery, Angular, React.
What the article misses is the consequence of destroyed persistence. Once persistence is destroyed people tend to become paranoid, hostile, and irrationally defensive to maintain access to the tool, as in addiction withdrawal.
Comment by SamHenryCliff 1 day ago
HN Discussion Here:
Comment by Bridged7756 1 day ago
As with anything, I believe the dose makes the poison. I still find myself thinking about the high-level decisions, but I spend less cognitive load on library and implementation specifics I can offload somewhere else.
Comment by KevinMS 1 day ago
Comment by red-iron-pine 1 day ago
now imagine if most of them are using AI
Comment by SubiculumCode 1 day ago
Point for discussion: We know that task and context switching imposes substantial cognitive costs, leading to lower and slower performance for a time. I think it may be reasonable to hypothesize that interacting with an LLM to solve tasks tends to focus the brain on a more strategic level: What do I want to solve? What is my goal? Actually solving individual problems is very different - it is more concrete and mechanistic, requiring a different mode of thought. Switching from the former to the latter is a cognitive task switch, where the context changes; resetting into the new context takes time, and that imposes costs. Unless they had a control arm that imposed a task-switching cost...
Comment by _moof 1 day ago
Comment by ChrisArchitect 1 day ago
Comment by fragmede 1 day ago
Comment by etiam 1 day ago
Comment by tokai 1 day ago
Comment by SamHenryCliff 1 day ago
Comment by goalieca 1 day ago
Comment by red-iron-pine 1 day ago
I never will. I'll take the summary notes.
Comment by ethanrutherford 1 day ago
Comment by LLMCodeAuditor 1 day ago
Unfortunately, given participant feedback and surveys, we believe that the data from our new experiment gives us an unreliable signal of the current productivity effect of AI tools. The primary reason is that we have observed a significant increase in developers choosing not to participate in the study because they do not wish to work without AI, which likely biases downwards our estimate of AI-assisted speedup.
(https://metr.org/blog/2026-02-24-uplift-update/)

This was a huge red flag! Within a year, a large majority of devs became so whiny and lazy that METR couldn't fill the "no AI" bucket for their study - it's not like this was a full-time job, just a quick gig, and it was still too much effort for their poor LLM-addled brains. At the time I thought it was a terrible psychological omen.
I am so glad I don't use this stuff.
Comment by econ 1 day ago
Comment by gjsman-1000 1 day ago
The moment that happens, insurance flips tables, OSHA starts asking if they need exposure controls, and employers back down.
And that’s the good scenario! The bad scenario is an employer mandated it, and someone mentally declined to the point they committed a public act of violence.
Comment by BoneShard 1 day ago