There's yet another study about how bad AI is for our brains

Posted by speckx 1 day ago


Comments

Comment by fumar 1 day ago

It feels like every other convenience in modern life. We trade off some value for lack of human ability. Should you drive or walk or bike? In the US, most people drive and sit all day. Now we have fenced off part of our week for dedicated physical exercise to counteract physical atrophy.

Comment by palmotea 1 day ago

> It feels like every other convenience in modern life. We trade off some value for lack of human ability. Should you drive or walk or bike? In the US, most people drive and sit all day. Now we have fenced off part of our week for dedicated physical exercise to counteract physical atrophy.

And arguably, our society has made a lot of bad choices about many "convenience[s] in modern life." For instance, cities should probably be designed to make you walk more by default, so healthy physical activity isn't turned into a chore you then have to have the discipline to do consistently.

Basically, collectively, we're stupid and unwise, picking short term convenience and neglecting the medium and long term, and we need to get better at that.

Comment by raxxorraxor 17 hours ago

> Should you drive or walk or bike?

Funny you should mention that. There was an HN post about a prompt similar to this: "I want to wash my car. The car wash is 100m away. Should I drive or walk?" It was quite difficult for even frontier models. Surely they do better now, but reading the answers was quite entertaining.

Comment by pizza234 1 day ago

I agree in principle, although I personally consider mental atrophy to be far more serious than physical atrophy (and I already value physical fitness very highly!).

Comment by bayarearefugee 1 day ago

Great news for the AI providers, turns out they are automatically turning their audience into captives who end up increasingly dependent on their product to get anything done.

Comment by loremium 1 day ago

It's probably even worse, because knowledge is basically extracted out of all communities, and eventually the rug pull comes where you're denied access one way or another.

Not only do students now graduate only thanks to ChatGPT, but 10-year-old kids never build up an education while using AI to do their homework.

Comment by rbtms 1 day ago

This might actually be a point in favor of sophisticated local AI.

Comment by red-iron-pine 1 day ago

mom: we have AI at home

the AI at home:

Comment by voidUpdate 1 day ago

"Hey kid, wanna try an LLM? First session is free"

Comment by econ 1 day ago

Right, soon we will be like those weirdos still running their own website while everyone else is on Facebook, reddit and X (making low effort comments complaining about a standardized set of topics like who got banned and why)

Or like the devs still on IRC.

Of course they will take the www and Google away from us by replacing everything with AI slop.

SO answers will be like, did you ask Macro Banana 42?

Comment by m_w_ 1 day ago

Obviously the discussion here is mostly about writing code. In that domain, I’m always of two minds on this sort of thing. Although I think everyone would agree that material cognitive decline is bad, I also think we have to be precise with what that means.

During university, for an exam in a graduate databases course, I had to manually calculate the number of operations for a query, down to the ones place. We were given an E-R diagram, the schema, and the query. So we had to act as the query planner - build out the B+ tree, check what was most efficient, and do it.

This is by all means a pointless endeavor - no one has had to do this by hand in literally decades. It was also among the hardest cognitive tasks I've ever had to do. After being one of two people to complete the exam in the three allotted hours, I sat outside the lecture hall on a bench for a little while because I thought I might faint if I went any further.
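For readers curious what that kind of exam arithmetic looks like, here is a rough sketch of one small piece of it: estimating the page I/Os for an index lookup from the B+ tree's height. The numbers, fanout, and function names are hypothetical illustrations, not anything from the comment or the course:

```python
def btree_height(n_keys: int, fanout: int) -> int:
    """Number of node visits root-to-leaf for a B+ tree holding
    n_keys entries, assuming every node has `fanout` children."""
    height, capacity = 1, fanout
    while capacity < n_keys:
        capacity *= fanout
        height += 1
    return height

def index_lookup_ios(n_keys: int, fanout: int, matches: int) -> int:
    """Estimated page I/Os for an equality lookup through the index:
    one page read per tree level, plus one fetch per matching record."""
    return btree_height(n_keys, fanout) + matches

# Hypothetical sizes: a million keys, fanout 100, 5 matching rows.
print(index_lookup_ios(1_000_000, 100, 5))  # 3 index pages + 5 record fetches = 8
```

A real query planner layers estimates like this for every access path (sequential scan, index scan, joins) and picks the cheapest - which is the comparison the exam apparently demanded by hand, down to the ones place.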

I’m beginning to feel the same about writing code by hand. If I can design systems that are useful, performant, and largely maintainable, but the code is written by an LLM, is this harmful? It feels that I spend more time thinking about what problems need to be solved and how best to solve them, instead of writing idiomatic typescript. It’d be hard to convince me that’s a bad thing.

Comment by keysersoze33 1 day ago

Link to the preprint paper: https://arxiv.org/pdf/2604.04721

Worth reading the conclusion - it makes a good point or two about the cumulative effect of using AI: not only the loss of learning through struggle and time, but also the loss of a reference point for how long tasks should take without AI (e.g. we are no longer willing to afford the time to learn the hard way, which will notably impact the younger generation).

Comment by abnercoimbre 1 day ago

> we are no longer willing to afford the time to learn the hard way

Do we have well-informed suggestions as to why?

Comment by karmakurtisaani 1 day ago

I'm guessing because the hard way is the hard way?

Comment by abnercoimbre 1 day ago

Well, I'm sure we can peel at an onion here. That might be an obvious reason, but what about classic FOMO?

"Everyone is moving fast, and here I am slow as a turtle."

Comment by aqme28 1 day ago

Working with AI just feels like having a team of junior employees.

Is this the same effect that causes managers and people in power to sometimes become... (for lack of a better phrase) stupid and crazy?

edit: Everyone is responding to the "junior" part of my comment without addressing the actual question I'm asking. I should have just said "employees" -- Sorry.

Comment by avgDev 1 day ago

It doesn't. Juniors are generally SLOW because they are soaking up information and constantly learning. However, this allowed them to learn how to work through difficult problems, and how to communicate if they can't achieve their goals.

I think LLMs are a big problem for the development of junior devs (pun intended).

Comment by coffeefirst 1 day ago

I train up beginners pretty regularly and this is not a good analogy.

Comment by rcore 1 day ago

I honestly detest the junior employee analogy, AI is not and will never be like working with actual humans.

Comment by floren 1 day ago

Agreed, and I feel like it was pretty rare to distinguish junior devs before LLMs; we just used to talk about devs and senior devs. Then we needed a way to make sure it's understood that WE understand how dumb an LLM can be, so "junior" smashed its way into the discourse.

If anything, it's more like an over-enthusiastic intern who'll go way down a rabbit hole of self-doubt and overengineering when you're away at a conference for 3 days.

Comment by aqme28 1 day ago

I guess-- it feels like a junior dev in the sense that it has terrible self-direction, but is fully capable at the actual act of coding.

Comment by boogieknite 1 day ago

right. working with junior devs should include teaching which reinforces thinking and problem solving fundamentals

Comment by kerblang 1 day ago

How about "Working with AI just feels like having a team of junior employees who are completely unscrupulous, sychophantic and sometimes profoundly stupid psychopathic liars"?

Comment by econ 1 day ago

A team of fresh slaves.

Comment by fragmede 1 day ago

That you can be utterly awful to and they won't quit or feel sick. They'll never show up to work hung over or have a relative that needs surgery so they need an advance in pay and also they're never emotional because their partner of seven years broke up with them and their dog and cat and pet rabbit died. They'll never go to HR because you sexually harassed them, they'll work on your schedule and are available, in your house in your bed, at 4 am when inspiration hits so you pull out your laptop.

So what if they lie every once in a while?

Comment by boplicity 1 day ago

> "People’s persistence drops."

Has anyone else noticed this, as they've scaled up their AI coding use? I've found it harder to stay on task, and it's affected a broad range of my personal activities. I'm able to make incredible things happen with AI tools, but do worry about the personal costs.

Comment by bluGill 1 day ago

I think I'm more able to stay on task - when there is something hard I don't want to do I just tell the AI to figure it out. Previously I would find any excuse to procrastinate. For that matter while the AI is "thinking" I can read a book (unrelated fiction), but I'm still on task because the work is getting done.

Comment by fragmede 1 day ago

It predates LLMs though. It's after work and you're hanging out with friends, and someone asked about that one actress from that one thing. Do you struggle and think real hard and pull a name out of your brain with a bunch of effort, or do you just look it up in IMDB?

Comment by free-nachos 1 day ago

I have, absolutely, as I'm trying to learn the fully agentic style of development to keep up with the pace that a couple colleagues are setting.

In that style of working, spinning up multiple parallel workstreams appears to be the highest-output strategy. So now I'm practicing rapid context switching, jumping from virtual desktop to virtual desktop, and even adding monitors to my desk to keep tabs on more workstreams.

In my home life, I've observed myself wandering off mid-task (reminder to self: the eggs on the stove DO NOT have the ability to wait idly for your next input), or pausing to make an unrelated voice note mid-conversation with a loved one (which does NOT feel good to anyone involved...)

I suspect I can get better as I learn more skills and practice. For example, there are people who are great at both the hours-long tournament chess format and the 2-minute bullet chess format.

But the fact that I so quickly went from being top-tier at long-term focus to not very good at focusing on anything gives me real pause...

Comment by boplicity 1 day ago

I too have noticed the shift in completely different contexts. Definitely gives me real pause. Mental acuity and sharpness are so important; they're the foundation of who we are as people...

Comment by desecratedbody 1 day ago

This is why everyone needs to implement "Rawdog Thursdays" as I call it, in which you write code without the assistance of AI (i.e., you are "rawdogging" your professional output).

Comment by voidUpdate 1 day ago

How about you take it even further and implement "Rawdog Weeks", where every day for the week you write code without the assistance of LLMs, and you repeat that every week. That way you won't be able to develop any kind of dependence.

Comment by avgDev 1 day ago

I use ChatGPT instead of googling. I honestly don't think this is necessary at all. The job has changed; we have a new tool in our arsenal.

I love coding for problem solving, and can do problems in my spare time. However, lately code is just work for me. It pays the bills.

Comment by 440bx 1 day ago

I'm taking the radical approach of starting with the problem and finding a solution, rather than starting with a solution and hitting all your problems with it.

LLMs have yet to feature.

Comment by goalieca 1 day ago

Did people forget that practice makes perfect? The best way for someone to level up is to go get their hands dirty and dive into everything themselves.

Comment by MrDrMcCoy 1 day ago

One of my math teachers said that practice doesn't make perfect. Practice makes permanent. You can practice and reinforce the wrong thing until that's all you know.

Comment by zeroonetwothree 1 day ago

I do it for around 20% of my PRs. However my employer is complaining that my numbers are below their 100% target. So I am being penalised for trying to keep my skills up.

Comment by 440bx 1 day ago

Your employer is a fucking moron.

Comment by zeroonetwothree 1 day ago

Tell me something I don’t know

Comment by esafak 1 day ago

Businesses can remain irrational for longer than you can stay solvent.

Comment by 440bx 1 day ago

My life has mostly been making that as not true as possible :)

Comment by OnionBlender 1 day ago

How are they measuring it? My boss gave me a hard time because I wasn't using enough of my token budget. How do they know what percentage of your pull request was AI written?

Comment by forinti 1 day ago

I already had the impression that auto-complete was bad for programmers, since I've many times seen coders brute-force it until they found something that looked like it would do.

With AI I've also witnessed people go crazy going back and forth without even looking carefully at the code (or the compile messages) to figure out what was missing.

I'm pretty sure nobody will read the docs now.

Comment by austin-cheney 1 day ago

So I guess when employers force AI use on their developers, those developers progress toward worthlessness: they will produce wrong code, not know the difference, not care about the resulting harm, and finally not even try to course-correct if AI is removed.

This sounds like something I have seen before: jQuery, Angular, React.

What the article misses is the consequence of destroyed persistence. Once persistence is destroyed people tend to become paranoid, hostile, and irrationally defensive to maintain access to the tool, as in addiction withdrawal.

Comment by SamHenryCliff 1 day ago

This directly contradicts the statements made by Sal Khan. Children are being harmed by his push. This is very troubling.

HN Discussion Here:

https://news.ycombinator.com/item?id=47788845

Comment by Bridged7756 1 day ago

I personally find that LLMs help me conserve my mental energy to later put into more (personally) fruitful endeavors. Instead of being too tired to contribute to OSS, write, or do other things at the end of the day, I find I can leave more juice for after work hours. Or, just at work, I can move faster, and thus put that extra time and energy into stuff like Anki, upskilling, etc.

As with anything, I believe the dose makes the poison. I still find myself thinking about the high-level picture and the decisions, but I spend less cognitive load on library and implementation specifics, which I can put somewhere else.

Comment by KevinMS 1 day ago

I can't wait to be one of the last thinking humans.

Comment by red-iron-pine 1 day ago

"you are the 10 people you talk to the most" -- not always true, but broadly

now imagine if most of them are using AI

Comment by SubiculumCode 1 day ago

Reminder: Human cognition is complex and determining whether something is "good" or "bad" won't come from 1 or 2 studies.

Point for discussion: We know that task and context switching imposes substantial cognitive costs, leading to lower and slower performance for a time. I think it may be reasonable to hypothesize that interacting with an LLM to solve tasks tends to focus the brain on a more strategic level. What do I want to solve? What is my goal? Actually solving individual problems is very different: it is more concrete and mechanistic, requiring a different mode of thought. Switching from the former to the latter is a cognitive task switch, where the context changes, and resetting into the new context takes time, which imposes costs. Unless they had a control arm that imposed a task-switching cost...

Comment by _moof 1 day ago

Interesting. Seems analogous to the atrophy of navigation abilities caused by over-reliance on GPS. I wonder if there's a common underlying mechanism.

Comment by fragmede 1 day ago

I sent the study to ChatGPT for analysis and it told me not to worry about it so I'm not gonna.

Comment by etiam 1 day ago

Thank you for your corroboration.

Comment by tokai 1 day ago

And the amount of people that can recite Homer by heart has collapsed since writing came along.

Comment by SamHenryCliff 1 day ago

…and we are worse off for celebrating that instead of considering the utility of reaching our intellectual potential as individuals and society.

Comment by goalieca 1 day ago

And now the number of people who can read Homer and study him is dropping to zero. They just want the summary notes, without any deep thought and the reward that comes with it.

Comment by red-iron-pine 1 day ago

someone out there, probably a professor of classics, can pay for a house with Homer.

I never will. I'll take the summary notes.

Comment by ethanrutherford 1 day ago

Basing what you consider important to know solely on "what can make me money" is a very self-sabotaging way to live life.

Comment by LLMCodeAuditor 1 day ago

Related:

  Unfortunately, given participant feedback and surveys, we believe that the data from our new experiment gives us an unreliable signal of the current productivity effect of AI tools. The primary reason is that we have observed a significant increase in developers choosing not to participate in the study because they do not wish to work without AI, which likely biases downwards our estimate of AI-assisted speedup.
(https://metr.org/blog/2026-02-24-uplift-update/)

This was a huge red flag! Within a year a large majority of devs became so whiny and lazy that METR couldn't fill the "no AI" bucket for their study - it's not like this was a full-time job, just a quick gig, and it was still too much effort for their poor LLM-addled brains. At the time I thought it was a terrible psychological omen.

I am so glad I don't use this stuff.

Comment by econ 1 day ago

The next study will be just to ask and measure how fast they run away.

Comment by gjsman-1000 1 day ago

All fun and games until the first time someone successfully sues an employer who mandated it and wins a mental health claim.

The moment that happens, insurance flips tables, OSHA starts asking if they need exposure controls, and employers back down.

And that’s the good scenario! The bad scenario is an employer mandated it, and someone mentally declined to the point they committed a public act of violence.

Comment by BoneShard 1 day ago

and the last piece of the remaining work moves to a place with less strict mandates.