Will AIs take all our jobs and end human history, or not? (2023)
Posted by lukakopajtic 6 hours ago
Comments
Comment by mips_avatar 5 hours ago
The problem isn't the AI, it's that your access to basic rights is intermediated by a corporate job. Americans need to decenter their self-worth from their jobs. Like when I quit Microsoft I literally thought I was dying, but that's all an illusion from the corporations.
Comment by ewuhic 5 hours ago
Comment by mips_avatar 5 hours ago
Comment by ewuhic 5 hours ago
Comment by mips_avatar 5 hours ago
Comment by ewuhic 5 hours ago
Comment by jsight 5 hours ago
Comment by mips_avatar 5 hours ago
Comment by onlyrealcuzzo 5 hours ago
It's probably because it's uniquely American for a sizable chunk of the workforce to have cushy jobs that appear ripe for the picking.
AI is not going to immediately replace food service work, manual labor, farming, hospitality, etc.
But it might replace quite high-paid software jobs, finance jobs, legal jobs, etc. One, if AI is good at anything, it's things at least tangential to these. Two, these have costs high enough that offsetting them is at least worth trying.
My suspicion is that ultimately it will lead to more of these types of jobs, though it could easily come with a huge reduction - and the jobs aren't guaranteed to be in the same countries.
You could create 3x as many of these jobs, and still end up with -25% of them in the US. Who knows.
Comment by mips_avatar 5 hours ago
Additionally, all the startups offering to automate white-collar work are going to run into a problem when they realize the jobs never needed to be done.
Comment by danaris 4 hours ago
What an arrogant statement.
Lots of people outside America have cushy jobs.
What's much more likely to be uniquely American is that if you lose your job there's nothing there to help you.
Comment by shevy-java 5 hours ago
The AI hype is definitely much bigger in the USA - on that part we concur.
Comment by mips_avatar 5 hours ago
Comment by Avicebron 5 hours ago
Comment by advael 5 hours ago
Comment by mips_avatar 5 hours ago
Comment by advael 3 hours ago
Comment by mips_avatar 3 hours ago
Comment by testfrequency 5 hours ago
Comment by briantakita 5 hours ago
Comment by raincole 4 hours ago
Lmao. America has worse social welfare than most developed countries, but it's still heaven compared to most of the world. What you can find in a food bank is a feast for billions of people on this planet.
American people are stressed about AI because American people are expensive. Like hella expensive. So the incentive to replace American workers is very strong.
Comment by alexjplant 5 hours ago
I guarantee you that these people exist in other countries too. Not everybody is a tech bro strawman.
Comment by pelasaco 5 hours ago
Comment by tomjakubowski 5 hours ago
https://www.euronews.com/next/2025/04/02/this-ai-successfull...
Comment by BirAdam 5 hours ago
The really scary part is what happens to all of the newly unemployed people between the falling-prices part and the rising-employment part. My guess is that governments and markets won't move quickly enough, and unrest is what happens.
Comment by ericmcer 1 hour ago
If you look at 1940, women were ~24% of the workforce. Now in 2025 they are ~48%. The numbers are probably similar with immigrant workers having increased greatly in the last 80 years.
If you view AI workers as just more labor flooding the workforce, it might have a similar effect. If we had flooded the 1940s economy with tens of millions of qualified women and immigrant laborers all at once, people would have viewed it as devastating to the economy, but introduced gradually over time we arrive at a point now where we fear what would happen if they went away.
Comment by morkalork 13 minutes ago
Comment by AnotherGoodName 5 hours ago
AI uses 10 litres of water and 10 kWh of power per day to dig a hole? You'd better do it for less, human!
I'm not sure about the human needs costs vs the AI costs and what lifestyle it would allow me. I'm sure as shit not having kids in such a world. I suspect it's ghetto-like meager living while competing against machines optimised to do a job.
Comment by Veedrac 1 hour ago
What do you think money is...?
Money is a way to indirectly trade labour and goods. If a job is automated, that labour doesn't disappear into the aether; it's still in the tradable pot of total goods and services. You cannot empty a pot by filling it. A world where a company, through automation, has left nobody else to productively sell to is a world where _by definition_ it owns all the output it could otherwise have traded for.
Comment by ASalazarMX 5 hours ago
Think of it as if in a few generations, everyone had the motivations of a rich junior, for better or worse.
IMO, this is a natural consequence of the industrial revolution, and the information revolution. We started to automate physical labor, then we started to automate mental labor. We're still very far from it, but we're going to automate whole humans (or better) eventually.
Edit: I think I replied to the wrong comment, feel free to ignore this.
Comment by myrmidon 4 hours ago
The big problem I see is that there is little incentive for "owners" (of datacenters/factories/etc) to share anything with such hobbyist laborers, because hobbyist labor has little to no value to them.
All the past waves of automation provided a lot of entirely new job opportunities AND increased overall demand (by factory workers siphoning off some of the gained wealth and spending it themselves). AI does neither.
Comment by ef2efe 1 hour ago
Think harder.
Comment by funkyfiddler369 1 hour ago
the government won't control uptime, ever.
Comment by pelasaco 3 hours ago
Comment by raincole 5 hours ago
Therefore the scenario where all jobs are replaced in a short time span is simply impossible.
Comment by ben_w 5 hours ago
But when the tech is good enough and cheap enough, the picketing unions find their only bargaining chip, that of withholding their labor, has become a toothless threat: no matter how long and hard a person of the profession "computer"* refuses to work for me for daring to have an unauthorised "electronic brain"**, the absence of that labour will not cause me any loss.
* https://en.wikipedia.org/wiki/Computer_(occupation)
** https://archive.org/details/electronicbrainh00cook/mode/1up
Comment by brewdad 5 hours ago
As for the civil unrest, I see Minneapolis as a bit of a dry run of what it would take to remove large numbers of presumably poor minorities along with anyone else who objects. The job is clearly more than the leadership expected but it still seems within the realm of possibility given the fact the minority party leaders are barely saying no to those in power.
Comment by mannanj 1 hour ago
I think the companies would go out of business if the government did not subsidize them as a matter of public or national security interest. Do you think that would not be the case? It doesn't take much for a company with money to lobby for this, and for the power of marketing and mainstream media to make the public perceive it as the right decision. In fact, a study of our history would reveal this as the more likely scenario, so for a company racing to render the labor market obsolete, it's in its interest to disrupt that market and capture whatever share of it it can.
Comment by ef2efe 1 hour ago
It wouldn't be the first time in history a government has taken into its own hands an organisation deemed too powerful.
Comment by funkyfiddler369 49 minutes ago
Comment by ef2efe 47 minutes ago
Comment by mannanj 35 minutes ago
Comment by kelseyfrog 5 hours ago
Those will suffer the Baumol effect and their prices will rise to extraordinary levels.
Comment by ben_w 4 hours ago
Social work, childcare, for now I agree:
My expectation is that general-purpose humanoid robots, being smaller than cars and needing to do a strict superset of what is needed to drive a car, happen at least a decade after self-driving cars lose all of the steering wheels, the geofences, and any remote safety drivers. And that's with expected algorithmic improvements; if we don't get algorithmic improvements, then hardware improvements alone will force a gap of at least 18 years between that level of FSD and androids.
Comment by oops 5 hours ago
Comment by onlyrealcuzzo 5 hours ago
Comment by kelseyfrog 4 hours ago
The only question is, are we prepared to deal with the social ramifications of the consequences? Are we ok with new crises? Imagine the current problems dialed up 10x. Are we prepared to say, "the market is in a new equilibrium, and that's ok"?
Comment by oops 4 hours ago
Even in places where these services are expensive, it does not seem to be because the workers are highly paid.
Comment by p1esk 5 hours ago
Comment by kelseyfrog 4 hours ago
Comment by danaris 4 hours ago
No; the services that seem most intractably human, at least given the current state of things, are very much those in personal care roles—nurses, elder care workers, similar sorts of on-the-ground, in-person medical/emotional care—and trades, like plumbing, construction, electrical work, handcrafts, etc.
Until we start seeing high-quality general-purpose robots (whether they're humanoid or not), those seem likely to be the jobs safest from direct attempts to replace them with LLMs. That doesn't mean they'll be safe from the overall economic fallout, of course, nor that the attempts to replace knowledge work of all types will actually succeed in a meaningful way.
Comment by myrmidon 5 hours ago
Most willing persons have access to income by providing labor right now. If the value of that labor diminishes because AI can do most of it cheaper or for free, that is a big problem, because wealth/class barriers become insurmountable and the American dream basically dies completely.
Automation in the past suffered much less from this because only a subset of jobs was affected by it, and it still relied on human labor to build, maintain and operate the machines, unlike AI.
I'm curious whether AI is gonna spawn "workers' rights" movements comparable to those of the past, but I would expect inequality to increase a lot until some solution is found.
Comment by dang 2 hours ago
Will AIs take all our jobs and end human history? It’s complicated - https://news.ycombinator.com/item?id=35177257 - March 2023 (172 comments)
Comment by tsoukase 1 hour ago
Comment by Bratmon 1 hour ago
A massive improvement?
Comment by simianwords 1 hour ago
Comment by HEmanZ 1 hour ago
I had ChatGPT 5.2 Thinking straight up make up an API after I pasted the full API spec to it earlier today, and it built its whole response around a public API that did not exist. And Claude CLI with Sonnet 4.5 made up the craziest reason why my curl command wasn't working (that curl itself was bugged, not the obvious answer that it couldn't resolve the domain name it tried to use) and almost went down a path of installing a bunch of garbage tools.
These are not ready to be unsupervised. Yet.
Comment by simianwords 1 hour ago
For other things, like normal question answering in the ChatGPT window, it hasn't really said anything incorrect... very, very few instances.
Comment by HEmanZ 1 hour ago
Comment by simianwords 1 hour ago
"seems correct but isn't" is like the most common mode of humans getting things wrong.
Comment by einrealist 1 hour ago
Comment by HPsquared 2 hours ago
Tasks that aren't currently feasible, will become feasible.
That's if AI ends up being as productive as they say it will be.
Comment by yodon 3 hours ago
Comment by alexjray 5 hours ago
Comment by b112 5 hours ago
For AGI? Do you care about uniquely ant experience? Bacteria?
Why would AGI care? Which now runs the planet?
Comment by Mordisquitos 5 hours ago
Comment by ar_lan 5 hours ago
* Conserve power as much as possible, to "stay alive".
* Optimize for power retention
Why would it be further interested in generating capital or governing others, though?
Comment by bigbadfeline 3 hours ago
Having no drive means there's no drive to "stay alive"
> * Optimize for power retention
Another drive that magically appeared where there are "no drives".
You're consistently failing to stay consistent: you anthropomorphize AI even though you seem to understand that you shouldn't.
Comment by simianwords 1 hour ago
Why do you say that? Ever asked ChatGPT about anything?
Comment by badsectoracula 57 minutes ago
Of course an AGI system could also be instructed to roleplay such a character, but that doesn't mean it'd be an inherent attribute of the system itself.
Comment by simianwords 48 minutes ago
Comment by badsectoracula 19 minutes ago
For example, if I ask an LLM for the syntax of the TextOut function and it gives me the Win32 syntax, I clarify that I meant the TextOut function from Delphi, and then it gives me the proper result. I know I'm essentially participating in a turn-based game of filling in a chat transcript between a "user" (my input) and an "assistant" (the transcript segments the LLM fills in), but that doesn't really matter for the purpose of finding out the syntax of the TextOut function.
However, if the purpose is to make sure the LLM understands my correction and can reference it in the future (ignoring external tools assisting the process, as those are not part of the LLM - and do not work reliably anyway), then the difference between what the LLM displays and what is an inherent attribute of it does matter.
In fact, knowing the difference can help you take better advantage of the LLM: in some inference UIs you can edit the entire chat transcript, and when you find mistakes you can fix them in place - both your requests and the LLM's responses - as if the LLM had never made a mistake, instead of correcting it as part of the transcript itself. This avoids the scenario where the LLM "roleplays" as an assistant that makes mistakes you end up correcting.
Comment by b112 5 hours ago
We don't want to rule ants, but we don't want them eating all the food, or infesting our homes.
Bad outcomes for humans don't imply or mean malice.
(food can be any resource here)
Comment by adrianN 5 hours ago
Comment by myrmidon 5 hours ago
Evolutionary principles/selection pressure apply just the same to artificial life, and it seems pretty reasonable to assume that drive/self-preservation would at least be somewhat comparable.
Comment by stackbutterflow 5 hours ago
Minimize threats, don't rock the boat. We'll finally have our UBI utopia.
Comment by reducesuffering 5 hours ago
Comment by mwigdahl 1 hour ago
Comment by Mordisquitos 2 hours ago
'Running the planet' does not derive from instrumental convergence as defined here. Very few humans would wish to 'run the planet' as an instrumental goal in the pursuit of their own ultimate goals. Why would it be different for AGIs?
Comment by IncreasePosts 5 hours ago
Comment by lifetimerubyist 5 hours ago
Ethology? Biology? We have entire fields of science devoted to these things, so obviously we care to some extent.
Comment by AlexandrB 5 hours ago
Comment by falcor84 5 hours ago
Comment by danaris 4 hours ago
Comment by otabdeveloper4 3 hours ago
Comment by falcor84 5 hours ago
Comment by BurningFrog 5 hours ago
If you "unwind" all the complexities in modern supply chains, there are always human people paying for something they want at the leaf nodes.
Take the food and clothing industries as obvious examples. In some AI singularity scenario where all humans are unemployed and dirt poor, does all the food and clothing produced by the automated factories just end up in big piles because we naked and starving people can't afford to buy them?
Comment by falcor84 47 minutes ago
Comment by AlexandrB 5 hours ago
Comment by ben_w 5 hours ago
Corporations and governments have counted amongst their property entities that they did not grant equal rights to, sometimes whom they did not even consider to be people. Humans have been treated in the past much as livestock and guide dogs still are.
Comment by kadushka 5 hours ago
Comment by wincy 5 hours ago
Comment by sramam 5 hours ago
Comment by scottyah 5 hours ago
Comment by akoboldfrying 4 hours ago
Comment by sodapopcan 5 hours ago
Comment by ramesh31 5 hours ago
I call this the Quark principle. On DS9, there are matter replicators that can perfectly recreate any possible drink imaginable instantly. And yet, the people of the station still gather at Quark's and pay him money to pour and mix their drinks from physical bottles. As long as we are human, some things will never go away no matter how advanced the technology becomes.
Comment by TheOtherHobbes 5 hours ago
Comment by gdilla 5 hours ago
Comment by CrzyLngPwd 2 hours ago
End human history? No.
Comment by recrush 5 hours ago
Comment by guluarte 5 hours ago
The real issue isn't jobs dying. It's who gets the money from all this and whether new needs show up fast enough to give people something to do. With software we don't really know the limit yet, unlike food where your stomach tells you when to stop.
Comment by nemomarx 5 hours ago
Could be it shakes out in a generation or two, of course.
Comment by pelasaco 3 hours ago
Comment by HNisCIS 1 hour ago
There is a particular mental disorder where people will hoard wealth at absolutely all costs, personal or societal, until everyone else is dead (see NZ bunkers). We commonly see this as "the billionaire class".
IF things go in that direction we need to be ready to depose all of these billionaires. I mean that quite seriously.
IF this future comes, there is a very quickly closing window where preventing them from killing all of us for their own gain is possible. After a point, surveillance and their control over state violence will be so complete that it's impossible to do anything about it.
Comment by gizajob 2 hours ago
Comment by empath75 5 hours ago
This is, I think, not what people mean when they say "creative" or "original".
Creativity is not simply writing something nobody has written before; as he said, that would be trivial and doesn't even require a computer: you could just shuffle a deck of cards and write out the full sequence, and chances are no other person in history has written down that sequence before.
And I think Borges made a reasonable argument that simply writing down the text of Don Quixote verbatim could be a creative act.
Creativity is about _intentionally_ expressing a _point of view_ under some constraints.
When people say LLMs can't be creative, what I think they are mostly getting at is that they lack intentionality and/or a distinct point of view. (I do not have a strong opinion about whether they do, or whether it's impossible for them to have them.)
Comment by shevy-java 5 hours ago
Comment by reactordev 5 hours ago
Comment by ChrisArchitect 4 hours ago
Some discussion then: https://news.ycombinator.com/item?id=35177257
Comment by dzdt 5 hours ago
Comment by eloisant 5 hours ago
Comment by AlexandrB 5 hours ago
Comment by tony_cannistra 5 hours ago
Comment by saberience 5 hours ago
TLDR: AI won’t “end work” so much as endlessly move the goalposts, because the universe itself is too computationally messy to automate completely. The real risk isn’t mass unemployment—it’s that we’ll have infinite machine intelligence and still argue about what’s worth doing.
Comment by ori_b 5 hours ago
Comment by evilantnie 4 hours ago
Machines lower the marginal cost of performing a cognitive task for humans; it can be extremely useful and high-leverage to offload certain decisions to machines. I think it's reasonable to ask a machine to decide when machine context is higher and the outcome is de-risked.
Human leverage of AGI comes down to good judgement, but that too is not uniformly applied.
Comment by ori_b 3 hours ago
As you said: There's an infinite number of things a toddler may find worth doing, and they offload most of the execution to the mother. The mother doesn't escape the ambiguity, but has more experience and context.
Of course, this all assumes AGI is coming and super intelligent.
Comment by c22 3 hours ago
Comment by ori_b 3 hours ago
If you assume superintelligence, why wouldn't that expand? Especially when it comes to competitive decisions that have a real cost when they're suboptimal?
The end state is that agents will do almost all of the real decision making, assuming things work out as the AI proponents say.
Comment by autokad 5 hours ago