Please don't say mean things about the AI I just invested a billion dollars in

Posted by randycupertino 22 hours ago


Comments

Comment by seizethecheese 21 hours ago

> There’s an extremely hurtful narrative going around that my product, a revolutionary new technology that exists to scam the elderly and make you distrust anything you see online, is harmful to society

The article is certainly interesting as yet another indicator of the backlash against AI, but I must say, “exists to scam the elderly” is totally absurd. I get that this is satire, but satire has to have some basis in truth.

I say this as someone whose father was scammed out of a lot of money, so I'm certainly not numb to potential consequences there. The scams were enabled by the internet; does the internet exist for this purpose? Of course not.

Comment by muvlon 21 hours ago

The article names a lot of other things that AI is being used for besides scamming the elderly, such as making us distrust everything we see online, generating sexually explicit pictures of women without their consent, stealing all kinds of copyrighted material, driving autonomous killer drones and more generally sucking the joy out of everything.

And I think I'm inclined to agree. There are a small number of things that have gotten better due to AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet by comparison feels like a clear net positive to me, even with all the bad it enables.

Comment by pixl97 20 hours ago

Here's the thing with AI: especially as it becomes more AGI-like, it will encompass all human behaviors. This will lead to the bad behaviors becoming especially noticeable, since bad actors quickly realized this is a force multiplication factor for them.

This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks and they may not be immediately evident. I know with the information superhighway few expected it to turn into a dopamine drip feed for advertising dollars, yet here we are.

Comment by ethbr1 17 hours ago

> bad actors quickly realized this is a force multiplication factor for them

You'd think we would have learned this lesson in failing to implement email charges that netted to $0 for balanced send/receive patterns, thereby ushering in a couple decades of spam, only eventually solved by centralization (Google).

Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.

Comment by mogsor 11 hours ago

AI doesn't encompass any "human behaviours"; the humans controlling it do. Grok doesn't generate nude pictures of women because it wants to, it does it because people tell it to and it has (or had) no instructions to the contrary.

Comment by vrighter 4 hours ago

If it can generate porn, it can do so because it was explicitly trained on porn. Therefore the system was designed to generate porn. It can't just materialize a naked body without having seen millions of them. They do not work that way.

Comment by pixl97 6 hours ago

I hate to be a smartass, but do you read the stuff you type out?

>Grok doesn't generate nude pictures of women because it wants to,

I don't generate chunks of code because I want to. I do it because that's how I get paid and like to eat.

What's interesting with LLMs is they are more like human behaviors than any other software. First, you can't tell non-AI (not just genAI) software to generate a picture of a naked woman; it doesn't have that capability. So after that you have models that are trained on content such as naked people. I mean, that's something humans are trained on, unless we're blind I guess. If you take a data set encompassing all human behaviors, which we do, then the model will have human-like behaviors.

It's in post training that we add instructions to the contrary. Much like if you live in America you're taught that seeing naked people is worse than murdering someone and that if someone creates a naked picture of you, your soul has been stolen. With those cultural biases programmed into you, you will find it hard to do things like paint a picture of a naked person as art. This would be OpenAI's models. And if you're a person who wanted to rebel, or lived in a culture that accepted nudity, then you wouldn't have a problem with it.

How many things do you do because society programmed you that way, and you're unable to think outside that programming?

Comment by n8cpdx 19 hours ago

You’re way off base. It can also create sexually explicit pictures of men.

Comment by abustamam 15 hours ago

Not sure if you're being sarcastic, but women are disproportionately affected by this compared to men.

Comment by apublicfrog 2 hours ago

That sounds like it could be true, but do you have any actual evidence of that?

Comment by sunaookami 12 hours ago

So everything that was already done before generative AI.

Comment by mogsor 11 hours ago

It's true, making these things easier and faster and more accessible really doesn't matter

Comment by collingreen 4 hours ago

That's a bonkers take.

Am I misunderstanding you or are you somehow saying anything done in the past is fine to do more of?

Comment by yifanl 2 hours ago

Poe's Law, mate.

Comment by notanastronaut 6 hours ago

>The internet by comparison feels like a clear net positive to me, even with all the bad it enables.

When I think of the internet, I think of malware, porn, social media manipulating people, flame wars, "influencers", and more.

It is also used to scam the elderly, sharing photoshopped sexually explicit pictures of men, women, and children without their consent, stealing all kinds of copyrighted material, and definitely sucking the joy out of everything. Revenge porn wasn't started in 2023 with OpenAI. And just look at Meta's current case about Instagram being addictive and harmful to children. If "AI" is a tech carcinogen, then the internet is a nuclear reactor, spewing radioactive material every which way. But hey, it keeps the lights on! Clearly, a net positive.

Let's just be intellectually consistent, that's all I'm saying.

Comment by taurath 20 hours ago

> I get that this is satire, but satire has to have some basis in truth.

Do you think that it isn't used for this? The satirical part is expanding that use case to say it exists purely for that purpose.

Comment by panda-giddiness 11 hours ago

It's satire. It's supposed to be absurd. Why else do students still read A Modest Proposal nearly three hundred years after its publication?

Regardless, LLMs are already being abused to mass produce spam, and some of that spam has almost certainly been employed to separate the elderly from their savings, so there's nothing particularly implausible about the satirical product, either.

Comment by tim333 13 hours ago

The original interview the article is spoofing has interviewers asking Huang about the narrative that:

>It's the jobs and employment. Nobody's going to be able to work again. It's God AI is going to solve every problem. It's we shouldn't have open source for XYZ... https://youtu.be/k-xtmISBCNE?t=1436

and he says an "end of the world science fiction narrative" is hurtful.

Comment by ajkjk 20 hours ago

if you make a thing and the thing is going to be inevitably used for a purpose and you could do something about that use and you do not --- then yes, it exists for that purpose, and you are responsible for it being used in that way. you don't get to say "ah well who could have seen this inevitable thing happening? it's a shame nobody could do anything about it" when it was you that could have done something about it.

Comment by jychang 20 hours ago

Yeah. Example: stripper poles. Or hitachi magic wands.

Those poles WERE NOT invented for strippers/pole dancers. Ditto for the hitachis. Even now, I'm pretty sure more firemen use the poles than strippers. But that doesn't stop the association from forming. That doesn't make me not feel a certain way if I see a stripper pole or a hitachi magic wand in your living room.

Comment by pluralmonad 20 hours ago

I'm super confused what harms come from stripper poles and vibrators. I am prepared to accept that the joke might have gone right over my head.

Comment by ajkjk 16 hours ago

I don't get the jump either but it was certainly lateral enough to be amusing

Comment by wizardforhire 20 hours ago

To be fair to the magic wands, that's why "massagers" were invented in the first place. [1] [2] [3]

[1] https://thefactbase.com/the-vibrator-was-invented-in-1869-to...

[2] https://archive.nytimes.com/www.nytimes.com/books/first/m/ma...

[3] https://en.wikipedia.org/wiki/Female_hysteria

Comment by irishcoffee 15 hours ago

And I'll go out on a limb and say the first person to use a pole resembling a fire pole in the fireman vs stripper debate was probably the stripper!

Comment by blibble 19 hours ago

how many front rooms have you walked into that had a stripper pole?

(also: what city? for a friend...)

Comment by anonymars 20 hours ago

> you...could have done something about it

What is it that isn't being done here, and who isn't doing it?

Comment by ajkjk 16 hours ago

In this case we're debating whether one of the purposes of AI is to scam the elderly. Probably 'purpose' is not quite the right word, but the point would be: it is not the purpose of AI to not scam the elderly (or it would explicitly prevent that).

(note: I do not actually know if it explicitly prevents that. But because I am very cynical about corporations, I'd tend to assume it doesn't.)

Comment by rgmerk 20 hours ago

My hypothesis: Generative AI is, in part, reaping the reaction that cryptocurrency sowed.

Comment by drzaiusx11 20 hours ago

Training a model on voice data from readily available public social network posts and targeting their followers (which on, say, FB would include family and is full of "olds") isn't a very far-fetched use case for AI. I've created audio models used as audiobook narrators where you can trivially make a "frantic/panicked" voice clip saying "help it's [grandson], I'm in jail and need bail. Send money to [scammer]"

If it's not happening yet, it will...

Comment by evandrofisico 20 hours ago

It is happening already. Recently a Brazilian woman living in Italy was scammed into thinking she was in an online relationship with a Brazilian TikToker; the scammers created a fake profile and sent her audio messages with the TikToker's voice cloned via AI. She sent the scammers a lot of money for the wedding, but when she arrived in Brazil she discovered the con.

Comment by bandrami 19 hours ago

It's already happening in India. Voicefakes are working unnervingly well and it's amplified by the fact that old people who had very little exposure to tech have basically been handed a smart phone that has control of their pension fund money in an app.

Comment by tremon 6 hours ago

The article doesn't specify which elderly they're referring to. They've certainly successfully captured the gerontocrats in Washington and Wall Street who keep buoying their assets.

Comment by ryan_lane 21 hours ago

Scammers are using AI to copy the voice of children and grandchildren, and make calls urgently asking to send money. It's also being used to scam businesses out of money in similar ways (copying the voice of the CEO or CFO, urgently asking for money to be sent).

Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.

Comment by seizethecheese 21 hours ago

Not at all. I'm saying AI doesn't exist to scam the elderly, which is saying nothing about whether it's dangerous in that respect.

Comment by only-one1701 20 hours ago

Perhaps you’ve heard that the purpose of a system is what it does?

Comment by the_snooze 20 hours ago

Exactly this. These systems are supposed to have been built by some of the smartest scientific and engineering minds on the planet, yet they somehow failed (or chose not) to think about second-order effects and what steady-state outcomes their systems will have. That's engineering 101 right there.

Comment by jacquesm 20 hours ago

That's because they were thinking about their stock options instead.

Comment by johnnyanmac 4 hours ago

That's a small part of why people became more cynical of tech over the decades. At least with the internet there were large efforts to try and nail down security in the early 00's. Imagine if we had instead let it devolve into a moderator-less hellscape where every other media post is some goatse-style jump scare.

That's what it feels like with AI. But perhaps worse since companies are lobbying to keep the chaos instead of making a board of standards and etiquette.

Comment by rcxdude 20 hours ago

This phrase almost always seems to be invoked to attribute purpose (and more specifically, intent and blame) to something based on outcomes, when it should instead be considered a way to stop thinking in those terms in the first place.

Comment by irjustin 20 hours ago

In broad strokes - disagree.

This is the knife-food vs knife-stab vs gun argument. Just because you can cook with a hammer doesn't make it its purpose.

Comment by solid_fuel 20 hours ago

> Just because you can cook with a hammer doesn't make it its purpose.

If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.

If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.

Comment by pixl97 20 hours ago

I mean, this is a pretty piss-poor example.

Email, by number of messages attempted, is dominated by spammers 10 to 100 fold over legitimate email. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it shows up in your mailbox.

To go back one step further, porn was one of the first successful businesses on the internet; that alone would have been more than enough motivation for our more conservative congress members to ban the internet in the first place.

Comment by paulryanrogers 19 hours ago

Email volume is mostly robots fighting robots these days.

Today if we could survey AI contact with humans, I'm afraid the top categories by a wide margin would be scams, cheating, deep fakes, and porn.

Comment by johnnyanmac 4 hours ago

>that is more than enough motivation for our more conservative congress members to ban the internet in the first place

Yes, and now porn is highly regulated. Maybe that's a hint?

Comment by christianqchung 20 hours ago

Is it possible that these are in the top 10, but not the top 5? I'm pretty sure programming, email/meeting summaries, cheating on homework, random QA, and maybe roleplay/chat are the most popular uses.

Comment by jacquesm 20 hours ago

The number of programmers in the world is vastly outnumbered by the people that do not program. Email / meeting summaries: maybe. Cheating on homework: maybe not your best example.

Comment by only-one1701 20 hours ago

I was going to reply to the post above but you said it perfectly.

Comment by NicuCalcea 20 hours ago

I can't think of many other reasons to create voice cloning AI, or deepfake AI (other than porn, of course).

Comment by rgmerk 19 hours ago

There are legitimate applications - fixing a tiny mistake in the dialogue in a movie in the edit suite, for instance.

Do these legitimate applications justify making these tools available to every scammer, domestic abuser, child porn consumer, and sundry other categories of criminal? Almost certainly not.

Comment by wk_end 20 hours ago

No one - neither the author of the article nor anyone reading - believes that Sam Altman sat down at his desk one fine day in 2015 and said to himself, “Boy, it sure would be nice if there were a better way to scam the elderly…”

Comment by username223 19 hours ago

And no one believes that Sam Altman thinks of much more than adding to his own wealth and power. His first idea was a failing location data-harvesting app that got bought. Others have included biometric data-harvesting with a crypto spin, and this. If there's a throughline beyond manipulative scamming, I don't see it.

Comment by burnto 20 hours ago

Fair, but it’s an exaggerated statement that’s supposed to clue us into the tone of the piece with a chuckle. Maybe even a snicker or giggle! It’s not worth dissecting for accuracy.

Comment by criley2 21 hours ago

Sure, phones aren't directly doing the scamming, but they're supercharging the ability to do so.

Phones are also a very popular mechanism for scamming businesses. It's tough to pull off CEO scams without text and calls.

Therefore, phones are bad?

This is of course before we talk about what criminals do with money, making money truly evil.

Comment by only-one1701 20 hours ago

Without phones, we couldn’t talk to people across great distances (oversimplification but you get it).

Without Generative AI, we couldn’t…?

Comment by simianwords 13 hours ago

What's the big deal about talking to people across great distances? We can live without it.

Comment by shepherdjerred 20 hours ago

Are you really implying that generative AI doesn't enable things that were not previously possible?

Comment by Larrikin 20 hours ago

It's actually a fair question. There are software projects I wouldn't have taken on without an LLM. Not because I couldn't make it. But because of the time needed to create it.

I could have taken the time to do the math to figure out what the rewards structure is for my Wawa points and compare it to my car's fuel tank to discover I should strictly buy sandwiches and never gas.

People have been making nude celebrity photos for decades now with just Photoshop.

Some activities have gotten a speed up. But so far it was all possible before just possibly not feasible.

Comment by shepherdjerred 5 hours ago

Would it be fair to say a car or plane aren’t significant then, given we could always traverse by horse or boat?

Comment by simianwords 13 hours ago

What did the internet bring?

Comment by jamiek88 20 hours ago

Name some then! I initially scoffed too, but I can only think of things LLMs make easier, not things that were impossible previously.

Comment by pixl97 19 hours ago

Isn't that the vast majority of products? By making things easier, they change the scale at which things are accomplished. Farming wasn't previously impossible before the tractor.

People seemingly have some very odd views on products when it comes to AI.

Comment by freejazz 19 hours ago

> were not previously possible?

How obtuse. The poster is saying they don't enable anything of value.

Comment by queenkjuul 20 hours ago

For the most part, it hasn't. What do you consider previously impossible, and how is it good for the world?

Comment by solid_fuel 20 hours ago

Can you name one thing generative AI enables that wasn't previously possible?

Comment by pixl97 19 hours ago

Can you name one thing a plow enables that wasn't previously possible?

This line of thinking is ridiculous.

Comment by Larrikin 14 hours ago

A plow enables you to till land you couldn't before with your bare hands.

The phone lets you talk to someone you couldn't before when shouting can't.

ChatGPT lets you...

Please complete the sentence without an analogy

Comment by pixl97 6 hours ago

>A plow enables you to till land you couldn't before with your bare hands.

It does not. You could still till the land with hand tools. You just get a lot more done.

ChatGPT lets me program in languages I was not proficient in before.

Anyway, I'm done with your technology purity contest, it has about zero basis in reality.

Comment by Larrikin 5 hours ago

Why are you so mad? You're the only one in these comments dismissing arguments because you don't like them. Are you invested?

Comment by pixl97 5 hours ago

No. I'm just stating that a huge portion of these comments have their own emotional investment and are confusing OUGHT/IS. On top of that their arguments aren't particularly sound, and if they were applied to any other technologies that we worship here in the church of HN would seem like an advanced form of hypocrisy.

Comment by simianwords 13 hours ago

This conversation is naive and simplifies technologies into “does it achieve something you otherwise couldn’t”.

The answer is that ChatGPT allows you to do things more efficiently than before. Efficiency doesn't sound sexy, but this is what adds up to higher prosperity.

Arguments like this can be used against the internet. What does it allow you to do now that you couldn't do before?

Answer might be “oh I don’t know, it allows me to search and index information, talk to friends”.

It doesn’t sound that sexy. You can still visit a library. You can still phone your friends. But the ease of doing so adds up and creates a whole ecosystem that brings so many things.

Comment by mcv 13 hours ago

...generate piles of low quality content for almost free.

AI is fascinating technology with undoubtedly fantastic applications in the future, but LLMs mostly seem to be doing two things: provide a small speedup for high quality work, and provide a massive speedup to low quality work.

I don't think it's comparable to the plow or the phone in its impact on society, unless that impact will be drowning us in slop.

Comment by pixl97 6 hours ago

There is a particular problem that comes with your line of thinking, and it's one AI will never be able to solve. In fact, it's not a solved human problem either.

And that is: slop work is always easier and cheaper than doing something right. We can make perfectly good products as it is, yet we find Shein and Temu filled with crap. That's not related to AI. Humans drown themselves in trash whenever we gain the technological capability to do so.

To put this another way, you cannot get a 10x speed up in high quality work without also getting a 1000x speed up in low quality work. We'll pretty much have to kill any further technological advancement if that's a showstopper for you.

Comment by criley2 10 hours ago

They tilled by hand for thousands of years before inventing a plow to speed it up.

They spoke slowly, through letters, until phones sped it up.

We coded slowly, letter by letter, until agents sped it up.

Comment by JumpCrisscross 21 hours ago

> Therefore, phones are bad?

Phones are utilities. AI companies are not.

Comment by mrnaught 19 hours ago

>> enabled by the internet, does the internet exist for this purpose? Of course not.

I think the point the article was trying to make is: LLMs and new genAI tools helped the scammers scale their operations.

Comment by lostmsu 4 hours ago

So did the Internet

Comment by solid_fuel 21 hours ago

LLMs are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:

- advertising

- astroturfing

- other forms of botting

- scamming old people out of their money

Comment by ajross 21 hours ago

> [...] are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

True, but no more true than it is if you replace the antecedent with "people".

Saying that the tools make mistakes is correct. Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

History is paved with people who got steamrollered by technology they didn't think would ever work. On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.

Comment by solid_fuel 20 hours ago

> True, but no more true than it is if you replace the antecedent with "people".

Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example.

Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0]

[0] https://arxiv.org/abs/2401.11817

Comment by TheOtherHobbes 20 hours ago

The suggestion that hallucinations are avoidable in humans is quite a bold claim.

Comment by CamperBob2 20 hours ago

What you (and the authors) call "hallucination," other people call "imagination."

Also, you don't know very many people, including yourself, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but our faults and fallacies.

Comment by blibble 19 hours ago

what I call it is "buggy garbage"

it's not a person, it doesn't hallucinate or have imagination

it's simply unreliable software, riddled with bugs

Comment by CamperBob2 16 hours ago

(Shrug) Perhaps other sites beckon.

Comment by fao_ 20 hours ago

> Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies by companies who have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.

Comment by ajross 20 hours ago

> We have numerous studies on why hallucinations are central to the architecture,

And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point?

Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks.

Comment by TheOtherHobbes 20 hours ago

It's a fine line. Humans don't always fuck shit up.

But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a potential - not a reality - in the last century or so.

The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight.

Comment by echelon 20 hours ago

It's easily doubled my productivity as an engineer.

As a filmmaker, my friends and I are getting more and more done as well:

https://www.youtube.com/watch?v=tAAiiKteM-U

https://www.youtube.com/watch?v=oqoCWdOwr2U

As long as humans are driving, I see AI as an exoskeleton for productivity:

https://github.com/storytold/artcraft (this is what I'm making)

It's been tremendously useful for me, and I've never been so excited about the future. The 2010s and 2020s of cellphone incrementalism and social media platformization of the web were depressing. These models and techniques are actually amazing, and you can apply them to so many problems.

I genuinely want robots. I want my internet to be filtered by an agent that works for me. I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

Apart from all the other madness in the world, this is the one thing that has been a dream come true.

As long as these systems aren't owned by massive monopolies, we can disrupt the large companies of the world and make our own place. No more nepotism in Hollywood, no more working as a cog in the labyrinth of some SaaS company - you can make your own way.

There's financial capital and there's labor capital. AI is a force multiplier for labor capital.

Comment by navigate8310 20 hours ago

> I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

While I certainly respect your interactivity and the subsequent force-multiplier nature of AI, this doesn't mean you should try to emulate an already given piece of work. You'll certainly get a small dopamine hit when you successfully copy something, but it will also atrophy your critical skills and paralyze you from making any sort of original art. You'll miss out on discovering the feeling of doing frontier work that you can truly call your own.

Comment by blks 20 hours ago

So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chat bot do it for you? Or what part of that is generated by a chat bot?

Claims of productivity boosts must always be inspected very carefully, as they are often merely perceived, and reality may be the opposite (e.g. spending more time wrestling the tools), or creating unmaintainable debt, or making someone else spend extra time to review the PR and make 50 comments.

Comment by echelon 20 hours ago

> So instead of actually making films, thing you as a filmmaker supposedly like to do, you have some chat bot to do it for you? Or what part of that is generated by chat bot?

There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay.

I use a lot of 3D blocking with autoregressive editing models to essentially control for scene composition, pose, blocking, camera focal length, etc.

Here's a really old example of what that looks like (the models are a lot better at this now) :

https://www.youtube.com/watch?v=QYVgNNJP6Vc

There are lots of incredibly talented folks using Blender, Unreal Engine, Comfy, Touch Designer, and other tools to interface with models and play them like an orchestra - direct them like a film auteur.

Comment by CyberDildonics 1 hour ago

> I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

Do you know anything about "Hollywood grade VFX"? Have you ever worked for any company that does it?

> No more nepotism in Hollywood

Do you think "Hollywood VFX" is full of nepotism?

Comment by jacquesm 20 hours ago

As a rule real creativity blossoms under constraints, not under abundance.

Comment by prewett 4 hours ago

But new media also lets creativity blossom. The printing press eventually enabled novels through cost reduction. Prussian blue pigment is a large part of ukiyo-e's attraction; it got used a lot because it was new and was a better blue. The Gothic arch's improved strength compared to the circular arch enabled cathedrals with huge windows. Concrete enabled all sorts of fluid architecture; Soviet bus stations, for instance [1].

[1] https://www.russiabeyond.com/arts/327147-10-best-soviet-bus-...

Comment by echelon 19 hours ago

Trying to make a dent in the universe while we metabolize and oxidize our telomeres away is a constraint.

But to be more in the spirit of your comment, if you've used these systems at all, you know how many constraints you bump into on an almost minute to minute basis. These are not magical systems and they have plenty of flaws.

Real creativity is connecting these weird, novel things together into something nobody's ever seen before. Working in new ways that are unproven and completely novel.

Comment by gllmariuty 20 hours ago

> AI is a force multiplier for labor capital

for a 2011 account that's a shockingly naive take

yes, AI is a labor capital multiplier. and the multiplicand is zero

hint: soon you'll be competing not with humans without AI, but with AIs using AIs

Comment by Terr_ 19 hours ago

Even if it's >1, it doesn't follow that it's good news for the "labor capitalist".

"OK, so I lost my job, but even adjusting for that, I can launch so many more unfinished side-projects per hour now!"

Comment by queenkjuul 20 hours ago

Genuine question: does the agent work for you if you didn't build it, train it, or host it?

It's ostensibly doing things you asked it, but in terms dictated by its owner.

Comment by blibble 20 hours ago

indeed

and it's even worse than that: you're literally training your replacement by using it when it re-transmits what you're accepting/discarding

and you're even paying them to replace you

Comment by heliumtera 17 hours ago

always good to be in the pick and shovel biz

Comment by simianwords 13 hours ago

Extremely exaggerated comment. LLMs don't hallucinate that much. That doesn't rule them out of any control loop.

I mean, I think you have not put much thought into your theory.

Comment by awesome_dude 21 hours ago

I think that maybe the point isn't that the scams/distrust are "new" with the advent of AI, but "easier" and "more polished" than before.

The language of the reader is no longer a serious barrier/indicator of a scam. "A real bank would never talk like that" has become "that's exactly something they would say, the way they would say it."

Comment by johnnyanmac 4 hours ago

>I get that this is satire, but satire has to have some basis in truth.

The Trump administration is using AI generated imagery to advance his narrative, and it seems like it's a thing that mostly the elderly would fall for. So yes, there is some truth to it.

In general, the elderly will always be more vulnerable to technological exploitation.

Comment by thefz 14 hours ago

> The scams were enabled by the internet, does the internet exist for this purpose? Of course not.

But did it accelerate the whole process? Hell yeah.

Comment by gosub100 21 hours ago

It doesn't exist for that express purpose, but the voice and video impersonation is definitely being used to scam elderly people.

Instead of being used to protect us or make our lives easier, it is being used by evildoers to scam the weak and vulnerable. None of the AI believers will do anything about it because it kills their vibe.

Comment by JumpCrisscross 20 hours ago

> the voice and video impersonation is definitely being used to scam elderly people

And as with child pornography, the AI companies are engaging in high-octane buck passing more than actually trying to tamp down the problem.

Comment by techblueberry 18 hours ago

Porn was enabled by the internet’s but does the internet exist for this purpose?

Yes. Yes it does. That is the satire.

Comment by weebull 12 hours ago

> Why you think the net was born?

> Porn porn porn

Comment by wat10000 19 hours ago

They're used for scams. Isn't that the basis in truth you're looking for in satire?

Before this we had "the internet is for porn." Same sort of exaggerated statement.

Comment by ryanobjc 21 hours ago

I mean... explain sora.

Comment by internet101010 20 hours ago

Revolutionizing cat memes

Comment by popalchemist 17 hours ago

While the employees of the companies that make AI may have noble, even humanity-redeeming/saving intentions, the billionaire class absolutely has bond-villain level intentions. The destruction of the middle class and the removal of all livable-wage jobs is absolutely part of the techno-feudalist playbook that Trump, Altman, Zuckerberg, etc are intentionally moving toward. I'd say that is a scam. They want to recreate the conditions of earlier society - an upper class (them, who own the entire means of production and can operate the entire machine without the need for peons' input) who does whatever they want because the lower class is incapable of opposing them.

If you aren't familiar, look into it.

Comment by GoodJokes 16 hours ago

[dead]

Comment by some_furry 21 hours ago

[flagged]

Comment by ameliaquining 21 hours ago

The person you're replying to is probably not personally a major AI magnate.

Comment by thegrim000 21 hours ago

You mean the guy that has in his bio "YC and VC backed founder" and has made multiple posts in the last couple months dismissing different negative thoughts about AI? Yeah that guy probably doesn't have significant funds tied up in the success of AI.

Comment by seizethecheese 21 hours ago

I don’t, actually, unless you call index funds “tied up”.

To be honest, it’s really distasteful to make a high level comment about this article then have people rush to attack me personally. This is the mentality of a mob.

Comment by Brian_K_White 20 hours ago

One thing this characterization is not is honest.

Comment by seizethecheese 19 hours ago

What part is not honest?

Comment by Barrin92 20 hours ago

in this case a more appropriate term for the mob is "the people" because one defining dynamic of the rollout of this technology is that a minority of people seem to be extremely invested to shove it into the faces of a majority of people who don't want it, and then claim that they are visionaries and everyone else is 'the mob'.

Just like with Mark Zuckerberg's "Metaverse", we're now in a post-market vanity economy where not consumer demand but increasingly desperate founders, investors and gurus are trying to justify their valuations by doling out products for free and shoving their AI services into everything to justify the tens of billions they dumped into it.

I'm sorry that some people's pension funds, startup funding and increasingly the entire American economy rests on this collective delusion but it's not really most people's problem

Comment by shimman 21 hours ago

It becomes insulting when they think we're this foolish.

Comment by some_furry 21 hours ago

No, but the attitude is congruent, even if they don't have the investment money lying around to fill the shoes exactly.

Comment by gllmariuty 21 hours ago

article forgot to mention the usual "think about the water usage"

Comment by Retric 21 hours ago

What’s the point of attacking a straw man while ignoring the actual points being brought up?

The water usage by data centers is fairly trivial in most places. The water use manufacturing the physical infrastructure + electricity generation is surprisingly large but again mostly irrelevant. Yet modern ‘AI’ has all sorts of actual problems.

Comment by seizethecheese 21 hours ago

It mentions ecological destruction, which, I must say, is a much better framing than water usage; AI is a power hog, after all.

Comment by rootnod3 21 hours ago

If it's the "usual reply", maybe it's because....I dunno...water is kinda important?

Comment by queenkjuul 20 hours ago

I'm also not convinced the HN refrain of "it's actually not that much water" is entirely true. I've seen conflicting reports from sources I generally trust, and it's no secret that an all-GPU AI data center is more resource intensive than a general purpose data center.

Comment by vitajex 18 hours ago

> satire has to have some basis in truth

In order to be funny at least!

Comment by quantum_state 20 hours ago

Viewed from historical perspective, big tech is really reaping the benefits of the intellectual wealth accumulated over many thousands of years by humanity collectively. This should be recognized to find a better path forward.

Comment by mediaman 19 hours ago

How? They are all losing tens of billions of dollars on this, so far.

Open source models are available at highly competitive prices for anyone to use and are closing the gap to 6-8 months from frontier proprietary models.

There doesn't appear to be any moat.

This criticism seems very valid against advertising and social media, where strong network effects make dominant players ultra-wealthy and act like a tax, but the AI business looks terrible, and it appears that most benefits are going to accrue fairly broadly across the economy, not to a few tech titans.

NVIDIA is the one exception to that, since there is a big moat on their business, but not clear how long that will last either.

Comment by TheColorYellow 19 hours ago

I'm not so sure that's correct. The labs seem to offer the best overall products in addition to the best models. And requirements for models are only going to get more complex and stringent going forward. So yes, open source will be able to keep up from a pure performance standpoint, but you can imagine a future state where only licensed models can be used in commercial settings, and licensing will require compliance against limiting subversive use or similar (e.g. doesn't allow sexualization of minors, doesn't let you make a bomb, etc.).

When the market shifts to a more compliance-relevant world, I think the Labs will have a monopoly on all of the research, ops, and production know-how required to deliver. That's not even considering if Agents truly take off (which will then place a premium on the servicing of those agents and agent environments rather than just the deployment).

There are a lot of assumptions in the above, and the timelines certainly vary, so it's far from a sure thing - but the upside definitely seems there to me.

Comment by cj 18 hours ago

If that's the case, the winner will likely be cloud providers (AWS, GCP, Azure) who do compliance and enterprise very well.

If Open Source can keep up from a pure performance standpoint, any one of these cloud providers should be able to provide it as a managed service and make money that way.

Then OpenAI, Anthropic, etc end up becoming product companies. The winner is who has the most addictive AI product, not who has the most advanced model.

Comment by tru3_power 18 hours ago

What's the purpose of licensing requiring those things, though, if someone could just use an open source model to do them anyway? If someone were going to do the things you mentioned, why do it through some commercial enterprise tool? I can see licensing maybe requiring a certain level of hardening to prevent prompt injections, but ultimately it still comes down to how much power you give the model in whatever context it's operating in.

Comment by gizmodo59 19 hours ago

NVDA is not the only exception. Private big names are losing money, but there are so many public companies having the time of their lives. Power, materials, DRAM, storage, to name a few. The demand is truly high.

What we can argue about is whether AI is truly transforming everyone's lives; the answer is no. There is a massive exaggeration of benefits. But the value is not zero. It's not 100. It's somewhere in between.

Comment by dpc050505 5 hours ago

The opportunity cost of the billions invested in LLMs could lead one to argue that the benefits are negative.

Think of all the scientific experiments we could've had with the hundreds of billions being spent on AI. We need a lot more data on what's happening in space, in the sea, in tiny bits of matter, inside the earth. We need billions of people to learn a lot more things and think hard based on those axioms and the data we could gather exploring what I mention above to discover new ones. I hypothesize that investing there would have more benefit than a bunch of companies buying server farms to predict text.

CERN cost about 6 billion. Total MIT operations cost 4.7 billion a year. We could be allocating capital a lot more efficiently.

Comment by CrossVR 18 hours ago

I believe that eventually the AI bubble will evolve into a simple scheme to corner the compute market. If no one can afford high-end hardware anymore, then the companies who hoarded all the DRAM and GPUs can simply go rent-seeking by selling the compute back to us at exorbitant prices.

Comment by mikestorrent 18 hours ago

The demand for memory is going to result in more factories and production. As long as demand is high, there's still money to be made in going wide to the consumer market with thinner margins.

What I predict is that we won't advance in memory technology on the consumer side as quickly. For instance, a huge number of basic consumer use cases would be totally fine on DDR3 for the next decade. Older equipment can produce this; so it has value, and we may see platforms come out with newer designs on older fabs.

Chiplets are a huge sign of growth in that direction - you end up with multiple components fabbed on different processes coming together inside one processor. That lets older equipment still have a long life and gives the final SoC assembler the ability to select from a wide range of components.

https://www.openchipletatlas.org/

Comment by digiown 18 hours ago

That makes no sense. If the bubble bursts, there will be a huge oversupply and prices will fall. Unless Micron, Samsung, Nvidia, AMD, etc. all go bankrupt overnight, prices won't go up when demand vanishes.

Comment by charcircuit 18 hours ago

There is a massive undersupply of compute right now for the current level of AI. The bubble bursting doesn't fix that.

Comment by digiown 6 hours ago

There is a massive over-buying of compute, much beyond what is actually needed for the current level of AI development and products, paid for by investor money. When the bubble pops the investor money will dry up, and the extra demand will vanish. OpenAI buys memory chips to stop competitors from getting them, and Amazon owns datacenters they can't power.

https://www.bloomberg.com/news/articles/2025-11-10/data-cent...

Comment by jredwards 5 hours ago

I'd like to see evidence that open models are closing that gap. That would be promising.

Comment by charcircuit 19 hours ago

>losing tens of billions

They are investing 10s of billions.

Comment by bigstrat2003 18 hours ago

They are wasting tens of billions on something that has no business value currently, and may well never, just because of FOMO. That's not what I would call an investment.

Comment by charcircuit 16 hours ago

Many investments may lose money, but the EV here is positive due to the extreme utility that AI can and is bringing.

Comment by bandrami 18 hours ago

They are washing tens of billions of dollars in an industry-wide attempt to keep the music playing.

Comment by yowlingcat 19 hours ago

I agree with your point and it is to that point I disagree with GP. These open weight models which have ultimately been constructed from so many thousands of years of humanity are also now freely available to all of humanity. To me that is the real marvel and a true gift.

Comment by gruez 19 hours ago

>Open source models are available at highly competitive prices for anyone to use and are closing the gap to 6-8 months from frontier proprietary models.

What happens when the AI bubble is over and the developers of open models don't want to incinerate money anymore? Foundation models aren't like curl or OpenSSL. You can't maintain one with a few engineers' free time.

Comment by edoceo 18 hours ago

If the bubble is over, all the built infrastructure would become cheaper to train on? So those open models would incinerate less? Maybe there's an increase in specialist models?

Like after dot-com the leftovers were cheap - for a time - and became valuable (again) later.

Comment by bandrami 18 hours ago

No, if the bubble ends the use of all that built infrastructure stops being subsidized by an industry-wide wampum system where money gets "invested" and "spent" by the same two parties.

Comment by edoceo 14 hours ago

I feel like that was happening for the fiber-backhaul in 1999. Just different players.

Comment by compounding_it 18 hours ago

Training is really cheap compared to the basically free inference being handed out by OpenAI, Anthropic, Google, etc.

Spending a million dollars on training and giving the model away for free is far cheaper than spending hundreds of millions of dollars on inference every month and charging a few hundred thousand for it.

Comment by mikestorrent 18 hours ago

Not sure I totally follow. I'd love to better understand why companies are open sourcing models at all.

Comment by fHr 18 hours ago

The other side of the market:

Comment by ulfw 19 hours ago

It's turning out to be a commodity product. Commodity products are a race to the bottom on price. That's how this AI bubble will burst. The investments can't possibly show the ROIs envisioned.

As an LLM user I use whatever is free/cheapest. Why pay for ChatGPT if Copilot comes with my office subscription? It does the same thing. If not, I use DeepSeek or Qwen and get very similar results.

Yes if you're a developer on Claude Code et al I get a point. But that's few people. The mass market is just using chat LLMs and those are nothing but a commodity. It's like jumping from Siri to Alexa to whatever the Google thing is called. There are differences but they're too small to be meaningful for the average user

Comment by derektank 19 hours ago

Isn’t the reason we have a public domain so that people aren’t in a perpetual debt to their intellectual forebears?

Comment by gruez 19 hours ago

Copyrights last a very long time. Moreover, nothing says it has to be open. The recipe for Coke is still secret.

Comment by bandrami 18 hours ago

The recipe to Coca Cola is not copyrighted (recipes in general can't be) but is protected by trade secret laws, which can notionally last forever.

The recipe also isn't that much of a secret, they read it on the air on a This American Life episode and the Coca Cola spokesperson kind of shrugged it off because you'd have to clone an entire industrial process to turn that recipe into a recognizable Coke.

Comment by daveguy 18 hours ago

The recipe for Coke is not copyrighted; it is a trade secret. Trade secrets can last indefinitely if you can keep them secret. Copyrights are "open" by their nature.

Comment by gruez 18 hours ago

In the context of this discussion though, what makes you think openai can't keep theirs a trade secret?

Comment by daveguy 18 hours ago

I was agreeing it could last a very long time, even longer than copyright, but specifically because it is not copyright. As an AI model, though, it just won't have value for very long. Models are dated within 6 months and obsolete in 2 years. IP around development may last longer.

Comment by simianwords 13 hours ago

Why do you see it as zero sum? I don’t care if big tech is accumulating intellectual wealth. I’m getting good products.

Comment by justarandomname 20 hours ago

yeah, but zero chance of that happening unfortunately.

Comment by pear01 20 hours ago

well practiced cynicism is boring.

imo there are actually too few answers for what a better path would even look like.

hard to move forward when you don't know where you want to go. answers in the negative are insufficient, as are those that offer little more than nostalgia.

Comment by smallmancontrov 20 hours ago

It's interesting that the prosperity maximum of both the United States and China happened at "market economy kept in line with a firm hand" even though we approached it from different directions (left and right respectively) and in the US case reversed course.

We could use another Roosevelt.

Comment by stemlord 20 hours ago

people have been pretty clear about a positive path forward

- big tech should pay for the data they extract and sell back to us

- startups should stop forcing ai features that no one wants down our throats

- the vanguard of ai should be open and accessible to all not locked in the cloud behind paywalls

Comment by FridayoLeary 19 hours ago

But op is frankly absurd. It sounds reasonable for about 1 second before you think about it. What sets tech apart from every other area of human innovation? And why limit it to that? What about mineral exploitation? Oil etc.

It's just not a well thought out comment. If we focus on the "better path forward", the entrance to which is only unlocked by the realisation that big tech's achievements (and thus, profits) belong to humanity collectively... after we reach this enlightened state, what does op believe are the first couple of things a traveller on this path is likely to encounter (beyond Big Tech's money, which incidentally we take loads of already in the form of taxes, just maybe not enough)?

Comment by _DeadFred_ 18 hours ago

Tech is the most set apart area of innovation ever.

First you have tech's ability to scale. The ability to scale also has it creep new changes/behaviors into every aspect of our lives faster than any 'engine for change' could previous.

Tech also inherits, so you can treat it as Legos: what are we at, definitely tens, maybe hundreds of thousands of human-years of building blocks to build on top of? Imagine if you started every house with a hundred thousand human-years of labor already completed, instantly. No other domain in human history accumulates tens of millions of skilled human-years annually and allows so much of that work to stack, copy, and propagate at relatively low cost.

And tech's speed of iteration is insane. You can try something, measure it, change it, and redeploy in hours. Unprecedented experimentation on a mass scale leading to quicker evolution.

It's so disingenuous to have tech valuations as high as they are based on these differentiations but at the same time say 'tech is just like everything from the past and must not be treated differently, and it must be assumed outcomes from it are just like historical outcomes'. No it is a completely different beast, and the differences are becoming more pronounced as the above 10Xs over and over.

Comment by greesil 19 hours ago

Well practiced criticism of cynicism is boring

Comment by adolph 6 hours ago

> reaping the benefits of the intellectual wealth accumulated over many thousands of years

Do we not all stand on the shoulders of giants? Will "big next" not take up where "big tech" leaves off one day?

Comment by relaxing 19 hours ago

What should?

Comment by mrwaffle 19 hours ago

Is this technically a form of retroactive mind rape? If so, at least we have the right oligarchic friends experienced in this running the big show. (Apologies if I just broke any rules here).

Comment by mrwaffle 19 hours ago

This seems to be a touchy subject for YC people with 500+ karma. Not a repudiation but an 'invisible hand' downvote to avoid a response or exposure of an opinion. My ancestors fought in the revolutionary war and like them, I'll die on this very subtle rolling hill of a question. I loved you all as brothers, this may be the end for mrwaffle.

Comment by FridayoLeary 20 hours ago

Sounds like you just want some of their money.

Comment by triceratops 20 hours ago

Yes, especially since they're talking about wiping out most or all white-collar jobs in our lifetimes. What's wrong with that?

Comment by FridayoLeary 19 hours ago

Why drag your dead ancestors into the debate?

On that note they say oil is dead dinosaurs, maybe have a word with Saudi Arabia...

Comment by dekhn 19 hours ago

Oil comes from algae (and other tiny marine organisms) not dinosaurs.

Comment by triceratops 19 hours ago

Was this reply intended for a different comment? Or do I need more sleep?

Comment by blactuary 19 hours ago

If they want to abandon noblesse oblige we can certainly go back to the old way of evening things out. Their choice

Comment by mackeye 19 hours ago

some would say their money is our money via the LTV :-)

Comment by jaybyrd 21 hours ago

guys we're just trying to take jobs away from you.... please stop being mean to us - richest people on earth 2026

Comment by donkey_brains 19 hours ago

Today a manager at my work asked all his teams including mine “please write up a report on how many engineers from your teams we could replace with AI”.

Surprisingly, the answer he got was “none, because that’s not how AI works”.

Guess we’ll see if that registers…

Comment by MobiusHorizons 19 hours ago

I would love to have responded something like “only one: yours”

But in all seriousness, ai does a pretty good job at impersonating VPs. It’s confidently wrong and full of hope for the future.

Comment by mayhemducks 4 hours ago

You're absolutely right!

Comment by consumer451 18 hours ago

I use various agentic dev tools all day long, mostly with Opus. The tools are very capable now, but when planning mid-complexity features, I find the time estimates hilarious.

Phase 1: 1-2 weeks

Phase 2: 1 week

Phase 3: 2 weeks

8 to 12 hours later, all the work is done and tested.

The funny part to me was that if I had an AI true believer boss, I would report those time estimates directly, and have a lot of time to do other stuff.

Comment by ziml77 18 hours ago

Human time estimates are bad, but the ones that AI gives are just absurd. I've seen them used from small things like planning interviews and short presentations, all the way up to large scale projects. In no case do they make any sense to me. But I think people end up trusting them because they look so confident and well planned due to how the AIs break things down.

Comment by whattheheckheck 18 hours ago

When you're the boss telling kids how to work, what time estimates will you believe?

Tis the cycle

Comment by sublinear 19 hours ago

All of them because cost cutting is a red flag in business regardless of what year it is.

Comment by GolfPopper 21 hours ago

You forgot... "by stealing from artists and writers at scale".

Comment by jacquesm 21 hours ago

You forgot about 'open source contributors' and 'musicians'.

Comment by dylan604 20 hours ago

these two groups are used to having their stuff stolen way more than the groups GP listed, so in a way it's kind of appropriate that they were omitted.

Comment by myhf 4 hours ago

and don't forget "subtle backdoors disguised as regular example code"

Comment by soulofmischief 21 hours ago

As an open source contributor and musician who is not rich, I am pretty stoked about the engineering, scientific and mathematical advancements being made in my lifetime.

I have only become more creatively enabled when adopting these tools, and while I share the existential dread of becoming unemployable, I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.

I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.

Comment by overgard 19 hours ago

Well, if you consider Maslow's hierarchy of needs, "creatively enabled" would be a luxury at the top of the pyramid with "self actualization". Luxuries don't matter if the things at the bottom of the pyramid aren't there -- i.e. you can't eat or put a shelter over your head. I think the big AI players really need a coherent plan for this if they don't want a lot of mainstream and eventually legislative pushback. Not to mention it's bad business if nobody can afford to use AI because they're unemployed. (I'm not anti-AI, it's an interesting tool, but I think the way it's being developed is inviting a lot of danger for very marginal returns so far)

Comment by jacquesm 19 hours ago

> I think the big AI players really need a coherent plan for this if they don't want a lot of mainstream and eventually legislative pushback.

That's by far not the worst that could happen. There could very well be an axe attached to the pendulum when it swings back.

> Not to mention it's bad business if nobody can afford to use AI because they're unemployed.

In that sense this is the opposite of the Ford story: the value of your contribution to the process will approach zero so that you won't be able to afford the product of your work.

Comment by soulofmischief 15 hours ago

We were going to have to reckon with these problems eventually as science and technology inevitably progressed. The problem is the world is plunged in chaos at the moment and being faced with a technology that has the potential to completely and rapidly transform society really isn't helping.

Hatred of the technology itself is misplaced, and it is difficult sometimes debating these topics because anti-AI folk conflate many issues at once and expect you to have answers for all of them as if everyone working in the field is on the same agenda. We can defend and highlight the positives of the technology without condoning the negatives.

Comment by jacquesm 14 hours ago

> Hatred of the technology itself is misplaced

I think hatred is the wrong word. Concern is probably a better one and there are many things that are technology and that it is perfectly ok to be concerned about. If you're not somewhat concerned about AI then probably you have not yet thought about the possible futures that can stem from this particular invention and not all of those are good. See also: Atomic bombs, the machine gun, and the invention of gunpowder, each of which I'm sure may have some kind of contrived positive angle but whose net contribution to the world we live in was not necessarily a positive one. And I can see quite a few ways in which AI could very well be worse than all of those combined (as well as some ways in which it could be better, but for that to be the case humanity would first have to grow up a lot).

Comment by soulofmischief 13 hours ago

I'm extremely concerned about the implications. We are going to have to restructure a lot of things about society and the software we use.

And like anything else, it will be a tool in the elite's toolbox of oppression. But it will also be a tool in the hands of the people, unless anti-AI sentiment gets compromised and redirected into support for limiting access to capable generative models to the State and research facilities.

The hate I am referring to is often more ideological, about the usage of these models from a purity standpoint. That only bad engineers use them, or that their utility is completely overblown, etc. etc.

Comment by jacquesm 11 hours ago

> We are going to have to restructure a lot of things about society and the software we use.

There is a massive assumption there: that society as such will survive.

Comment by soulofmischief 9 hours ago

Just an unvoiced caveat. It's entirely possible that society won't survive the next century for a growing number of reasons.

Comment by jacquesm 9 hours ago

Indeed, so why would we roll the dice on even more of those reasons? We could play it safe for a change.

Comment by soulofmischief 1 hour ago

It's just bad timing, but the ball is already rolling downhill, the cat's already out of the bag, etc. Best we can do at the moment is fight for open research and access.

Comment by soulofmischief 15 hours ago

You can be poor and creative at the same time. Creativity is not a luxury. For many, including myself, it's a means of survival. Creating gives me purpose and connection to the world around me.

I grew up very poor and was homeless as a teenager and in my early 20s. I still studied and practiced engineering and machine learning then, I still made art, and I do it now. The fact that Big Tech is the new Big Oil is beside the point. Plenty of companies are using open training sets and producing open, permissively licensed models.

Comment by johnnyanmac 20 hours ago

> I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.

I'm not really a fan of the "you criticize society yet you participate in it" argument.

>I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.

You seem to forget the blood shed over the history that allowed that tech to benefit the people over just the robber barons. Unimaginable amounts of people died just so we could get a 5 day workweek and minimum wage.

We don't get a beneficial future by just laying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.

Comment by Wyverald 19 hours ago

>> I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.

> I'm not really a fan of the "you criticize society yet you participate in it" argument.

It seems to me that GP is merely recognizing the parts of technological advance that they do find enjoyable. That's rather far from the "I am very intelligent" comic you're referencing.

> The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.

Just noting that GP simply voiced their opinion, which IMHO does not constitute "impedance" of those trying to fight for those futures.

Comment by johnnyanmac 19 hours ago

>GP is merely recognizing the parts of technological advance that they do find enjoyable.

Machine fabrication is nice. Machine fabrication from sweatshop children in another country is not enjoyable. That's the exact nuance missing from their comment.

>GP simply voiced their opinion, which IMHO does not constitute "impedance" of those trying to fight for those futures.

I'd hope we'd understand since 2024 that we're in an attention society, and this is a very common tactic used to disenfranchise people from engaging in action against what they find unfair. Enforcing a feeling of inevitability is but one of many methods.

Intentionally or not, language like this does impede the efforts.

Comment by soulofmischief 15 hours ago

> I'm not really a fan of the "you criticize society yet you participate in it" argument.

Me neither, and I didn't make such an argument.

> You seem to forget the blood shed over the history that allowed that tech to benefit the people rather than just the robber barons. Unimaginable numbers of people died just so we could get a 5-day workweek and a minimum wage.

What does that have to do with my argument? What about my argument suggested ignorance of this fact? This is just another straw man.

> We don't get a beneficial future by just laying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.

What an incredible characterization. Nothing about my argument is "laying down", perhaps it seems that way because you do not share my ideals, but I fight for my ideals, I debate them in public as I do now, and that is the furthest thing from "laying down" and "not fighting myself". You seem to be projecting several assumptions about my politics and historical knowledge. Did you have a point to make or was this just a bunch of wanking?

Comment by johnnyanmac 12 hours ago

>Did you have a point to make or was this just a bunch of wanking?

The way you "debate" is exactly a patt of the problem. You come in deflecting and attacking instead of understanding and clarifying.

You didn't even give me a point to respond to that doesn't go way off topic. If you can't see it then I hope someone with more patience can help you out. But there's no point having a conversation with someone who approaches it like this (against site guidelines). Good luck out there.

Comment by soulofmischief 9 hours ago

1. I never asked to "debate" with you.

2. You decided to leave a comment on one of my comments, except it contained multiple straw man arguments and a gross mischaracterization, arguing in bad faith from the start.

3. I pointed out the issues with your comment, and asked if you had an actual point to make, or if it was just straw man arguments. I had no intention of continuing a discussion with you and do not need to provide you "points to respond to". I agree, when I have to stop and explain why your argument has issues, it derails the thread. That's the problem with straw man arguments.

4. It is your problem if you take issue with criticism and think it means I need to be "helped out" by "someone with more patience". That's quite a condescending response.

I can see that you didn't have a point to make, and instead elected to just take a bunch of negative jabs at me as retaliation for pointing out the errors in your argument.

So yes, let's end the conversation. It's quite ridiculous.

Comment by nozzlegear 20 hours ago

As an open source maintainer, I'm not stoked and I feel pretty much the opposite way. I've only become more annoyed when trying to adopt these tools, and felt more creative and more enabled by reducing their usage and going back to writing code by hand the old fashioned way. AI's only been useful to me as a commit message writer and a rubber duck.

> I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.

This seems overly optimistic, but also quite dystopian. I hope that society doesn't become as integrated with these shitty AIs as we are with other technologies.

Comment by soulofmischief 15 hours ago

There is a way for us to both get what we want out of software development without ideologically crusading against each other's ideals. We can each have these valid opinions about how generative technology personally integrates into our lives.

Of course, that might be less and less true about our work as time goes on. At some point in the future, hiring an engineer who refuses to use generative coding tools will be the equivalent of hiring someone today who refuses to use an IDE or even a tricked out emacs/vim and just programs everything in Notepad. That's cool if they enjoy it, but it's unproductive in an increasingly competitive industry.

Comment by nozzlegear 14 hours ago

Perhaps so, but again I find your vision of the future overly optimistic. Luckily I'm self employed and don't have to worry about AI usage quotas and "being unproductive" in an increasingly unproductive and non-deterministic industry.

Comment by callc 20 hours ago

You can say the same thing as we invented the atomic bomb.

Cool science and engineering, no doubt.

Not paying any attention to societal effects is not cool.

Plus, presenting things as inevitabilities is just plain confidently trying to predict the future. Anyone can say "I understand one day this era will be history and X will have happened." Nobody knows how the future will play out. Anyone who says they do is a liar. If they actually knew, they'd go ahead and bet all their savings on it.

Comment by peyton 20 hours ago

I dunno, I take a more McLuhan-esque view. We’re not here to save the world every single time repeatedly.

Comment by soulofmischief 15 hours ago

I do say the same thing about the bomb. It was very cool science and engineering. I've studied many of the scientists behind the Manhattan Project, and the work that got us there.

That doesn't mean I also must condone our use of the bomb, or condone US imperialism. I recognize the inevitability of atomic science; unless you halt all scientific progress forever under threat of violence, it is inevitable that a society will have to reckon with atomic science and its implications. It's still fascinating, dude. It's literally physics, it's nature, it's humbling and awesome and fearsome and invaluable all at the same time.

> Not paying any attention to societal effects is not cool.

This fails to properly contextualize the historical facts. The Nazis and Soviets were also racing to create an atomic bomb, and the world was in a crisis. Again, this isn't ignorant of US imperialism before, during or after the war and creation of the bomb. But it's important to properly contextualize history.

> Plus, presenting things as inevitabilities is just plain confidently trying to predict the future.

That's like trying to admonish someone for watching the Wright Brothers continually iterate on aviation, witnessing prototype heavier-than-air aircraft flying, and suggesting that one day flight will be an inevitable part of society.

The steady march of automation is an inevitability my friend, it's a universal fact stemming from entropy, and it's a fallacy to assume that anything presented as an inevitability is automatically a bad prediction. You can make claims about the limits of technology, but even if today's frontier models stop improving, we've already crossed a threshold.

> Anyone who says they do is a liar.

That's like calling me a liar for claiming that the sun will rise tomorrow. You're right; maybe it won't! Of course, we will have much, much bigger problems at that point. But any rational person would take my bet.

Comment by blibble 20 hours ago

> I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies

I'd rather be dead than a cortex reaver[1]

(and I suspect, as I'm not a billionaire, the billionaire-owned killbots will make sure of that)

[1]: https://www.youtube.com/watch?v=1egtkzqZ_XA

Comment by TheDong 19 hours ago

You're saying "musicians" aren't "artists", and "open source contributors" aren't artists _or_ writers? Artists covers both of the groups you said.

Comment by jacquesm 19 hours ago

Yes, we're all artists. Good now?

Comment by Mars008 18 hours ago

The picture will be incomplete if we don't mention that those 'artists and writers' are using the results at scale.

Comment by malfist 21 hours ago

Techbros trying to replace wage theft as the largest $ crime in the US

Comment by tsunamifury 19 hours ago

Something something… great artist steal.

Comment by jaybyrd 21 hours ago

well if all the talent is stolen and put into our water destruction machine we can make significantly worse and more expensive versions of just giving the job to a wagey

Comment by logicprog 19 hours ago

Comment by goalieca 19 hours ago

That article was clearly AI generated. I read pages of it and still didn't see any actual data. Just different phrasings of that claim.

Comment by logicprog 18 hours ago

What are you talking about? He goes into plenty of data, domain-relevant definitions, specific cases, etc. He links to reliable sources for every numerical claim, of which there are several per paragraph, shows graphs and pictures, and does a lot of math (all of which I manually checked myself on paper as I went through). Also, the writing style is very much not ChatGPT-like, especially with all of the very honest corrections and edits he's added over time, which an AI slop purveyor wouldn't do.

The deep analysis starts at this section: https://andymasley.substack.com/p/the-ai-water-issue-is-fake...

You can't just dismiss anything you don't like as AI.

Comment by pesus 21 hours ago

On one hand, we're actively destroying society, but on the other, billionaires are getting richer! Why are you mad at us!?

Comment by Sharlin 20 hours ago

Something something for a brief moment we created a lot of value for the shareholders

Comment by Joel_Mckay 18 hours ago

With -$4.50 revenue per new customer, these gamblers are demonstrably creating an externalized debt for society when it inevitably implodes the market.

Some are projecting >35% drop in the entire index when reality hits the "magnificent" 7. Look at the price of Gold, corporate cash flows, and the US Bonds laggard performance. That isn't normal by any definition. =3

Comment by snowwrestler 19 hours ago

> As someone who desperately needs this technology to work out, I can honestly say it is the most essential tool ever created in all of human history.

For those having trouble finding the humor, it lies in the vast gulf between grand assertions that LLMs will fundamentally transform every aspect of human life, and plaintive requests to stop saying mean things about it.

As a contrast: truly successful products obviate complaints. Success speaks for itself. In TV, software, e-commerce, statins, ED pills, modern smartphones, social media, etc… winning products went into the black quickly and made their companies shitloads of money (profits). No need to adjust vibes, they could just flip everyone the bird from atop their mountains of cash. (Which can also be pretty funny.)

There are mountains of cash in LLMs today too, but so far they’re mostly on the investment side of the ledger. And industry-wide nervousness about that is pretty easy to discern. Like the loud guy with a nervous smile and a drop of sweat on his brow.

https://youtu.be/wni4_n-Cmj4

So much of the current discourse around AI is the tech-builders begging the rest of the world to find a commercially valuable application. Like the AgentForce commercials that have to stoop to showing Matthew McConaughey suffering the stupidest problems imaginable. Or the OpenAI CFO saying maybe they’ll make money by taking a cut of valuable things their customers come up with. “Maybe someone else will change the world with this, if you’ll all just chill out” is a funny thing to say repeatedly while also asking for $billions and regulatory forbearance.

Comment by datsci_est_2015 18 hours ago

> As a contrast: truly successful products obviate complaints. Success speaks for itself.

Makes me consider: Dotcom domains, Bitcoin, Blockchain, NFTs, the metaverse, generative AI…

Varying degrees of utility. But the common thread is people absolutely begging you to buy in, preying on FOMO.

Comment by twoodfin 18 hours ago

Or maybe McSweeney’s hasn’t been consistently funny for years and years?

Comment by snowwrestler 18 hours ago

McSweeney’s was never consistently funny. This is a good piece though.

Comment by Gene5ive 20 hours ago

Up Next: A McSweeney's article where McSweeney's takes the debates about it on Hacker News as seriously as Hacker News takes McSweeney's: way too much

Comment by selimthegrim 18 hours ago

This has the potential to be another /g/ ITT we HN now

Comment by i_love_retros 19 hours ago

Today I asked copilot agent a question about a selector in a cypress test and it requested to run a python command in my terminal.

Comment by Brajeshwar 18 hours ago

We, humans, will read this and laugh, chuckle, but the AI Overlords will not understand that. This will be added to the training data and become a truth. But what if it is?

Comment by gradus_ad 19 hours ago

Jensen needs to keep escalating the hype to keep the hoarding dynamics in play, because that's what's selling GPUs. You can't look at voracious GPU demand as a real signal of AI app profitability or general demand. It's a function of global tech oligarchs with gargantuan cash hoards not wanting to be left behind. But hoarding dynamics are nonlinear through self-reinforcement, and the moment any hint of the limitations of current-gen AI crops up, spend will collapse.

Comment by ares623 11 hours ago

None of them can stop. None of them can blink. They must keep going.

If Jensen even as much as _plans_ for something other than AI, it will cause everyone else to doubt.

Comment by stego-tech 19 hours ago

Excellent satire, absolutely something I could see in The Onion or Hard Drive as an Op-Ed.

Comment by olivierestsage 19 hours ago

Powerful catharsis in this

Comment by hedayet 18 hours ago

The same people selling you AI today (AGI tomorrow) were the ones selling remote work yesterday. Then "mandated" everyone back to the office.

Oh, and most of them had a crypto bag too.

<sigh>

Comment by Joel_Mckay 18 hours ago

Most cons can't create actual value, and inevitably must continue to con to survive. It would be called recidivism if they went to prison. =3

Comment by twochillin 19 hours ago

fully expected this to be about nadella

Comment by willturman 18 hours ago

It is.

Comment by porkloin 21 hours ago

I hate LLMs as much as the next guy, but this was honestly just not very funny. Humor can be a great vehicle for criticism when it's done right, but this feels like clickbait-level lazy writing. I wouldn't criticize it anywhere else, but I have enjoyed reading a bunch of actually good writing from mcsweeney's over the years in the actual literary journal and on their website.

Comment by Froztnova 21 hours ago

It's that brand of humor that isn't really humor anymore because the person writing it is clearly positively seething behind the keyboard and considers the whole affair to be deadly serious.

I've never really been able to get into it either because it's sort of a paradox. If I agree, I feel bad enough about the actual issue that I'm not really in the mood to laugh, and if I disagree then I obviously won't like the joke anyways.

Comment by porkloin 21 hours ago

For me I guess I don't really see what it's adding. You can watch an actual video clip of Jensen begging people not to "bully" or say "hurtful" things about AI while wearing a stupid leather jacket. It's a million times funnier to watch him squirm in real life.

I find it unfunny for the same reason I don't find modern SNL intro bits about Trump funny. The source material is already insane to the point that it makes surface-level satire like this feel pointless.

Comment by Brian_K_White 20 hours ago

[flagged]

Comment by ares623 15 hours ago

You’re not the target audience then. It’s for those who can’t shake the feeling that something doesn’t feel quite right about the whole thing.

Comment by madeofpalk 21 hours ago

I think you just don’t like McSweeney’s style.

Comment by johnnyanmac 20 hours ago

Like it or not, we're in an attention economy. We've seen that if we aren't loud and brash about it, the administration will happily be loud (and sometimes lie) to push their narrative.

Maybe if we ever return to normal times and also don't let the other 90% of corruption stay where it's been for the past 40 years we can start to ease off the noise.

Comment by jaybyrd 21 hours ago

i think its a little on the nose but overall def worth reading and funny enough for a chuckle in my opinion

Comment by heliumtera 21 hours ago

Agreed, it's almost non satire given how cynical it is. I loved it.

Comment by b00ty4breakfast 21 hours ago

[flagged]

Comment by vivzkestrel 18 hours ago

- can we please get an article like this dedicated to windows 11?

Comment by rednafi 20 hours ago

"Oh, it's another tool in your repertoire like Bash" doesn't garner billions of dollars in investment. So they have to address it as the next electricity or the internet, when in its current form, it's much closer to a crypto grift than it is to electricity.

Comment by random_duck 21 hours ago

Is this a sign that we plebs are starting to grow discontent?

Comment by blibble 21 hours ago

it's certainly a change from the "inevitability" vomit the boosters were emitting this time last year

Comment by techblueberry 18 hours ago

Oh, I mean they’re still doing that too:

https://www.darioamodei.com/essay/the-adolescence-of-technol...

Comment by blibble 18 hours ago

oh dear

the whole thing reads as "it's going to be so powerful! give money now!"

Comment by heliumtera 21 hours ago

Starting? Society minus those who struggled with css is fully fatigued of AI.

Comment by lifetimerubyist 19 hours ago

Gotta go back to shoving these nerds into lockers.

Comment by akomtu 19 hours ago

AI is alien intelligence, really. If biotech created an unusual mold that responds to electric impulses the way LLMs do, we would rightfully declare that this mold has some sort of intelligence and for this reason it is, technically speaking, an alien lifeform. AI is just that intelligent mold, but based on transistors instead of organic cells. Needless to say, it's a bad idea to create a competing lifeform that's smarter than us, regardless of whatever flimsy benefits it might have.

Comment by 20260126032624 18 hours ago

Hey, I just wanted to say, big fan of your work on vixra.org

Comment by Joel_Mckay 17 hours ago

LLMs are not real AI; they would take 75% of our galaxy's energy to reach human-level error rates, and are economically a fiction... but they don't have to be "AGI" to cause real harm.

https://en.wikipedia.org/wiki/Competitive_exclusion_principl...

The damage is already clear =3

https://www.youtube.com/watch?v=TYNHYIX11Pc

https://www.youtube.com/watch?v=yftBiNu0ZNU

https://www.youtube.com/watch?v=t-8TDOFqkQA

Comment by kindawinda 18 hours ago

dumbass article

Comment by theLegionWithin 21 hours ago

nice satire

Comment by notepad0x90 17 hours ago

> . Yes, it’s expanding the surveillance state, and yes, it’s destroying the education system, and yes, it’s being trained on copyrighted work without permission, and yes, it’s being used to create lethal autonomous weapons systems that can identify, target, and kill without human input, but… I forget my point, but ultimately, I think you should embrace it.

What's your answer to this? How did it turn out for nuclear energy? If it wasn't for this sort of thinking we'd have nuclear power all over the world and climate issues would not have been as bad.

You should embrace it, because other countries will and yours will be left behind if you don't. That doesn't mean putting up with "slop", but it also doesn't mean being hostile to anything labeled "AI". The tech is real, it is extremely valuable (I applaud your mental gymnastics if you think otherwise), but not as valuable as these CEOs want it to be, or in the way they want it to be.

On one hand you have clueless executives and randos trying to slap "AI" on everything and creating a mess. On the other extreme you have people who reject things just because they have auto-complete (LLMs :) ) as one of their features. You're both wrong.

What Jensen Huang and other CEOs like Satya Nadella are saying about this mindless bandwagoning of "oh no, AI slop!!!" b.s. is true, but I think even they are too caught up in tech circles. Regular people, for the most part, don't feel this way; they only care about what the tool can do, not how it's doing it. But... people in tech largely influence how regular people are educated, informed, etc.

Look at the internet: how many "slop" sites were there early on? How much did it get dismissed because "all the internet is good for is <slop>"?

Forget everything else, just having an actual program.. that I can use for free/cheap.. on my computer.. that can do natural language processing well!!! that's insane!! Even in some of the sci-fi I've been rewatching in recent years, the "AI/Computer" in spaceships or whatever is nowhere near as good as chatgpt is today in terms of understanding what humans are saying.

I'm just calling for a bit of perspective on things. Some are too close to things and looking under the hood too much; others are too far away and looking at it from a distance. The AI stock valuation is of course ridiculous, as are the overhyped investments in this area and the datacenter buildout madness. And like I said, there are tons of terrible attempts at using this tech (including Windows Copilot), but the extreme hostility against AI I'm seeing is also concerning, and not because I care about this awesome tech (which I do), but you know... the job market is rough and everything is already crappy. I don't want to go through an AI market crash or whatever on top of other things, so I would really appreciate it on a personal level if the cause of any AI crash is meritocratic instead of hype and bandwagoning, that's all.

Comment by ares623 16 hours ago

I wasn’t around at the time to argue against nuclear energy.

I wasn’t old enough to argue against the internet. Plus to be fair to the ones who were, there was no prior tech that was anything like it to even make realistic guesses into what it would turn out to.

I wasn’t old enough to argue against social media and the surveillance it brought.

Now AI comes along. And I am old enough. And I am experienced enough in a similar space. And I have seen what similar technology have done and brought. And I have taken all that and my conscience and instinct tells me that AI is not a net good.

Previous generations have failed us. But we make do with the world we find ourselves born into.

I find it absurd that experienced engineers today look at AI and believe it will make their children’s lives better, when very recent history, history they themselves lived through, tells a very different story.

All so they can open 20 PRs per day for their employers.

Comment by notepad0x90 8 hours ago

Whether you were around or not is irrelevant, I wasn't around for some of that either. I brought it up so we can learn from the past instead of repeat those mistakes.

> , there was no prior tech that was anything like it to even make realistic guesses into what it would turn out to.

Same with LLMs.

> AI is not a net good.

You're falling into the same trap as previous generations when you do that. You won't actually end up fixing or improving the negative impacts of AI and your country/society will lose out big time in all sorts of ways.

Tech doesn't make things bad; people do, for the most part. Where AI is abused, it needs legislation, not resistance, and you should know it is a LOT more nuanced than that. How is an LLM language translator for tourists being lumped into the same bucket as LLMs being used to target people for assassination? Your lack of nuance is laziness, and no political or ideological stand can justify that laziness.

Nuclear energy had a lot of negatives, and people made the same types of arguments and outright banned it; next time you complain about climate change, consider that your way of thinking might be part of the problem. Right now datacenter buildouts are contributing to water scarcity, for example, so instead of doing the hard and nuanced work of actually regulating and fixing that, you oppose AI entirely. You do the easy thing, but in the end we live in the real world and supply/demand economics rules, so your resistance is only performative at best, or catastrophic to the economy at worst. That last part isn't just about billionaires, and it isn't just about job markets: when the economy goes, all the climate change talk goes with it, all the EVs and green energy initiatives go, wars and crises increase, disease outbreaks increase. Doing the easy thing leads to this is my point, not that you need AI to prevent those.

> I find it absurd that experienced engineers today look at AI and believe it will make their children’s lives better, when very recent history, history they themselves lived through, tells a very different story.

100x it would! Although whose children it benefits depends on who regulates it first. My bet is China and the EU will regulate the crap out of it and extract the most value for themselves and future generations. AI is just a solution, a tool; it isn't magic, as you very well know. Companies have been using ML for surveillance for a long time. The US government was using pattern-of-life analysis ML ten years ago to pick out assassination targets in Afghanistan and Pakistan. You have a fundamental lack of laws and a broken system of governance; don't take that out on tech.

Comment by irishcoffee 21 hours ago

It is highly amusing to me that the same ~2,000 people who have the most to gain from LLM success also largely control the media narratives and the vast majority of the global economy.

Someone coined a term for those of the general population who trust this small group of billionaires and defend their technology.

“Dumb fucks”

Comment by lovich 20 hours ago

The Luddites weren’t anti technological progress, they were anti losing their job and entire way of life with an impolite “get fucked you fucking peasant” message to boot.

I wonder what name the tech bros will come up to call us for the same feeling nowadays.

Comment by yoyohello13 18 hours ago

They don’t need a new name. They just keep using Luddite.

Comment by khana 21 hours ago

[dead]

Comment by trhway 21 hours ago

[flagged]

Comment by zahlman 20 hours ago

McSweeney's is a well known Internet satire site that has been in operation for decades; while there are multiple contributors, the style here seems fairly standard for the site, the author has a submission history going back to at least 2020 and I see no LLM cliches. Suspecting AI here makes about as much sense to me as suspecting it on an arbitrarily selected LWN article.

Comment by kshri24 20 hours ago

> just use my evil technology

Ridiculous to say the technology, by itself, is evil somehow. It is not. It is just math at the end of the day. Yes you can question the moral/societal implications of said technology (if used in a negative way) but that does not make the technology itself evil.

For example, I hate vibe coding with a passion because it enables wrong usage (IMHO) of AI. I hate how easy it has become to scam people using AI. How easy it is to create disinformation with AI. Hate how violence/corruption etc could be enabled by using AI tools. Does not mean I hate the tech itself. The tech is really cool. You can use the tech for doing good as much as you can use it for destroying society (or at the very minimum enabling and spreading brainrot). You choose the path you want to tread.

Just do enough good that it dwarfs the evil uses of this awesome technology.

Comment by budududuroiu 20 hours ago

Well, at this moment, the evil things done with technology vastly surpass the good things done with technology.

Democratisation of tech has allowed for more good to happen; centralisation, the opposite. AI is probably one of the most centralisation-happy technologies we've had in ages.

Comment by pixl97 19 hours ago

Centralization of technology has been happening at a rapid pace, and is only a tiny bit the fault of technology itself.

Capitalism demands profits. Competition is bad for profits. Multiple factories are bad for profits. Multiple standards are bad for profits. Expensive workers are bad for profits.

Comment by mrnaught 19 hours ago

“Just do enough good...” But it is hard to define what is "good". This tech has many dimensions and second-order effects, yet all the tech giants claim it's a “net positive” without fully understanding what is unfolding.

Comment by wk_end 20 hours ago

> It is just math at the end of the day.

Not really - it's math, plus a bazillion jigabytes of data to train that math, plus system prompts to guide that math, plus data centers to do that math, plus nice user interfaces and APIs to interface with that math, plus...

Anyway, it's just a meaninglessly reductive thing to say. What is the atom bomb? It's just physics at the end of the day. Physics can wreak havoc on the world; so can math.

Comment by johnnyanmac 20 hours ago

>Nothing either good nor bad but thinking makes it so - Shakespeare

That said, their thinking is that this can remove labor from their production, all while stealing works under the very copyright regime they set up. So I'd call that "evil" in every conventional sense.

>Just do enough good that it dwarfs the evil uses of this awesome technology.

The evil is in the root of the training, though. And sadly money is not coming from "good". I don't see any models focusing on ensuring it trains only on CC0/FOSS works, so it's hard to argue of any good uses with evil roots.

If they could do that at the bare minimum, maybe they can make the argument over "horses vs cars". As it is now, this is a car powered by stolen horses. (also I work in games, and generative AI is simply trash in quality right now).

Comment by pixl97 19 hours ago

Even this has little to do with AI and points right at the capitalist society that already exists. HN really doesn't like to talk about its golden child that lets money flow, but the concentration of wealth and IP by the super wealthy occurred before GenAI was a thing.

This also ignores the broken fucking copyright system that ensures once you create something you get many lifetimes of fucking off without having to work, so if genAI kills that I won't shed a tear.

Comment by robinhoode 19 hours ago

If we lived in a sane society, AI would actually be used for good.

AI is literally trained on human output and used by humans. If humans are doing awful things with it, then it's because humans are awful right now.

I strongly feel this is related to the rise of fascism and wealth inequality.

We need a great conflict like WW2 to release this tension.

Comment by gip 20 hours ago

> "immoral technofascist life"

Many people would rather argue about morality and conscience (of our time, of our society) instead of confronting facts and reality. What we see here is a textbook case of that.

Comment by tdb7893 20 hours ago

Is there a reason you view conscience and confronting facts as opposed things? It also seems to me that morality and conscience are important to argue about, with facts just being part of that argument.

Comment by SpicyLemonZest 20 hours ago

I think that someone interested in discussing facts would not write the phrase "immoral technofascist life". If I took the discussion at face value, I might respond asking for examples of how e.g. Dario Amodei is a "technofascist", but I think we can agree that would be really obtuse of me.

Comment by tdb7893 18 hours ago

Haha, my experience is that people making those sorts of pronouncements will argue literally anything, so I definitely wouldn't assume they are uninterested in arguing facts. I do agree that arguing with some people is obtuse, and arguing with the original post seems like one of those cases.

My confusion is more with the person I was responding to complaining about people arguing morality, which seems incredibly important to discuss. A lack of facts obviously makes discussions bad, but there's definitely no dichotomy with discussing morality (at least not with the people I know; my issue has rarely been with people arguing morality, which often makes for my more productive arguments, and more with people who have a fundamentally incompatible view of what the facts are).

Comment by technofastest 19 hours ago

[flagged]

Comment by datsci_est_2015 18 hours ago

No see “facts” are what I use to support my worldview, and what you’ve supplied are arguments, and I can discard your arguments through debate, especially because I believe that they’re founded on your feelings (like a silly “conscience”).

Comment by datsci_est_2015 9 hours ago

/s if it wasn’t obvious.

When I see the word “facts” used like this, I feel there’s a parallel to the way the word “respect” is used abusively, as outlined in this Tumblr post that has stuck with me for years:

https://soycrates.tumblr.com/post/115633137923/stimmyabby-so...

> Sometimes people use “respect” to mean “treating someone like a person” and sometimes they use “respect” to mean “treating someone like an authority”

> and sometimes people who are used to being treated like an authority say “if you won’t respect me I won’t respect you” and they mean “if you won’t treat me like an authority I won’t treat you like a person”

> and they think they’re being fair but they aren’t, and it’s not okay.

The word “facts” can be used abusively, as in “my facts prove my worldview; your ‘facts’ are arguments based on emotion.”

Comment by socialcommenter 17 hours ago

It's much easier for someone who blurs the facts to keep a clear conscience because they don't have to acknowledge (to themselves) what they've done.

Someone who's clear-eyed about the facts is much more likely to have a guilty conscience/think someone's actions are unconscionable.

I don't mean to argue either side in this discussion, but both sides might be ignoring the facts here.

Comment by johnnyanmac 20 hours ago

> instead of confronting facts and reality.

okay, what are the "facts and reality" here? If you're just going to say "AI is here to stay", then you 1) aren't dealing with the core issues people bring up, and 2) aren't bringing facts but defeatism. Where would we be if we had used that logic for, say, Flash?

Comment by mattgreenrocks 19 hours ago

It’s wild to me that we both see people like Jensen as great while also tolerating public whining of the sort in the linked article. Don’t get me wrong, there are people who are far worse! But why do we put up with a billionaire whining that people are critical of what they make? At that scale it is guaranteed to have haters. It’s just statistics, man.

Comment by daft_pink 21 hours ago

Maybe he shouldn’t have claimed we could get in a moving vehicle with his AI driving, no problem

Comment by Lerc 21 hours ago

Perhaps things would work out better if people didn't say mean things regardless of who it's about.

You can still criticise without being mean.

Comment by donkey_brains 20 hours ago

Woosh

Comment by thinkingtoilet 21 hours ago

Explain how to nicely criticize computer software that allows for the generation of sexually explicit images of children.

Comment by Lerc 20 hours ago

I'm not sure what you are wanting here. Are you actually requiring me to be a bully to effect change?

I can certainly criticize specific things respectfully. If I prioritised demonstrating my moral superiority I could loudly make all sorts of disingenuous claims that won't make the world a better place.

I certainly do not think people should be making exploitative images in Photoshop or indeed any other software.

I do not think that I should be able to choose which software those rules apply to based upon my own prejudice. I also do not think that being able to do bad things with something is sufficient to negate every good thing that can be done with it.

Countless people have been harmed by the influence of religious texts, I do not advocate for those to be banned, and I do not demand the vilification of people who follow those texts.

Even though I think some books can be harmful, I do not propose attacking people who make printing presses.

What exactly are you requiring here? Pitchforks and torches? Why AI and not the other software that can be used for the same purposes?

If you want robust regulation that can provide a means to protect people from how models are used then I am totally prepared (and have made submissions to that effect) to work towards that goal. Being antagonistic works against making things better. Crude generalisations convince no-one. I want the world to be better, I will work towards that. I just don't understand how anyone could believe vitriolic behaviour will result in anything good.

Comment by chasd00 19 hours ago

Photoshop has been around for a long time.

Comment by paodealho 18 hours ago

And canvases and paint have existed for even longer, but it needs someone skilled to make use of it.

Stable Diffusion enabled the average lazy, depraved person to create these images with zero effort, and apparently there are a lot of these people in the world.

Comment by bigstrat2003 18 hours ago

So? At the end of the day, regardless of how skilled one has to be to use it, a tool is not considered morally responsible for how it is used. Nor is the maker of that tool considered morally responsible for how it is used, except in the rare case where the tool only has immoral uses. And that isn't the case here.
