Please don't say mean things about the AI I just invested a billion dollars in
Posted by randycupertino 22 hours ago
Comments
Comment by seizethecheese 21 hours ago
The article is certainly interesting as yet another indicator of the backlash against AI, but I must say, “exists to scam the elderly” is totally absurd. I get that this is satire, but satire has to have some basis in truth.
I say this as someone whose father was scammed out of a lot of money, so I’m certainly not numb to the potential consequences. The scams were enabled by the internet; does the internet exist for that purpose? Of course not.
Comment by muvlon 21 hours ago
And I think I'm inclined to agree. There are a small number of things that have gotten better due to AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet, by comparison, feels like a clear net positive to me, even with all the bad it enables.
Comment by pixl97 20 hours ago
This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks and they may not be immediately evident. I know with the information superhighway few expected it to turn into a dopamine drip feed for advertising dollars, yet here we are.
Comment by ethbr1 17 hours ago
You'd think we would have learned this lesson when we failed to implement email charges that netted to $0 for balanced send/receive patterns, and thereby ushered in a couple decades of spam, only eventually solved by centralization (Google).
Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.
Comment by mogsor 11 hours ago
Comment by vrighter 4 hours ago
Comment by pixl97 6 hours ago
>Grok doesn't generate nude pictures of women because it wants to,
I don't generate chunks of code because I want to. I do it because that's how I get paid and like to eat.
What's interesting with LLMs is that they are more like human behaviors than any other software. First, you can't tell non-AI (not just non-genAI) software to generate a picture of a naked woman; it doesn't have that capability. Then you have models that are trained on content such as naked people. That's something humans are trained on too, unless we're blind, I guess. If you take a data set encompassing all human behaviors, which we do, then the model will have human-like behaviors.
It's in post-training that we add instructions to the contrary. Much like if you live in America, you're taught that seeing naked people is worse than murdering someone, and that if someone creates a naked picture of you, your soul has been stolen. With those cultural biases programmed into you, you will find it hard to do things like paint a picture of a naked person as art. This would be OpenAI's models. And if you're a person who wanted to rebel, or lived in a culture that accepted nudity, then you wouldn't have a problem with it.
How many things do you do because society programmed you that way, and you're unable to think outside that programming?
Comment by n8cpdx 19 hours ago
Comment by abustamam 15 hours ago
Comment by apublicfrog 2 hours ago
Comment by sunaookami 12 hours ago
Comment by mogsor 11 hours ago
Comment by collingreen 4 hours ago
Am I misunderstanding you or are you somehow saying anything done in the past is fine to do more of?
Comment by yifanl 2 hours ago
Comment by notanastronaut 6 hours ago
When I think of the internet, I think of malware, porn, social media manipulating people, flame wars, "influencers", and more.
It is also used to scam the elderly, share photoshopped sexually explicit pictures of men, women, and children without their consent, steal all kinds of copyrighted material, and definitely suck the joy out of everything. Revenge porn wasn't started in 2023 with OpenAI. And just look at Meta's current case about Instagram being addicting and harmful to children. If "AI" is a tech carcinogen, then the internet is a nuclear reactor, spewing radioactive material every which way. But hey, it keeps the lights on! Clearly, a net positive.
Let's just be intellectually consistent, that's all I'm saying.
Comment by taurath 20 hours ago
Do you think that it isn't used for this? The satire part is expanding that use case to say it exists purely for that purpose.
Comment by panda-giddiness 11 hours ago
Regardless, LLMs are already being abused to mass produce spam, and some of that spam has almost certainly been employed to separate the elderly from their savings, so there's nothing particularly implausible about the satirical product, either.
Comment by tim333 13 hours ago
>It's the jobs and employment. Nobody's going to be able to work again. It's God AI is going to solve every problem. It's we shouldn't have open source for XYZ... https://youtu.be/k-xtmISBCNE?t=1436
and he says an "end of the world science fiction narrative" is hurtful.
Comment by ajkjk 20 hours ago
Comment by jychang 20 hours ago
Those poles WERE NOT invented for strippers/pole dancers. Ditto for the Hitachis. Even now, I'm pretty sure more firemen use the poles than strippers do. But that doesn't stop the association from forming. And it doesn't make me not feel a certain way if I see a stripper pole or a Hitachi Magic Wand in your living room.
Comment by pluralmonad 20 hours ago
Comment by ajkjk 16 hours ago
Comment by wizardforhire 20 hours ago
[1] https://thefactbase.com/the-vibrator-was-invented-in-1869-to...
[2] https://archive.nytimes.com/www.nytimes.com/books/first/m/ma...
Comment by irishcoffee 15 hours ago
Comment by blibble 19 hours ago
(also: what city? for a friend...)
Comment by anonymars 20 hours ago
What is it that isn't being done here, and who isn't doing it?
Comment by ajkjk 16 hours ago
(note: I do not actually know if it explicitly prevents that. But because I am very cynical about corporations, I'd tend to assume it doesn't.)
Comment by rgmerk 20 hours ago
Comment by drzaiusx11 20 hours ago
If it's not happening yet, it will...
Comment by evandrofisico 20 hours ago
Comment by bandrami 19 hours ago
Comment by tremon 6 hours ago
Comment by ryan_lane 21 hours ago
Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.
Comment by seizethecheese 21 hours ago
Comment by only-one1701 20 hours ago
Comment by the_snooze 20 hours ago
Comment by jacquesm 20 hours ago
Comment by johnnyanmac 4 hours ago
That's what it feels like with AI. But perhaps worse since companies are lobbying to keep the chaos instead of making a board of standards and etiquette.
Comment by rcxdude 20 hours ago
Comment by irjustin 20 hours ago
This is the knife-food vs knife-stab vs gun argument. Just because you can cook with a hammer doesn't make it its purpose.
Comment by solid_fuel 20 hours ago
If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.
If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.
Comment by pixl97 20 hours ago
Email, by number of messages attempted, is owned by spammers 10- to 100-fold over legitimate email. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it shows up in your mailbox.
To go back one step further, porn was one of the first successful businesses on the internet; that was more than enough motivation for our more conservative congress members to want to ban the internet in the first place.
Comment by paulryanrogers 19 hours ago
Today, if we could survey AI contact with humans, I'm afraid the top uses by a wide margin would be scams, cheating, deep fakes, and porn.
Comment by johnnyanmac 4 hours ago
Yes, and now porn is highly regulated. Maybe that's a hint?
Comment by christianqchung 20 hours ago
Comment by jacquesm 20 hours ago
Comment by only-one1701 20 hours ago
Comment by NicuCalcea 20 hours ago
Comment by rgmerk 19 hours ago
Do these legitimate applications justify making these tools available to every scammer, domestic abuser, child porn consumer, and sundry other categories of criminal? Almost certainly not.
Comment by wk_end 20 hours ago
Comment by username223 19 hours ago
Comment by burnto 20 hours ago
Comment by criley2 21 hours ago
Phones are also a very popular mechanism for scamming businesses. It's tough to pull off CEO scams without text and calls.
Therefore, phones are bad?
This is of course before we talk about what criminals do with money, making money truly evil.
Comment by only-one1701 20 hours ago
Without Generative AI, we couldn’t…?
Comment by simianwords 13 hours ago
Comment by shepherdjerred 20 hours ago
Comment by Larrikin 20 hours ago
I could have taken the time to do the math to figure out what the rewards structure is for my Wawa points and compare it to my car's fuel tank to discover I should strictly buy sandwiches and never gas.
People have been making nude celebrity photos for decades now with just Photoshop.
Some activities have gotten a speed up. But so far it was all possible before just possibly not feasible.
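For what it's worth, the kind of rewards math in question is trivial to sketch. All numbers below are hypothetical stand-ins, not actual Wawa rates:

```python
# Hypothetical loyalty-points math: which purchase earns a reward faster?
# Every rate here is made up for illustration; none are real Wawa numbers.
points_per_dollar = {"gas": 1, "sandwich": 5}
reward_threshold = 500    # points needed for a free item (assumed)
monthly_spend = 120.0     # dollars spent per month (assumed)

for item, rate in points_per_dollar.items():
    months = reward_threshold / (rate * monthly_spend)
    print(f"{item}: ~{months:.1f} months to a reward")
```

Under these made-up rates, sandwiches hit the threshold several times faster than gas, which is the whole "strictly buy sandwiches" conclusion.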
Comment by shepherdjerred 5 hours ago
Comment by simianwords 13 hours ago
Comment by jamiek88 20 hours ago
Comment by pixl97 19 hours ago
People seemingly have some very odd views on products when it comes to AI.
Comment by freejazz 19 hours ago
How obtuse. The poster is saying they don't enable anything of value.
Comment by queenkjuul 20 hours ago
Comment by solid_fuel 20 hours ago
Comment by pixl97 19 hours ago
This line of thinking is ridiculous.
Comment by Larrikin 14 hours ago
The phone lets you talk to someone you couldn't before, when shouting can't.
ChatGPT lets you...
Please complete the sentence without an analogy.
Comment by pixl97 6 hours ago
It does not. You could still till the land with hand tools; you just get a lot more done.
ChatGPT lets me program in languages I was not proficient in before.
Anyway, I'm done with your technology purity contest; it has about zero basis in reality.
Comment by Larrikin 5 hours ago
Comment by pixl97 5 hours ago
Comment by simianwords 13 hours ago
The answer is that ChatGPT allows you to do things more efficiently than before. Efficiency doesn't sound sexy, but this is what adds up to higher prosperity.
Arguments like this can be used against the internet. What does it allow you to do now that you couldn't do before?
Answer might be “oh I don’t know, it allows me to search and index information, talk to friends”.
It doesn’t sound that sexy. You can still visit a library. You can still phone your friends. But the ease of doing so adds up and creates a whole ecosystem that brings so many things.
Comment by mcv 13 hours ago
AI is fascinating technology with undoubtedly fantastic applications in the future, but LLMs mostly seem to be doing two things: provide a small speedup for high quality work, and provide a massive speedup to low quality work.
I don't think it's comparable to the plow or the phone in its impact on society, unless that impact will be drowning us in slop.
Comment by pixl97 6 hours ago
And that is: slop work is always easier and cheaper than doing something right. We can make perfectly good products as it is, yet we find Shein and Temu filled with crap. That's not related to AI. Humans drown themselves in trash whenever we gain the technological capability to do so.
To put this another way, you cannot get a 10x speed up in high quality work without also getting a 1000x speed up in low quality work. We'll pretty much have to kill any further technological advancement if that's a showstopper for you.
Comment by criley2 10 hours ago
They spoke slowly, through letters, until phones sped it up.
We coded slowly, letter by letter, until agents sped it up.
Comment by JumpCrisscross 21 hours ago
Phones are utilities. AI companies are not.
Comment by mrnaught 19 hours ago
I think the point the article was trying to make is that LLMs and new genAI tools helped the scammers scale their operations.
Comment by lostmsu 4 hours ago
Comment by solid_fuel 21 hours ago
After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:
- advertising
- astroturfing
- other forms of botting
- scamming old people out of their money
Comment by ajross 21 hours ago
True, but no more true than it is if you replace the antecedent with "people".
Saying that the tools make mistakes is correct. Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.
History is paved with people who got steamrollered by technology they didn't think would ever work. On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.
Comment by solid_fuel 20 hours ago
Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example.
Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0]
Comment by TheOtherHobbes 20 hours ago
Comment by CamperBob2 20 hours ago
Also, you don't know very many people, including yourself, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but our faults and fallacies.
Comment by blibble 19 hours ago
it's not a person, it doesn't hallucinate or have imagination
it's simply unreliable software, riddled with bugs
Comment by CamperBob2 16 hours ago
Comment by fao_ 20 hours ago
It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies by companies who have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.
Comment by ajross 20 hours ago
And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point?
Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks.
Comment by TheOtherHobbes 20 hours ago
But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a potential - not a reality - in the last century or so.
The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight.
Comment by echelon 20 hours ago
As a filmmaker, my friends and I are getting more and more done as well:
https://www.youtube.com/watch?v=tAAiiKteM-U
https://www.youtube.com/watch?v=oqoCWdOwr2U
As long as humans are driving, I see AI as an exoskeleton for productivity:
https://github.com/storytold/artcraft (this is what I'm making)
It's been tremendously useful for me, and I've never been so excited about the future. The 2010's and 2020's of cellphone incrementalism and social media platformization of the web was depressing. These models and techniques are actually amazing, and you can apply these techniques to so many problems.
I genuinely want robots. I want my internet to be filtered by an agent that works for me. I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.
Apart from all the other madness in the world, this is the one thing that has been a dream come true.
As long as these systems aren't owned by massive monopolies, we can disrupt the large companies of the world and make our own place. No more nepotism in Hollywood, no more working as a cog in the labyrinth of some SaaS company - you can make your own way.
There's financial capital and there's labor capital. AI is a force multiplier for labor capital.
Comment by navigate8310 20 hours ago
While I certainly respect your interactivity and the subsequent force-multiplier nature of AI, this doesn't mean you should try to emulate an already given piece of work. You'll certainly get a small dopamine hit when you successfully copy something, but it will also atrophy your critical skills and paralyze you from making any sort of original art. You'll miss out on discovering the feeling of frontier work that you can truly call your own.
Comment by blks 20 hours ago
Claims of productivity boosts must always be inspected very carefully, as they are often merely perceived, and reality may be the opposite (e.g. spending more time wrestling the tools), or creating unmaintainable debt, or making someone else spend extra time reviewing the PR and leaving 50 comments.
Comment by echelon 20 hours ago
There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay.
I use a lot of 3D blocking with autoregressive editing models to essentially control for scene composition, pose, blocking, camera focal length, etc.
Here's a really old example of what that looks like (the models are a lot better at this now) :
https://www.youtube.com/watch?v=QYVgNNJP6Vc
There are lots of incredibly talented folks using Blender, Unreal Engine, Comfy, Touch Designer, and other tools to interface with models and play them like an orchestra - direct them like a film auteur.
Comment by heliumtera 17 hours ago
Comment by CyberDildonics 1 hour ago
Do you know anything about "Hollywood grade VFX" ? Have you ever worked for any company that does it?
No more nepotism in Hollywood
Do you think "Hollywood VFX" is full of nepotism?
Comment by jacquesm 20 hours ago
Comment by prewett 4 hours ago
[1] https://www.russiabeyond.com/arts/327147-10-best-soviet-bus-...
Comment by echelon 19 hours ago
But to be more in the spirit of your comment, if you've used these systems at all, you know how many constraints you bump into on an almost minute to minute basis. These are not magical systems and they have plenty of flaws.
Real creativity is connecting these weird, novel things together into something nobody's ever seen before. Working in new ways that are unproven and completely novel.
Comment by gllmariuty 20 hours ago
for a 2011 account that's a shockingly naive take
yes, AI is a labor capital multiplier. and the multiplicand is zero
hint: soon you'll be competing not with humans without AI, but with AIs using AIs
Comment by Terr_ 19 hours ago
"OK, so I lost my job, but even adjusting for that, I can launch so many more unfinished side-projects per hour now!"
Comment by queenkjuul 20 hours ago
It's ostensibly doing things you asked it, but in terms dictated by its owner.
Comment by blibble 20 hours ago
and it's even worse than that: you're literally training your replacement by using it when it re-transmits what you're accepting/discarding
and you're even paying them to replace you
Comment by heliumtera 17 hours ago
Comment by simianwords 13 hours ago
I mean, I think you have not put much thought into your theory.
Comment by awesome_dude 21 hours ago
The language of the reader is no longer a serious barrier/indicator of a scam. ("A real bank would never talk like that" has become "well, that's something they would say, the way that they would say it.")
Comment by johnnyanmac 4 hours ago
The Trump administration is using AI generated imagery to advance his narrative, and it seems like it's a thing that mostly the elderly would fall for. So yes, there is some truth to it.
In general, the elderly will always be more vulnerable to technological exploitation.
Comment by thefz 14 hours ago
But did it accelerate the whole process? Hell yeah.
Comment by gosub100 21 hours ago
Instead of being used to protect us or make our lives easier, it is being used by evildoers to scam the weak and vulnerable. None of the AI believers will do anything about it because it kills their vibe.
Comment by JumpCrisscross 20 hours ago
And like with the child pornography, the AI companies are engaging in high-octane buck passing more than actually trying to tamp down the problem.
Comment by techblueberry 18 hours ago
Yes. Yes it does. That is the satire.
Comment by weebull 12 hours ago
Comment by wat10000 19 hours ago
Before this we had "the internet is for porn." Same sort of exaggerated statement.
Comment by ryanobjc 21 hours ago
Comment by internet101010 20 hours ago
Comment by popalchemist 17 hours ago
If you aren't familiar, look into it.
Comment by GoodJokes 16 hours ago
Comment by some_furry 21 hours ago
Comment by ameliaquining 21 hours ago
Comment by thegrim000 21 hours ago
Comment by seizethecheese 21 hours ago
To be honest, it’s really distasteful to make a high level comment about this article then have people rush to attack me personally. This is the mentality of a mob.
Comment by Brian_K_White 20 hours ago
Comment by seizethecheese 19 hours ago
Comment by Barrin92 20 hours ago
Just like with Mark Zuckerberg's "Metaverse," we're now in a post-market vanity economy where not consumer demand but increasingly desperate founders, investors, and gurus are trying to justify their valuations by doling out products for free and shoving their AI services into everything to justify the tens of billions they dumped into it.
I'm sorry that some people's pension funds, startup funding, and increasingly the entire American economy rest on this collective delusion, but it's not really most people's problem.
Comment by shimman 21 hours ago
Comment by some_furry 21 hours ago
Comment by gllmariuty 21 hours ago
Comment by Retric 21 hours ago
The water usage by data centers is fairly trivial in most places. The water use manufacturing the physical infrastructure + electricity generation is surprisingly large but again mostly irrelevant. Yet modern ‘AI’ has all sorts of actual problems.
Comment by seizethecheese 21 hours ago
Comment by rootnod3 21 hours ago
Comment by queenkjuul 20 hours ago
Comment by vitajex 18 hours ago
In order to be funny at least!
Comment by quantum_state 20 hours ago
Comment by mediaman 19 hours ago
Open source models are available at highly competitive prices for anyone to use and are closing the gap, now trailing frontier proprietary models by only 6-8 months.
There doesn't appear to be any moat.
This criticism seems very valid against advertising and social media, where strong network effects make dominant players ultra-wealthy and act like a tax, but the AI business looks terrible, and it appears that most benefits are going to accrue fairly broadly across the economy, not to a few tech titans.
NVIDIA is the one exception to that, since there is a big moat on their business, but not clear how long that will last either.
Comment by TheColorYellow 19 hours ago
When the market shifts to a more compliance-relevant world, I think the Labs will have a monopoly on all of the research, ops, and production know-how required to deliver. That's not even considering if Agents truly take off (which will then place a premium on the servicing of those agents and agent environments rather than just the deployment).
There are a lot of assumptions in the above, and the timelines certainly vary, so it's far from a sure thing - but the upside definitely seems there to me.
Comment by cj 18 hours ago
If Open Source can keep up from a pure performance standpoint, any one of these cloud providers should be able to provide it as a managed service and make money that way.
Then OpenAI, Anthropic, etc end up becoming product companies. The winner is who has the most addictive AI product, not who has the most advanced model.
Comment by tru3_power 18 hours ago
Comment by gizmodo59 19 hours ago
What we can argue about is whether AI is truly transforming everyone's lives; the answer is no. There is a massive exaggeration of benefits. The value is not ZERO, and it's not 100. It's somewhere in between.
Comment by dpc050505 5 hours ago
Think of all the scientific experiments we could've had with the hundreds of billions being spent on AI. We need a lot more data on what's happening in space, in the sea, in tiny bits of matter, inside the earth. We need billions of people to learn a lot more things and think hard based on those axioms and the data we could gather exploring what I mention above to discover new ones. I hypothesize that investing there would have more benefit than a bunch of companies buying server farms to predict text.
CERN cost about $6 billion. Total MIT operations cost $4.7 billion a year. We could be allocating capital a lot more efficiently.
Comment by CrossVR 18 hours ago
Comment by mikestorrent 18 hours ago
What I predict is that we won't advance in memory technology on the consumer side as quickly. For instance, a huge number of basic consumer use cases would be totally fine on DDR3 for the next decade. Older equipment can produce this; so it has value, and we may see platforms come out with newer designs on older fabs.
Chiplets are a huge sign of growth in that direction - you end up with multiple components fabbed on different processes coming together inside one processor. That lets older equipment still have a long life and gives the final SoC assembler the ability to select from a wide range of components.
Comment by digiown 18 hours ago
Comment by charcircuit 18 hours ago
Comment by digiown 6 hours ago
https://www.bloomberg.com/news/articles/2025-11-10/data-cent...
Comment by jredwards 5 hours ago
Comment by charcircuit 19 hours ago
They are investing tens of billions.
Comment by bigstrat2003 18 hours ago
Comment by charcircuit 16 hours ago
Comment by bandrami 18 hours ago
Comment by yowlingcat 19 hours ago
Comment by gruez 19 hours ago
What happens when the AI bubble is over and the developers of open models don't want to incinerate money anymore? Foundation models aren't like curl or openssl. You can't maintain them with a few engineers' free time.
Comment by edoceo 18 hours ago
Like after dot-com the leftovers were cheap - for a time - and became valuable (again) later.
Comment by bandrami 18 hours ago
Comment by edoceo 14 hours ago
Comment by compounding_it 18 hours ago
Spending a million dollars on training and giving the model away for free is far cheaper than spending hundreds of millions of dollars on inference every month while charging only a few hundred thousand for it.
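A back-of-the-envelope sketch of that asymmetry; every figure below is hypothetical, chosen only to illustrate the shape of the argument:

```python
# Hypothetical economics of "release the weights" vs. "host it yourself".
# All figures are made up purely to illustrate the asymmetry.
train_once = 1_000_000            # one-time training cost (assumed)
inference_monthly = 200_000_000   # monthly inference bill at scale (assumed)
revenue_monthly = 300_000         # what the host manages to charge (assumed)

open_release_cost = train_once    # train, publish the weights, walk away
hosted_monthly_loss = inference_monthly - revenue_monthly

# Even a single month of hosted inference dwarfs the one-time training spend.
print(open_release_cost < hosted_monthly_loss)
```

With numbers anywhere in this ballpark, the open release is cheaper after the very first month of hosting.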
Comment by mikestorrent 18 hours ago
Comment by fHr 18 hours ago
Comment by ulfw 19 hours ago
As an LLM user I use whatever is free/cheapest. Why pay for ChatGPT if Copilot comes with my office subscription? It does the same thing. If not, I use Deepseek or Qwen and get very similar results.
Yes if you're a developer on Claude Code et al I get a point. But that's few people. The mass market is just using chat LLMs and those are nothing but a commodity. It's like jumping from Siri to Alexa to whatever the Google thing is called. There are differences but they're too small to be meaningful for the average user
Comment by derektank 19 hours ago
Comment by gruez 19 hours ago
Comment by edoceo 18 hours ago
https://www.reddit.com/r/CopyCatRecipes/comments/1qbbo6d/coc...
Comment by bandrami 18 hours ago
The recipe also isn't that much of a secret, they read it on the air on a This American Life episode and the Coca Cola spokesperson kind of shrugged it off because you'd have to clone an entire industrial process to turn that recipe into a recognizable Coke.
Comment by daveguy 18 hours ago
Comment by gruez 18 hours ago
Comment by daveguy 18 hours ago
Comment by simianwords 13 hours ago
Comment by justarandomname 20 hours ago
Comment by pear01 20 hours ago
imo there are actually too few answers for what a better path would even look like.
hard to move forward when you don't know where you want to go. answers in the negative are insufficient, as are those that offer little more than nostalgia.
Comment by smallmancontrov 20 hours ago
We could use another Roosevelt.
Comment by stemlord 20 hours ago
- big tech should pay for the data they extract and sell back to us
- startups should stop forcing ai features that no one wants down our throats
- the vanguard of ai should be open and accessible to all not locked in the cloud behind paywalls
Comment by FridayoLeary 19 hours ago
It's just not a well-thought-out comment. If we focus on the "better path forward," the entrance to which is only unlocked by the realisation that big tech's achievements (and thus profits) belong to humanity collectively... after we reach this enlightened state, what does OP believe are the first couple of things a traveller on this path is likely to encounter (beyond big tech's money, which incidentally we already take loads of in the form of taxes, just maybe not enough)?
Comment by _DeadFred_ 18 hours ago
First, you have tech's ability to scale. That ability also lets it creep new changes/behaviors into every aspect of our lives faster than any previous 'engine for change' could.
Tech also inherits, so you can treat it like Legos: what are we at now, definitely tens, maybe hundreds of thousands of human-years of work as building blocks to build on top of? Imagine if you started every house with a hundred thousand human-years of labor already completed, instantly. No other domain in human history accumulates tens of millions of skilled human-years annually and allows so much of that work to stack, copy, and propagate at relatively low cost.
And tech's speed of iteration is insane. You can try something, measure it, change it, and redeploy in hours. Unprecedented experimentation on a mass scale, leading to quicker evolution.
It's so disingenuous to base tech valuations as high as they are on these differentiators while at the same time saying 'tech is just like everything from the past and must not be treated differently, and outcomes from it must be assumed to be just like historical outcomes.' No, it is a completely different beast, and the differences become more pronounced as the above 10Xs over and over.
Comment by greesil 19 hours ago
Comment by adolph 6 hours ago
Do we not all stand on the shoulders of giants? Will "big next" not take up where "big tech" leaves off one day?
Comment by relaxing 19 hours ago
Comment by mrwaffle 19 hours ago
Comment by mrwaffle 19 hours ago
Comment by FridayoLeary 20 hours ago
Comment by triceratops 20 hours ago
Comment by FridayoLeary 19 hours ago
On that note they say oil is dead dinosaurs, maybe have a word with Saudi Arabia...
Comment by dekhn 19 hours ago
Comment by triceratops 19 hours ago
Comment by blactuary 19 hours ago
Comment by mackeye 19 hours ago
Comment by jaybyrd 21 hours ago
Comment by donkey_brains 19 hours ago
Surprisingly, the answer he got was “none, because that’s not how AI works”.
Guess we’ll see if that registers…
Comment by MobiusHorizons 19 hours ago
But in all seriousness, ai does a pretty good job at impersonating VPs. It’s confidently wrong and full of hope for the future.
Comment by mayhemducks 4 hours ago
Comment by consumer451 18 hours ago
Phase 1: 1-2 weeks
Phase 2: 1 week
Phase 3: 2 weeks
8 to 12 hours later, all the work is done and tested.
The funny part to me was that if I had an AI true believer boss, I would report those time estimates directly, and have a lot of time to do other stuff.
Comment by ziml77 18 hours ago
Comment by whattheheckheck 18 hours ago
Tis the cycle
Comment by sublinear 19 hours ago
Comment by GolfPopper 21 hours ago
Comment by jacquesm 21 hours ago
Comment by dylan604 20 hours ago
Comment by myhf 4 hours ago
Comment by soulofmischief 21 hours ago
I have only become more creatively enabled when adopting these tools, and while I share the existential dread of becoming unemployable, I also am wearing machine-fabricated clothing and enjoying a host of other products of automation.
I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
Comment by overgard 19 hours ago
Comment by jacquesm 19 hours ago
That's by far not the worst that could happen. There could very well be an axe attached to the pendulum when it swings back.
> Not to mention it's bad business if nobody can afford to use AI because they're unemployed.
In that sense this is the opposite of the Ford story: the value of your contribution to the process will approach zero so that you won't be able to afford the product of your work.
Comment by soulofmischief 15 hours ago
Hatred of the technology itself is misplaced, and it is difficult sometimes debating these topics because anti-AI folk conflate many issues at once and expect you to have answers for all of them as if everyone working in the field is on the same agenda. We can defend and highlight the positives of the technology without condoning the negatives.
Comment by jacquesm 14 hours ago
I think hatred is the wrong word. Concern is probably a better one and there are many things that are technology and that it is perfectly ok to be concerned about. If you're not somewhat concerned about AI then probably you have not yet thought about the possible futures that can stem from this particular invention and not all of those are good. See also: Atomic bombs, the machine gun, and the invention of gunpowder, each of which I'm sure may have some kind of contrived positive angle but whose net contribution to the world we live in was not necessarily a positive one. And I can see quite a few ways in which AI could very well be worse than all of those combined (as well as some ways in which it could be better, but for that to be the case humanity would first have to grow up a lot).
Comment by soulofmischief 13 hours ago
And like anything else, it will be a tool in the elite's toolbox of oppression. But it will also be a tool in the hands of the people. Unless anti-AI sentiment gets compromised and redirected such that support for limiting access to capable generative models to the State and research facilities.
The hate I am referring to is often more ideological, about the usage of these models from a purity standpoint. That only bad engineers use them, or that their utility is completely overblown, etc. etc.
Comment by jacquesm 11 hours ago
There is a massive assumption there: that society as such will survive.
Comment by soulofmischief 9 hours ago
Comment by jacquesm 9 hours ago
Comment by soulofmischief 1 hour ago
Comment by soulofmischief 15 hours ago
I grew up very poor and was homeless as a teenager and in my early 20s. I still studied and practiced engineering and machine learning then, I still made art, and I do it now. The fact that Big Tech is the new Big Oil is beside the point. Plenty of companies are using open training sets and producing open, permissively licensed models.
Comment by johnnyanmac 20 hours ago
I'm not really a fan of the "you criticize society yet you participate in it" argument.
>I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
You seem to forget the blood shed over the history that allowed that tech to benefit the people over just the robber barons. Unimaginable amounts of people died just so we could get a 5 day workweek and minimum wage.
We don't get a beneficial future by just laying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.
Comment by Wyverald 19 hours ago
> I'm not really a fan of the "you criticize society yet you participate in it" argument.
It seems to me that GP is merely recognizing the parts of technological advance that they do find enjoyable. That's rather far from the "I am very intelligent" comic you're referencing.
> The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.
Just noting that GP simply voiced their opinion, which IMHO does not constitute "impedance" of those trying to fight for those futures.
Comment by johnnyanmac 19 hours ago
Machine fabrication is nice. Machine fabrication from sweatshop children in another country is not enjoyable. That's the exact nuance missing from their comment.
>GP simply voiced their opinion, which IMHO does not constitute "impedance" of those trying to fight for those futures.
I'd hope we'd understand since 2024 that we're in an attention society, and this is a very common tactic used to dissuade people from taking action against what they find unfair. Enforcing a feeling of inevitability is but one of many methods.
Intentionally or not, language like this does impede the efforts.
Comment by soulofmischief 15 hours ago
Me neither, and I didn't make such an argument.
> You seem to forget the blood shed over the history that allowed that tech to benefit the people over just the robber barons. Unimaginable amounts of people died just so we could get a 5 day workweek and minimum wage.
What does that have to do with my argument? What about my argument suggested ignorance of this fact? This is just another straw man.
> We don't get a benficial future by just laying down and letting the people with the most perverse incentives decide the terms. The very least you can do is not impede those trying to fight for those futures if you can't/don't want to fight yourself.
What an incredible characterization. Nothing about my argument is "laying down", perhaps it seems that way because you do not share my ideals, but I fight for my ideals, I debate them in public as I do now, and that is the furthest thing from "laying down" and "not fighting myself". You seem to be projecting several assumptions about my politics and historical knowledge. Did you have a point to make or was this just a bunch of wanking?
Comment by johnnyanmac 12 hours ago
The way you "debate" is exactly a part of the problem. You come in deflecting and attacking instead of understanding and clarifying.
You didn't even give me a point to respond to that doesn't go way off topic. If you can't see it then I hope someone with more patience can help you out. But there's no point having a conversation with someone who approaches it like this (against site guidelines). Good luck out there.
Comment by soulofmischief 9 hours ago
2. You decided to leave a comment on one of my comments, except it contained multiple straw man arguments and a gross mischaracterization, arguing in bad faith from the start.
3. I pointed out the issues with your comment, and asked if you had an actual point to make, or if it was just straw man arguments. I had no intention of continuing a discussion with you and do not need to provide you "points to respond to". I agree, when I have to stop and explain why your argument has issues, it derails the thread. That's the problem with straw man arguments.
4. It is your problem if you take issue with criticism and think it means I need to be "helped out" by "someone with more patience". That's quite a condescending response.
I can see that you didn't have a point to make, and instead elected to just take a bunch of negative jabs at me as retaliation for pointing out the errors in your argument.
So yes, let's end the conversation. It's quite ridiculous.
Comment by nozzlegear 20 hours ago
> I do not have selective guilt over modern generative tools because I understand that one day this era will be history and society will be as integrated with AI as we are with other transformative technologies.
This seems overly optimistic, but also quite dystopian. I hope that society doesn't become as integrated with these shitty AIs as we are with other technologies.
Comment by soulofmischief 15 hours ago
Of course, that might be less and less true about our work as time goes on. At some point in the future, hiring an engineer who refuses to use generative coding tools will be the equivalent of hiring someone today who refuses to use an IDE or even a tricked out emacs/vim and just programs everything in Notepad. That's cool if they enjoy it, but it's unproductive in an increasingly competitive industry.
Comment by nozzlegear 14 hours ago
Comment by callc 20 hours ago
Cool science and engineering, no doubt.
Not paying any attention to societal effects is not cool.
Plus, presenting things as inevitabilities is just plain confidently trying to predict the future. Anyone can say "I understand one day this era will be history and X will have happened". Nobody knows how the future will play out. Anyone who says they do is a liar. If they actually knew, they'd bet all their savings on it.
Comment by peyton 20 hours ago
Comment by soulofmischief 15 hours ago
That doesn't mean I also must condone our use of the bomb, or condone US imperialism. I recognize the inevitability of atomic science; unless you halt all scientific progress forever under threat of violence, it is inevitable that a society will have to reckon with atomic science and its implications. It's still fascinating, dude. It's literally physics, it's nature, it's humbling and awesome and fearsome and invaluable all at the same time.
> Not paying any attention to societal effects is not cool.
This fails to properly contextualize the historical facts. The Nazis and Soviets were also racing to create an atomic bomb, and the world was in a crisis. Again, this isn't ignorant of US imperialism before, during or after the war and creation of the bomb. But it's important to properly contextualize history.
> Plus, presenting things as inevitabilities is just plain confidently trying to predict the future.
That's like trying to admonish someone for watching the Wright Brothers continually iterate on aviation, witnessing prototype heavier-than-air aircraft flying, and suggesting that one day flight will be an inevitable part of society.
The steady march of automation is an inevitability my friend, it's a universal fact stemming from entropy, and it's a fallacy to assume that anything presented as an inevitability is automatically a bad prediction. You can make claims about the limits of technology, but even if today's frontier models stop improving, we've already crossed a threshold.
> Anyone who says they do is a liar.
That's like calling me a liar for claiming that the sun will rise tomorrow. You're right; maybe it won't! Of course, we will have much, much bigger problems at that point. But any rational person would take my bet.
Comment by blibble 20 hours ago
I'd rather be dead than a cortex reaver[1]
(and I suspect as I'm not a billionaire, the billionaire-owned killbots will make sure of that)
Comment by Mars008 18 hours ago
Comment by malfist 21 hours ago
Comment by tsunamifury 19 hours ago
Comment by jaybyrd 21 hours ago
Comment by logicprog 19 hours ago
Comment by goalieca 19 hours ago
Comment by logicprog 18 hours ago
The deep analysis starts at this section: https://andymasley.substack.com/p/the-ai-water-issue-is-fake...
You can't just dismiss anything you don't like as AI.
Comment by pesus 21 hours ago
Comment by Sharlin 20 hours ago
Comment by Joel_Mckay 18 hours ago
Some are projecting >35% drop in the entire index when reality hits the "magnificent" 7. Look at the price of Gold, corporate cash flows, and the US Bonds laggard performance. That isn't normal by any definition. =3
Comment by snowwrestler 19 hours ago
For those having trouble finding the humor, it lies in the vast gulf between grand assertions that LLMs will fundamentally transform every aspect of human life, and plaintive requests to stop saying mean things about it.
As a contrast: truly successful products obviate complaints. Success speaks for itself. In TV, software, e-commerce, statins, ED pills, modern smartphones, social media, etc… winning products went into the black quickly and made their companies shitloads of money (profits). No need to adjust vibes, they could just flip everyone the bird from atop their mountains of cash. (Which can also be pretty funny.)
There are mountains of cash in LLMs today too, but so far they’re mostly on the investment side of the ledger. And industry-wide nervousness about that is pretty easy to discern. Like the loud guy with a nervous smile and a drop of sweat on his brow.
So much of the current discourse around AI is the tech-builders begging the rest of the world to find a commercially valuable application. Like the AgentForce commercials that have to stoop to showing Matthew McConaughey suffering the stupidest problems imaginable. Or the OpenAI CFO saying maybe they’ll make money by taking a cut of valuable things their customers come up with. “Maybe someone else will change the world with this, if you’ll all just chill out” is a funny thing to say repeatedly while also asking for $billions and regulatory forbearance.
Comment by datsci_est_2015 18 hours ago
Makes me consider: Dotcom domains, Bitcoin, Blockchain, NFTs, the metaverse, generative AI…
Varying degrees of utility. But the common thread is people absolutely begging you to buy in, preying on FOMO.
Comment by twoodfin 18 hours ago
Comment by snowwrestler 18 hours ago
Comment by Gene5ive 20 hours ago
Comment by selimthegrim 18 hours ago
Comment by i_love_retros 19 hours ago
Comment by Brajeshwar 18 hours ago
Comment by gradus_ad 19 hours ago
Comment by ares623 11 hours ago
If Jensen so much as _plans_ for something other than AI, it will cause everyone else to doubt.
Comment by stego-tech 19 hours ago
Comment by olivierestsage 19 hours ago
Comment by hedayet 18 hours ago
Oh, and most of them had a crypto bag too.
<sigh>
Comment by Joel_Mckay 18 hours ago
Comment by twochillin 19 hours ago
Comment by willturman 18 hours ago
Comment by porkloin 21 hours ago
Comment by Froztnova 21 hours ago
I've never really been able to get into it either because it's sort of a paradox. If I agree, I feel bad enough about the actual issue that I'm not really in the mood to laugh, and if I disagree then I obviously won't like the joke anyways.
Comment by porkloin 21 hours ago
I find it unfunny for the same reason I don't find modern SNL intro bits about Trump funny. The source material is already insane to the point that it makes surface-level satire like this feel pointless.
Comment by Brian_K_White 20 hours ago
Comment by ares623 15 hours ago
Comment by madeofpalk 21 hours ago
Comment by johnnyanmac 20 hours ago
Maybe if we ever return to normal times and also don't let the other 90% of corruption stay where it's been for the past 40 years we can start to ease off the noise.
Comment by jaybyrd 21 hours ago
Comment by heliumtera 21 hours ago
Comment by b00ty4breakfast 21 hours ago
Comment by vivzkestrel 18 hours ago
Comment by rednafi 20 hours ago
Comment by random_duck 21 hours ago
Comment by blibble 21 hours ago
Comment by techblueberry 18 hours ago
https://www.darioamodei.com/essay/the-adolescence-of-technol...
Comment by blibble 18 hours ago
the whole thing reads as "it's going to be so powerful! give money now!"
Comment by heliumtera 21 hours ago
Comment by lifetimerubyist 19 hours ago
Comment by akomtu 19 hours ago
Comment by 20260126032624 18 hours ago
Comment by Joel_Mckay 17 hours ago
https://en.wikipedia.org/wiki/Competitive_exclusion_principl...
The damage is already clear =3
https://www.youtube.com/watch?v=TYNHYIX11Pc
Comment by kindawinda 18 hours ago
Comment by theLegionWithin 21 hours ago
Comment by notepad0x90 17 hours ago
What's your answer to this? How did it turn out for nuclear energy? If it wasn't for this sort of thinking we'd have nuclear power all over the world and climate issues would not have been as bad.
You should embrace it, because other countries will and yours will be left behind if you don't. That doesn't mean put up with "slop", but it also doesn't mean being hostile to anything labeled "AI". The tech is real, it is extremely valuable (I applaud your mental gymnastics if you think otherwise), but not as valuable as these CEOs want it to be, or in the way they want it to be.
On one hand you have clueless executives and randos trying to slap "AI" on everything and creating a mess. On the other extreme you have people who reject things just because auto-complete (LLMs :) ) is one of its features. You're both wrong.
What Jensen Huang and other CEOs like Satya Nadella are saying about this mindless bandwagoning of "oh no, AI slop!!!" b.s. is true, but I think even they are too caught up in tech circles. Regular people for the most part don't feel this way; they only care about what the tool can do, not how it does it. But people in tech largely influence how regular people are educated, informed, etc...
Look at the internet: how many "slop" sites were there early on? How much was it dismissed because "all the internet is good for is <slop>"?
Forget everything else, just having an actual program.. that I can use for free/cheap.. on my computer.. that can do natural language processing well!!! that's insane!! Even in some of the sci-fi I've been rewatching in recent years, the "AI/Computer" in spaceships or whatever is nowhere near as good as chatgpt is today in terms of understanding what humans are saying.
I'm just calling for a bit of perspective on things? Some are too close and looking under the hood too much, others are too far away and looking at it from a distance. The AI stock valuation is of course ridiculous, as are the overhyped investments in this area and the datacenter buildout madness. And like I said, there are tons of terrible attempts at using this tech (including Windows Copilot), but the extremes of hostility against AI I'm seeing are also concerning, and not because I care about this awesome tech (which I do), but you know.. the job market is rough and everything is already crappy.. I don't want to go through an AI market crash or whatever on top of other things, so I would really appreciate it on a personal level if the cause of any AI crash is meritocratic instead of hype and bandwagoning, that's all.
Comment by ares623 16 hours ago
I wasn’t old enough to argue against the internet. Plus, to be fair to the ones who were, there was no prior tech anything like it to make realistic guesses about what it would turn out to be.
I wasn’t old enough to argue against social media and the surveillance it brought.
Now AI comes along. And I am old enough. And I am experienced enough in a similar space. And I have seen what similar technologies have done and brought. And I have taken all that, and my conscience and instinct tell me that AI is not a net good.
Previous generations have failed us. But we make do with the world we find ourselves born into.
I find it absurd that experienced engineers today look at AI and believe it will make their children’s lives better, when very recent history, history they themselves lived through, tells a very different story.
All so they can open 20 PRs per day for their employers.
Comment by notepad0x90 8 hours ago
> , there was no prior tech that was anything like it to even make realistic guesses into what it would turn out to.
Same with LLMs.
> AI is not a net good.
You're falling into the same trap as previous generations when you do that. You won't actually end up fixing or improving the negative impacts of AI and your country/society will lose out big time in all sorts of ways.
Tech doesn't make things bad, people do for the most part. Where AI is abused, it needs legislation, not resistance, and you should know it is a LOT more nuanced than that. How is an LLM language translator for tourists being lumped in the same bucket as LLMs being used to target people for assassination? Your lack of nuance is laziness; no political or ideological stand can justify that laziness.
Nuclear energy had a lot of negatives, and people made the same types of arguments and outright banned it; next time you complain about climate change, consider that your way of thinking might be part of the problem. Right now datacenter build-outs are contributing to water scarcity, for example, so instead of doing the hard and nuanced work of actually regulating and fixing that, you oppose AI entirely. You do the easy thing, but in the end we live in the real world and supply/demand economics rules, so your resistance is only performative at best, or catastrophic to the economy at worst. That last part isn't just about billionaires, and it isn't just about job markets: when the economy goes, all the climate change talk goes with it, all the EVs and green energy initiatives go, wars and crises increase, disease outbreaks increase. Doing the easy thing leads to this is my point, not that you need AI to prevent those.
> I find it absurd that experienced engineers today look at AI and believe it will make their children’s lives better, when very recent history, history they themselves lived through, tells a very different story.
100x it would! Although whose children depends on who regulates it first. My bet is China and the EU will regulate the crap out of it and extract the most value for themselves and future generations. AI is just a solution, a tool; it isn't magic, as you very well know. Companies have been using ML for surveillance for a long time. The US gov was using pattern-of-life analysis ML ten years ago to pick out assassination targets in Afghanistan and Pakistan. You have a fundamental lack of laws and a broken system of governance; don't take that out on tech.
Comment by irishcoffee 21 hours ago
Someone coined a term for those of the general population who trust this small group of billionaires and defend their technology.
“Dumb fucks”
Comment by lovich 20 hours ago
I wonder what name the tech bros will come up with to call us for the same feeling nowadays.
Comment by yoyohello13 18 hours ago
Comment by khana 21 hours ago
Comment by trhway 21 hours ago
Comment by zahlman 20 hours ago
Comment by kshri24 20 hours ago
Ridiculous to say the technology, by itself, is evil somehow. It is not. It is just math at the end of the day. Yes you can question the moral/societal implications of said technology (if used in a negative way) but that does not make the technology itself evil.
For example, I hate vibe coding with a passion because it enables wrong usage (IMHO) of AI. I hate how easy it has become to scam people using AI. How easy it is to create disinformation with AI. Hate how violence/corruption etc could be enabled by using AI tools. Does not mean I hate the tech itself. The tech is really cool. You can use the tech for doing good as much as you can use it for destroying society (or at the very minimum enabling and spreading brainrot). You choose the path you want to tread.
Just do enough good that it dwarfs the evil uses of this awesome technology.
Comment by budududuroiu 20 hours ago
Democratisation of tech has allowed for more good to happen; centralisation the opposite. AI is probably one of the most centralisation-happy technologies we've had in ages.
Comment by pixl97 19 hours ago
Capitalism demands profits. Competition is bad for profits. Multiple factories are bad for profits. Multiple standards are bad for profits. Expensive workers are bad for profits.
Comment by mrnaught 19 hours ago
Comment by wk_end 20 hours ago
Not really - it's math, plus a bazillion jigabytes of data to train that math, plus system prompts to guide that math, plus data centers to do that math, plus nice user interfaces and APIs to interface with that math, plus...
Anyway, it's just kind of a meaninglessly reductive thing to say. What is the atom bomb? It's just physics at the end of the day. Physics can wreak havoc on the world; so can math.
Comment by johnnyanmac 20 hours ago
That said, their thinking is that this can remove labor from their production, all while stealing works under the very copyright they set up. So I'd call that "evil" in every conventional sense.
>Just do enough good that it dwarfs the evil uses of this awesome technology.
The evil is in the root of the training, though. And sadly the money is not coming from "good". I don't see any models focusing on ensuring they train only on CC0/FOSS works, so it's hard to argue for any good uses with evil roots.
If they could do that at the bare minimum, maybe they can make the argument over "horses vs cars". As it is now, this is a car powered by stolen horses. (also I work in games, and generative AI is simply trash in quality right now).
Comment by pixl97 19 hours ago
This also ignores the broken fucking copyright system that ensures once you create something you get many lifetimes of fucking off without having to work, so if genAI kills that I won't shed a tear.
Comment by robinhoode 19 hours ago
AI is literally trained on human output and used by humans. If humans are doing awful things with it, then it's because humans are awful right now.
I strongly feel this is related to the rise of fascism and wealth inequality.
We need a great conflict like WW2 to release this tension.
Comment by gip 20 hours ago
Many people would rather argue about morality and conscience (of our time, of our society) instead of confronting facts and reality. What we see here is a textbook case of that.
Comment by tdb7893 20 hours ago
Comment by SpicyLemonZest 20 hours ago
Comment by tdb7893 18 hours ago
My confusion is more with the person I was responding to complaining about people arguing morality, which seems incredibly important to discuss. Lack of facts obviously makes discussions bad, but there's definitely no dichotomy with discussing morality (at least not with the people I know). My issue has not been so much with people arguing morality (those are often my more productive arguments) as with people who have a fundamentally incompatible view of what the facts are.
Comment by technofastest 19 hours ago
Comment by datsci_est_2015 18 hours ago
Comment by datsci_est_2015 9 hours ago
When I see the word “facts” used like this, I feel there’s a parallel to the way the word “respect” is used abusively, as outlined in this Tumblr post that has stuck with me for years:
https://soycrates.tumblr.com/post/115633137923/stimmyabby-so...
> Sometimes people use “respect” to mean “treating someone like a person” and sometimes they use “respect” to mean “treating someone like an authority”
> and sometimes people who are used to being treated like an authority say “if you won’t respect me I won’t respect you” and they mean “if you won’t treat me like an authority I won’t treat you like a person”
> and they think they’re being fair but they aren’t, and it’s not okay.
The word “facts” can be used abusively, as in “My facts prove my worldview, your “facts” are arguments based on emotion.”
Comment by socialcommenter 17 hours ago
Someone who's clear-eyed about the facts is much more likely to have a guilty conscience/think someone's actions are unconscionable.
I don't mean to argue either side in this discussion, but both sides might be ignoring the facts here.
Comment by johnnyanmac 20 hours ago
Okay, what are the "facts and reality" here? If you're just going to say "AI is here to stay", then you 1) aren't dealing with the core issues people bring up, and 2) aren't bringing facts but defeatism. Where would we be if we used that logic for, say, Flash?
Comment by mattgreenrocks 19 hours ago
Comment by daft_pink 21 hours ago
Comment by Lerc 21 hours ago
You can still criticise without being mean.
Comment by donkey_brains 20 hours ago
Comment by thinkingtoilet 21 hours ago
Comment by Lerc 20 hours ago
I can certainly criticize specific things respectfully. If I prioritised demonstrating my moral superiority I could loudly make all sorts of disingenuous claims that won't make the world a better place.
I certainly do not think people should be making exploitative images in Photoshop or indeed any other software.
I do not think that I should be able to choose which software those rules apply to based upon my own prejudice. I also do not think that being able to do bad things with something is sufficient to negate every good thing that can be done with it.
Countless people have been harmed by the influence of religious texts, I do not advocate for those to be banned, and I do not demand the vilification of people who follow those texts.
Even though I think some books can be harmful, I do not propose attacking people who make printing presses.
What exactly are you requiring here? Pitchforks and torches? Why AI and not the other software that can be used for the same purposes?
If you want robust regulation that can provide a means to protect people from how models are used then I am totally prepared (and have made submissions to that effect) to work towards that goal. Being antagonistic works against making things better. Crude generalisations convince no-one. I want the world to be better, I will work towards that. I just don't understand how anyone could believe vitriolic behaviour will result in anything good.
Comment by chasd00 19 hours ago
Comment by paodealho 18 hours ago
Stable Diffusion enabled the average lazy depraved person to create these images with zero effort, and there's a lot of these people in the world apparently.
Comment by bigstrat2003 18 hours ago