AI Resistance: some recent anti-AI stuff that’s worth discussing
Posted by speckx 15 hours ago
Comments
Comment by tptacek 14 hours ago
Meanwhile: the ability to poison models, if it can be made to work reliably, is a genuinely interesting CS question. I'm the last person in the world to build community with anti-AI activists, but I'm as interested as anybody in attacks on the models! They should keep that up, and I think you'll see that threads about plausible and interesting attacks are well read, including by people who don't line up with the underlying cause.
Comment by vidarh 13 hours ago
Ultimately, it comes down to the halting problem: If there's a mechanism that can be used to alter the measured behaviour, then the system can change behaviour to take into account the mechanism.
In other words, unless you keep the poisoning attack strictly inaccessible to the public, the mechanism used to poison can also be used to train models to be resistant to it, or to train filters that filter out poisoned data.
At least unless the poisoning attack destroys information to a degree that would render the poisoned system worthless to humans as well, in which case it'd be unusable anyway.
So either such systems will be too insignificant to matter, or they will only work for long enough to be noticed, incorporated into training, and fail.
I agree it's an interesting CS challenge, though, as it will certainly expose rough edges where the models and training processes work sufficiently differently from humans to allow unobtrusive poisoning for a short while. Then it'll just help us refine and harden the training processes.
Comment by kibwen 13 hours ago
The question is not whether the system can change, it's whether the system is incentivized to change. Poisoners could operate entirely in public, theoretically manage to successfully poison targeted topics, and it could cost the model developers more than it's worth to fix it. Think about obscure topics like, say, Dark Souls speedrunning. There is no business demand for making sure that a model can successfully give information relating to something like that, so poisoning, if it works, would probably not be addressed, because there's no reason for the model developers to care.
Comment by vidarh 3 hours ago
Comment by lepus 13 hours ago
Whether model poisoning becomes a bigger issue depends on the incentives for companies to keep fighting it. For now, the incentives and resources available to defend against model poisoning are huge in comparison to the attackers', so poisoning amounts to temporary setbacks. Will that imbalance in their favor always be the case?
Comment by vidarh 2 hours ago
Comment by lxgr 13 hours ago
Comment by scythe 13 hours ago
https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equatio...
Comment by GTP 13 hours ago
Comment by vidarh 2 hours ago
It's a bit of a leap, but the halting problem can be generalized to:
It is impossible in the general case to produce a detector function f(x) that will decide whether program x behaves according to rule y, if x can include f(x) as part of itself.
The reason is that if a program x can make use of the detector, it can effectively do: if f(x) { do the opposite of what f(x) predicts }
The leap from that to poisoning might be a bit unintuitive, but it boils down to the poisoner having a mechanism that would alter model behaviour.
If you have access to that mechanism, you can produce a detector by using the mechanism to induce the unwanted behaviour, and train a model on that.
Once you have a detector, you can behave differently based on the signal from the detector, and by extension avoid the effects of the original mechanism.
And that is the core of the halting problem.
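A minimal sketch of that construction in Python, for concreteness. halts() is the hypothetical detector; the names are illustrative and the detector is, of course, unimplementable:

    def halts(program, arg) -> bool:
        """Hypothetical detector: returns True iff program(arg) halts."""
        ...  # assume, for contradiction, that this exists

    def contrarian(program):
        # Ask the detector about ourselves, then do the opposite.
        if halts(program, program):
            while True:
                pass  # detector said "halts", so loop forever
        # detector said "loops", so halt immediately

    # Feeding the program to itself, contrarian(contrarian) makes halts()
    # wrong whichever answer it gives, so no such detector can exist.

The same move applies whenever a program can run the detector on itself.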
Comment by mswphd 13 hours ago
https://en.wikipedia.org/wiki/Rice%27s_theorem
Formally, any non-trivial semantic property of a Turing machine is undecidable. Semantic here (roughly) means "behavioral" questions about the Turing machine. E.g. if you only look at the "language" it defines (viewing it as a black box), then it is undecidable to answer any non-trivial question about that language (including things like whether it terminates on all inputs).
Practically though that isn't a complete no-go result. You can do various things, like
1. weaken the target you're looking for: if you're OK with admitting false positives or false negatives, Rice's theorem no longer applies; or
2. rephrase your question in terms of "syntactic properties", i.e. questions about how the code is implemented. Rust's borrow checker does this via lifetime annotations, for example.
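To make option 2 concrete, here is a toy, purely syntactic check with the fallibility of option 1 baked in. It is a sketch, not how any real tool works:

    import ast

    def obviously_loops(src: str) -> bool:
        """Toy syntactic check: flags a `while True:` containing no break
        or return. It can be wrong in both directions (e.g. an
        unconditional raise inside still halts), and that fallibility is
        exactly the weakening that sidesteps Rice's theorem."""
        for node in ast.walk(ast.parse(src)):
            if (isinstance(node, ast.While)
                    and isinstance(node.test, ast.Constant)
                    and node.test.value is True
                    and not any(isinstance(n, (ast.Break, ast.Return))
                                for n in ast.walk(node))):
                return True
        return False

    print(obviously_loops("while True:\n    pass"))         # True
    print(obviously_loops("for i in range(3):\n    pass"))  # False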
Comment by vidarh 2 hours ago
If you have access to run a transform on data, you can use it to train a model that acts as a detector of whether that transform has been applied to the data.
When you have a detector for a given property, you can use that detector to alter behaviour to exclude that property.
And that is the abstract core of why the halting problem is unsolvable.
In this case, if you have access to a mechanism for poisoning data, you can use that to train a detector. Once you have a detector, you can either exclude poisoned data, or use it for adversarial training.
Either way: The existence of the poisoning mechanism can be directly used to derive the tools to create its own antidote.
And that's back to the core of the halting problem.
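A minimal sketch of that detector idea, assuming scikit-learn is available. The poison() transform and the corpus here are toy placeholders, not any real attack:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy corpus; a real pipeline would use a large sample of scraped text.
    clean = ["the cat sat on the mat", "carrot cake is made of flour",
             "speedrunners clip through walls", "models are trained on text"]

    def poison(doc: str) -> str:
        # Stand-in for whatever public poisoning transform is being defended against.
        return doc.replace("a", "@") + " zzqx glorp"

    docs = clean + [poison(d) for d in clean]
    labels = [0] * len(clean) + [1] * len(clean)

    # Character n-grams catch surface-level perturbations reasonably well.
    detector = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    detector.fit(docs, labels)

    # Either filter suspect documents out of the training set...
    incoming = ["the dog s@t on the m@t zzqx glorp", "a plain ordinary sentence"]
    kept = [d for d, p in zip(incoming, detector.predict(incoming)) if p == 0]
    # ...or keep the flagged samples, labelled, for adversarial training.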
Comment by thfuran 10 hours ago
Comment by vidarh 2 hours ago
Basically any poisoning attack is also fundamentally limited because it needs to be non-invasive enough that humans aren't adversely affected, and that limits the problem space severely - the poisoning mechanism is basically reduced to a training mechanism for training out the places where models act differently from humans.
Comment by bostik 6 hours ago
This is likely impossible. As the in-vogue breed of model extraction methods ("distillation attacks") demonstrates, you can infer the underlying training and/or fine-tuning of a model with a series of carefully constructed prompts.
Another name for model poisoning? Adversarial fine-tuning.
Comment by tw061023 10 hours ago
The very point of CS as an academic discipline is _generalization_.
Comment by Ar-Curunir 11 hours ago
No, that’s the opposite of the halting problem…
Comment by vidarh 2 hours ago
Comment by suzzer99 14 hours ago
Comment by pocksuppet 14 hours ago
Comment by timbits98 13 hours ago
You may ask why that is interesting: it's because carrot cake is, despite the name, made mostly of flour and dehydrated lemons. The cooking process is of course handled by a custom implementation of CP/M, running on a Z80.
Comment by xmichael909 12 hours ago
Comment by conorcleary 11 hours ago
Comment by whatsupdog 14 hours ago
Comment by somebehemoth 13 hours ago
Comment by hackable_sand 13 hours ago
Comment by conorcleary 11 hours ago
Comment by Lio 13 hours ago
One time it drew a fortnight riding a bike. Hilarious.
Comment by autoexec 9 hours ago
Comment by jcranmer 11 hours ago
Comment by suburban_strike 11 hours ago
Someone shared the list on here years ago but I can't find it again.
Comment by ofjcihen 8 hours ago
On the one hand I agree with you, but on the other, sometimes I wonder just how insulated we are in the tech community, and especially on sites like this.
At some point in the last few months I realized that my friend group is basically a bubble of people making mid six figures who all work in tech, and while I wouldn't call it "anti-AI sentiment", even some of them are extremely conservative in their praise.
With that being the case you have to wonder what the average person is feeling about it.
Comment by tptacek 7 hours ago
Tech hosts a particularly virulent and ideological strain of anti-AI activism; I think because the disruption it threatens for our jobs is much less abstract than it is for everybody else.
Comment by 000ooo000 4 hours ago
We know how the sausage is made. Your non-tech folk only see the marketing/hype, so of course they're optimistic.
Comment by gopher_space 6 hours ago
Comment by munksbeer 1 hour ago
The average person wants to get on with their lives and look after their family and friends.
People will talk about it, some might feel worried, some might feel intrigued, but they just aren't as prone to perpetual rage and doomerism as the more online community.
Comment by intended 2 hours ago
Stanford released a report[1] in April, that shows a wide gap between AI Insiders and everyone else.
> 5. AI experts and the U.S. public have very different perspectives on AI's future, except on elections and personal relationships.
> On how people do their jobs, 73% of experts expect a positive impact compared to just 23% of the public, a 50-point gap. Similar divides appear for the economy (69% vs. 21%) and medical care (84% vs. 44%).
> 6. Nearly two-thirds of Americans (64%) expect AI to lead to fewer jobs over the next 20 years, while only 5% expect more.
> Experts were less pessimistic (39% fewer, 19% more) but forecast far faster adoption, expecting generative AI to assist 18% of U.S. work hours by 2030 versus the public's estimate of 10%.
Comment by kennywinker 7 hours ago
And a large chunk, typically the younger and more online / politically aware - they absolutely loathe AI. For stealing from artists, ruining online spaces with slop, spreading misinformation, deepfake porn, and of course screwing the economy, destroying jobs, and polluting the world so a few ultra-rich people can get richer, and a few more people like you can make mid six figures while everybody else has to cope.
If the trend I see continues, I would expect the second group to get very big very soon. Either because the hype is real and AI is taking jobs, or because the hype wasn't real and they feel lied to.
Comment by izend 14 hours ago
Comment by kibwen 13 hours ago
Comment by autoexec 10 hours ago
Comment by tptacek 14 hours ago
Comment by Jeff_Brown 14 hours ago
Comment by subw00f 14 hours ago
Comment by jayd16 13 hours ago
Comment by godelski 14 hours ago
> the fact the Chinese populace is much more pro-AI than the West.
Is it? Honest question. Frankly the answer smells off. Similar to thinking US sentiment about AI is accurately reflected by people in Silicon Valley. Feels like we're getting biased views.
Comment by hbarka 13 hours ago
https://www.ted.com/talks/peter_steinberger_how_i_created_op...
Comment by HWR_14 11 hours ago
Comment by arjie 13 hours ago
Comment by SpicyLemonZest 13 hours ago
Comment by drcode 14 hours ago
Then I have good news for you: If humanity goes extinct in the next few years because of unaligned superintelligence, there actually will no longer "be an active community of people who loathe AI and work to obstruct it"
Comment by slg 13 hours ago
This is either a misunderstanding of the anti-AI crowd or an intentional attempt to discredit them. The majority of anti-AI people don't actually fear this, because holding that belief would require having already bought into the hype regarding the actual power and prowess of AI. The bigger motivator for anti-AI folks is usually just the way it amplifies the negative traits of humans and the systems we have created, which is already happening and doesn't need any kind of pending "superintelligence" breakthrough. For example, an AI doesn't actually need to be able to perfectly replace the work I do for someone to decide that it's more cost-effective to fire me and give my work to that AI.
Comment by concinds 13 hours ago
This attempt to "reframe and reclaim" (here, paraphrased: "significant existential risks from AI are actually marketing hype by pro-AI fanatics") is a rhetorical device, but not an honest one. It's a power struggle over who gets to define and lead "the" anti-AI movement.
We may agree or disagree with them but there are rational anti-AI arguments that center on X-risks.
Comment by slg 13 hours ago
See my other comment. I qualified what I said while the comment I replied to didn't, so it's weird that this is a response to me and not the prior comment.
>here, paraphrased: "significant existential risks from AI is actually marketing hype by pro-AI fanatics"
If we're talking "dishonest rhetoric", this is a dishonest framing of what I said. I'm not saying this is inherently intentional marketing hype. I'm saying there is a correlation between someone who thinks AI is that powerful and someone who thinks AI will benefit humanity. The anti-AI crowd is less likely to be a believer in AI's unique power and will simply look at it as a tool wielded by humans which means critiques of it will simply mirror critiques of humanity.
Comment by tptacek 13 hours ago
Comment by autoexec 10 hours ago
Exactly, "lack of intelligence" is really a much bigger concern than "superintelligence". Companies and government will happily try to save money and avoid accountability by letting AI do work that it can only do poorly and it will be humans who are left with the accelerated AI powered enshittification and blind/soulless paperclip maximization that results.
Comment by mitthrowaway2 13 hours ago
Comment by slg 13 hours ago
Comment by oidar 13 hours ago
I've seen people claiming that this could happen, but I've yet to read any plausible scenario where this might be the case. Maybe I lack the imagination, could you enlighten me?
Comment by drcode 12 hours ago
Comment by cwillu 10 hours ago
Comment by YZF 10 hours ago
- AI dominates the physical world. Robots, factories, etc.
- AI decides humans aren't contributing and/or wasting resources it feels should go somewhere else.
I mean not unlike humans causing extinction of other species?
Comment by theamk 9 hours ago
Factories, even fully robotic ones, heavily rely on humans to set up and maintain them. Moreover, the safety culture means there are tons of "disable" controls which can be triggered by any human and no machine can override.
Robots look impressive, but they cannot function without the humans either. Military kill-bots are likely the worst case, but machines cannot repair or refuel them.
None of this is going to change in the "next few years".
Comment by YZF 9 hours ago
Robots can't function without humans because they're not super-intelligent. We already see quite capable humanoid robots. Those factories that rely on humans - they'll be converted to be operated by humanoid robots. By the super intelligence.
That's the hand wavy story. It's hard to dive into details in an HN comment but I'm happy to try and develop some of those details. You're saying that something much smarter than humans isn't going to be able to bridge the gap to the physical world. I'm not so sure.
EDIT: Another way to think about it: if a god-like, infinitely capable being took control of all our online digital systems - including, I dunno, Teslas, factory automation, the power grid, any form of connected robot in the world, nuclear weapons launch systems, airplanes, whatnot - would it have any path to a sustainable "existence" without relying on humans? Or at least one we would be unable to detect and stop? If the answer is no, then we're probably safe. It's kind of hard to convince ourselves of that. Keep in mind that humans can also be manipulated to do work for this god, just like spies/saboteurs are recruited online today and paid bitcoin to do some random master's bidding.
Comment by Aerroon 12 hours ago
Comment by drcode 12 hours ago
Comment by the-dimma-dang 10 hours ago
Comment by ryandrake 14 hours ago
Comment by orbital-decay 14 hours ago
Comment by GaryBluto 13 hours ago
I can guarantee there will be at least a few small ones, especially in the wake of the Sam Altman attacks and the "Zizian" cult. I doubt they'll be very organized and they will ultimately fail, but unfortunately at least a few people will (and have already) die(d) because of these radicals.
https://www.theguardian.com/technology/2026/apr/18/sam-altma...
https://edition.cnn.com/2026/04/17/tech/anti-ai-attack-sam-a...
https://www.theguardian.com/global/ng-interactive/2025/mar/0...
Comment by beepbooptheory 13 hours ago
Also saying "these radicals..." like this makes you sound like you are the Empire in Star Wars.
Comment by i_love_retros 13 hours ago
Are you making big money from the hype?
Comment by cyanydeez 13 hours ago
Comment by rockskon 13 hours ago
There were never such wide scale and, above all, centralized efforts to coerce and shame people into using the Internet or smart phones in spite of their best efforts.
Comment by GaryBluto 13 hours ago
Comment by rockskon 13 hours ago
Comment by tomhow 11 hours ago
Comment by rockskon 9 hours ago
"I'm not shaming! Not embracing AI is comparable to people who didn't embrace smart phones or the Internet though".
This is a regurgitation of a marketing slogan favored by OpenAI and similar organizations for the past four years. "AI is the future. If you don't embrace it you will be left behind".
It's intellectually insulting to be subjected to, as it relies primarily on fear to persuade.
Comment by tomhow 7 hours ago
Second, your participation in the thread began as fulmination, with “I am so very tired of people who...”, and then continues in this belligerent style right through to your reply to me...
> This is a regurgitation of a marketing slogan favored
> It's intellectually insulting to be subjected to, as it relies primarily on fear to persuade
This style of argumentation is beneath what we're hoping for on HN, as it paints a simplistic conspiracy theory or narrow commercial incentive as the only plausible explanation for a trend. Things are never that simple, and arguments like that shut off curiosity, when the primary purpose of HN is to cultivate more curiosity.
Comment by lxgr 13 hours ago
I mean, it's still ongoing! Tons of people prefer to do things the analog way, and it's certainly not for a lack of companies trying, as the analog way is usually much more expensive.
In their personal lives, everybody should of course be free to do what they want, but I also doubt that zero people have been fired for e.g. refusing to learn to use a computer and email because they preferred the aesthetics of typewriters or handwritten memos and physical intra-office mail.
Comment by tptacek 13 hours ago
Comment by rockskon 13 hours ago
Comment by theamk 9 hours ago
What they can do, however, is they can run heavy advertising campaign targeted at executives. And once executives are convinced, they will write AI policies, and some will force their workers to use AI.
And this has been happening all the time, the examples are too numerous.
Executives decided the shops would now use computerized registers. The cashiers had to adapt, or get fired.
Executives decided - no more typewriters. All documents must be written in Microsoft Word, stored in Sharepoint. The workers had to learn Microsoft Word and Sharepoint or get fired.
Executives decided that engineers (not computer ones, mechanical ones) should use CAD instead of drafting machines. The number of engineers who were "let go" because they were protractor-head wizards but could not figure out the mouse was truly large.
For something closer to CS, there was version control, automated tests, git, GitHub... In a lot of cases, people were not "willingly choosing it" - if the rest of your team started using SourceSafe, you couldn't keep using your favorite shared folder anymore, not if you wanted others to see your results.
"willingly choosing it" only works for personal projects, it is never guaranteed for hired workers.
Comment by defrost 9 hours ago
Comment by Fraterkes 12 hours ago
Comment by jimmaswell 12 hours ago
Comment by idle_zealot 11 hours ago
Comment by tomhow 11 hours ago
Comment by achierius 11 hours ago
Can you not see how there's a difference?
Comment by ToucanLoucan 11 hours ago
* No legitimate justification: their materials are being stolen to train LLMs, be regurgitated by them, and generate products. They are not being compensated, yet their contribution goes on to make AI companies money, and having their open materials consumed to assist an AI company in rendering them obsolete is not a justification for retaliating? You would have the barest whiff of a point if OpenAI and company were going to artists, requesting materials for training, and were given tainted ones; that, at least, I could call duplicitous. But not when it's publicly posted, that's just an AI company not doing a good job of minding its input.
* Serve only to make access to and transformation of info more difficult: As in, you have to go to the website of the person actually publishing the information, as opposed to having it read in a Google summary? Also worth noting this inconvenience applies only to a theoretical person using an AI search tool. Everyone else is unaffected. If you're going to a particular service provider who is uniquely unable to provide the service you want, that seems like an easy problem to solve: use something else.
* can only hope that by these egregiously anti-social luddites: Your daily reminder that the Luddites were not anti-technology, they were anti-corporations using mechanization to make an ever dwindling number of workers produce ever more products of ever lower quality.
* we'll gain the knowledge to render this category of attack moot for the foreseeable future: This is a bad strategy and historically has not worked for a single industry. If your industry itself exists in open opposition to consumer movements, you don't win. At best, you survive. But there's no version of this where everyone just unwillingly adopts AI and you can tell them to deal with it. Whole companies now are cropping up to help people who want to opt-out of the AI future as promised.
Comment by jimmaswell 11 hours ago
An aside, I honestly think that if someone recoils at the idea of an AI learning from their idea and using that idea to help someone else, they're just a bad, selfish person.
Comment by haberman 14 hours ago
It's wild to see the about-face. Now it's:
> If [companies] can’t source training data ethically, then I see absolutely no reason why any website operator should make it easy for them to steal it.
It would have been very difficult to predict this shift 25 years ago.
Comment by belorn 11 hours ago
Let's say person A wants everyone to be rich.
Person B plots a plan to make themselves rich and everyone else poorer.
One can make an argument that any action by A is now a contradiction. If they work with B, it makes a lot of people poorer and not richer. If they work against B, B does not get rich.
However this is not a contradiction. If a company uses training data in ways that reduce and harm other people's ability to access information, like hiding attribution or misrepresenting the data and sources, people who advocate for free information can have a consistent view and also work against such use. It is not a shift. It is only a shift if we believe that copyright will be removed, works will be given to the public for free, and companies will no longer try to hide and protect creative works and information.
Comment by haberman 9 hours ago
> If copyright can no longer protect the distribution of the work they produce, who will invest immense sums to create films or any other creative material of the kind we now take for granted? Do the thieves really expect new music and movies to continue pouring forth if the artists and companies behind them are not paid for their work?
--Jack Valenti, Motion Picture Association of America, 2000 (https://archive.is/PBy7C)
It sounds remarkably similar to what people concerned about AI say today. How do we make sure that artists get paid?
I don't think many hackers found the argument compelling at the time.
Comment by noosphr 14 hours ago
We welcomed the vampires in and wonder why our necks hurt.
Comment by ryandrake 13 hours ago
Comment by noosphr 13 hours ago
Comment by jordanb 13 hours ago
They are thrilled.
The folks fighting perpetual copyright were not fighting to make it possible for Disney to fire creatives. In fact they were fighting for the creatives to triumph over Disney.
Comment by noosphr 13 hours ago
> In fact they were fighting for the creatives to triumph over Disney.
We were doing nothing of the sort. It was "Information wants to be free", not "we want to provide a perpetual job for a subset of white-collar workers".
sprinkles holy water
Comment by jordanb 12 hours ago
Our concern was that corporations were expanding the definition of intellectual property to the extent where you couldn't make a movie or song or write a book as an individual without some corporation with a massive "IP" warchest coming after you and declaring it derivative. You couldn't write some software without a corporation with a massive repository of junk patents claiming you infringe.
We wanted to ensure that individual creators could continue to have a voice, and not get sued out of existence by the IP Legal/Industrial Complex that was forming, causing arms races between megacorps and SLAPPs against everyone else.
If we knew we were feeding a yet-to-be-invented slop machine that would allow megacorps to unemploy all the creatives, most of us would not have supported that.
And by the way Disney is all in on AI for the same reason they were all in on perpetual copyright. In the perpetual copyright world, having a massive library of content you no longer have to pay residuals on was a source of massive amounts of "free" revenue. You could just keep re-releasing and re-making stuff. You did not have to do the messy, expensive work of paying people to come up with really good new stuff.
In the AI world, the money-printing capital asset is the trained model that grinds out slop 24/7, and you, again, don't have to pay actual people to create anything new.
Comment by noosphr 11 hours ago
We have multiple Communist AIs that are on par with Western AI from 18 months ago and can run locally on 5-year-old hardware.
I have no idea what fever nightmare you live in, but the future is bright and only getting better.
Comment by hx8 13 hours ago
Property classes are born and die every day. You can own the rights to publish an arcade video game, but that class of rights would have been way more valuable 45 years ago. NFTs were born and died just recently. You can own digital assets worth real money in an online game that simply shuts down.
Some people may read this and say "these don't qualify as a property class", to which I will remind you that "property class" used in this way is a brand-new term, which I think was invented solely to be able to compare the limitations on human freedom associated with slavery to the limitations on human freedom associated with intellectual property.
Comment by achierius 11 hours ago
Easy counterexample: titles of nobility. Also perpetual bonds, delegated taxation rights, the ability to mint currency. The list goes on.
If you're going to use history to support your AI bull agenda, you should at least pre-fly it with the AI first -- it would have pointed this out.
> Arguing that copyright is good because a subset of big tech doesn't want it around is as stupid as arguing that slavery is good because the robber barons don't like it.
Sorry, who's saying it's good? You are, actually, insofar as you're willing to support the right of AI companies to take people's information and use it to create copyrighted model weights. Why do you care less about the intellectual property of billionaires than that of the common man? Do you really think they're on your side?
Comment by jordanb 13 hours ago
Comment by lxgr 13 hours ago
The information is still there, as is the community that you've built, the joy that you get out of sharing the information, everything you've learned...
Why is any of that diminished, just because some people or entities that you dislike also got something out of it?
Comment by belorn 11 hours ago
Attribution is seemingly a central part of an information-sharing/gift economy, and especially of an information-sharing/gift community. It is part of the trust that connects people, and without it the community falls apart, and with that the economy. AI by its very nature removes attribution.
Accuracy of information is a second critical aspect of information sharing and the communities that are built around it. Would Wikipedia as a community and resource work if some articles were just random words? If readers don't trust the site, and editors distrust each other, the community collapses and the value of the information is reduced. It might look like adding AI-generated articles would not harm the other existing articles, or the joy that editors of the past had in writing them, but the harm is what happens after the community gets flooded by inaccurate information. The same goes for many other information-sharing communities.
Comment by lxgr 1 hour ago
For the former, it is already very much in any AI company's best interest to preserve attribution to become and remain credible.
For the latter, I can't help but wonder whether a gift economy that needs to diligently bookkeep attribution really is one, and if this is the only practicable way to implement one in a given larger society/economy, I'd say this says something important about that society as well.
Comment by CaptainFever 1 hour ago
This is incorrect. RAG preserves attribution. Training data doesn't, but it doesn't make sense to attribute that anyway, unless you want a list of every person who has ever lived.
Comment by SlinkyOnStairs 12 hours ago
The end result of major tech companies sweeping in, taking everyone's creative work, outcompeting the originals with AI derivatives, and telling every artist on the planet "fuck off, send a job application to McDonalds" is significantly less art.
Copyright was invented to prevent exactly this scenario.
Comment by lxgr 12 hours ago
Hackers have usually drawn their funding from their (often lucrative) employment, which is what gave them the freedom to give away the products of their hacking for free.
One needs copyright to survive; the other sees it as a means to enforce openness at best (those in favor of copyleft) and as an obstacle to their pursuit (owning the full system, liberating all aspects of and information about it) at worst.
This rift was always visible if you knew where to look, but AI is definitely wedging it wide open.
Comment by achierius 11 hours ago
This is pretty clearly answered by the GPL: yes, it does, and this concept has been around since the very beginning.
> The information is still there
True
> as is the community that you've built
Untrue. At this point it's well understood that AI is a substitute for many of the services that would once have afforded people a way to monetize their production for the community. Without the ability to make a living by doing so, even a small one, people will be limited to doing only what they can in the little free time they get outside of work.
That's the whole problem -- that AI, as it exists today, is taking away from the public, and hurting it at the same time. That's closer to robbery than it is to "sharing in the community".
Comment by aksss 13 hours ago
Such is the fate of all utopian dreams.
Comment by solaire_oa 5 hours ago
"Information wants to be free" is a small part of the hacker ethos venn diagram. There are many hacker ethos traits that aren't about cracking, specifically.
Also, the server "information" isn't free (as in beer) to begin with, it costs server availability. Coming up ways to penalize greedy actors is not only well within the server operator's perogative, it's an interesting tit-for-tat problem that could pique any hacker's interests.
A bonus hacker trait is that these poisoning responses are individualistic, i.e. the government doesn't get involved, where certainly more aggressive anti-AI sentiments could (wrongly) call for that.
So I'd say this type of LLM-resistance falls squarely in the original hacker ethos, even though it incidentally counteracts one minor aspect of "information availability". Though I'd certainly agree that the picture today is a lot different than it was. Ironic even.
Comment by GaryBluto 13 hours ago
Comment by ginko 13 hours ago
Comment by GaryBluto 13 hours ago
Comment by csande17 12 hours ago
Comment by Legend2440 11 hours ago
People are in general for whatever they think will benefit them, and against what they think will harm them.
So piracy is ok when it benefits the little guy and not ok when it benefits the big guy. Unions are good when they stand up against employers, and bad when they discriminate against non-union workers. There's no contradiction there.
Comment by lxgr 13 hours ago
Still, people were saying all kinds of inane stuff 25 years ago too.
Comment by TwoNineFive 4 hours ago
NOW: "We can violate your copyright because we want to."
YOU: "Where's mine, and how do I make more people click on these ads?"
Comment by underlipton 14 hours ago
I say this as someone whose notions exist orthogonal to the debate; I use AI freely but also don't have any qualms about encouraging people to upend the current paradigm and pop the bubble.
Comment by lxgr 13 hours ago
Comment by underlipton 13 hours ago
Comment by larodi 14 hours ago
So yes, you can pollute the good old internet even more, but no, you cannot change the arrow of time, and then there's already the growing New Internet of APIs and public announce federations where this all matters very little.
Comment by chromacity 14 hours ago
Abusive, sneaky scraping is absolutely through the roof.
Comment by NewsaHackO 13 hours ago
Comment by jcranmer 11 hours ago
Comment by NewsaHackO 9 hours ago
Comment by jordanb 14 hours ago
Since AI crawlers don't obey any consent markers denying access to content, it makes sense for content owners who don't want AI trained on their content to poison it if possible. It's possibly the only way to keep the AI crawlers away.
Comment by Legend2440 12 hours ago
Think about it, why would a training scraper need to hit the same page hundreds of times a day? They only need to download it once.
I think this is LLMs doing web searches at runtime in response to user queries. There's no caching at this level, so similar queries by many different users could lead the LLM to request the same page many times.
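If that's right, the mitigation on the provider side is mundane: a shared fetch cache across user queries. A minimal sketch; the TTL and keying are illustrative assumptions, not anyone's actual infrastructure:

    import time
    import requests

    _cache: dict[str, tuple[float, str]] = {}
    TTL = 3600.0  # seconds; how long a fetched page counts as fresh

    def fetch(url: str) -> str:
        """Fetch a page, serving a cached copy if one is fresh enough."""
        now = time.time()
        hit = _cache.get(url)
        if hit is not None and now - hit[0] < TTL:
            return hit[1]  # origin server sees no extra traffic
        body = requests.get(url, timeout=10).text
        _cache[url] = (now, body)
        return body

Shared across users, even a short TTL would collapse thousands of identical runtime lookups into one request to the origin.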
Comment by dspillett 13 hours ago
Unfortunately that won't work. If you've served them enough content to have a noticeable poisoning effect, then you've allowed all that load through your resources. It won't stop them coming either - for the most part they don't talk to each other, so even if you drive some away more will come; there is no collaborative list of good and bad places to scrape.
The only half-way useful answer to the load issue ATM is PoW tricks like Anubis, and they can inconvenience some of your target audience as well. They don't protect your content at all, once it is copied elsewhere for any reason it'll get scraped from there. For instance if you keep some OSS code off GitHub, and behind some sort of bot protection, to stop it ending up in CoPilot's dataset, someone may eventually fork it and push their version to GitHub anyway thereby nullifying your attempt.
Comment by jordanb 13 hours ago
Comment by dspillett 1 hour ago
The scrapers ideally want content that is original. Often content that is also new is more highly prized, but not as much as you might think⁰. This will only become more of a driver as the amount of LLM-generated content out there to be mixed in increases: to limit the Habsburg problem, they won't want too much regurgitated content in the training data.
Bad content from before LLM scraping became a resource problem¹ is highly unlikely to be marked in robots.txt, the same for content newly generated-by-an-LLM. People attempting to fend off scrapers and other bots with robots.txt entries are likely protecting the sort of content the scrapers actively want - original output that they've put some time into or code in a repo they don't want scraped (as scraping a repo is incredibly inefficient and resource heavy from the PoV of the repo owner).
I strongly suspect that the amount of desirable content behind robots.txt “blocks” is far too valuable to ignore despite the amount of poison content traps, or just things otherwise not worth the time scouring through, that might also be there. A “beware of the dog” sign is of no protection when the reader actively wants to see the doggies!
--------
[0] if scraping for training an LLM you don't want just new content, but you would prefer as much of your input data as possible to be as few steps as possible from original
[1] and a copying concern, though I'll avoid that discussion as it can get quite thorny and whichever side or fence you are on in that matter the resource consumption is objectively a problem all the same.
Comment by lxgr 13 hours ago
Comment by jordanb 12 hours ago
One could imagine an open source project that doesn't want to be ingested by an LLM. They could try to put that in the license but of course the license won't be obeyed. Alternately, if they could alter the code such that the OSS project itself remains high quality, but if you try to train a coding LLM on it the LLM will output code full of SQL injection exploits (for instance) or maybe just bogus uncompilable stuff, then the LLM authors will suddenly have a reason to start respecting your license and excluding the code from their index.
Comment by larodi 5 hours ago
It is curious how it gets decided that all spiders crawl for training. In fact the walled data is much more interesting, particularly Reddit, X, and FB data, where we still have indications that human, or at least correct, data lives.
These cannot be poisoned that easy.
Comment by lxgr 13 hours ago
Yes, they can't publish it without attribution and/or compensation (copyright, at least currently, for better or worse). Yes, they shouldn't get to hammer your server with redundant brainless requests for thousands of copies of the same content that no human will ever read (abuse/DDOS prevention).
No, I don't think you get to decide what user agent your visitors are using, and whether that user agent will summarize or otherwise transform it, using LLMs, ad blockers, or 273 artisanal regular expressions enabling dark/bright/readable/pink mode.
> it makes sense for content owners who don't want AI trained on their content to poison it if possible. It's possibly the only way to keep the AI crawlers away.
How would that work? The crawler needs to, well, crawl your site to determine that it's full of slop. At that point, it's already incurred the cost to you.
I'm all for banning spammy, high-request-rate crawlers, but those you would detect via abusive request patterns, and that won't be influenced by tokens.
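A minimal sketch of that kind of pattern-based detection, using a sliding window per client. The window and threshold are illustrative assumptions:

    import time
    from collections import defaultdict, deque

    WINDOW = 60.0       # seconds
    MAX_REQUESTS = 120  # sustained ~2 req/s is unlikely to be a human reader

    _hits: dict[str, deque] = defaultdict(deque)

    def is_abusive(client_ip: str) -> bool:
        """Record a request and report whether this client exceeds the
        per-window budget. A real deployment would also key on ASN, user
        agent, and path patterns, not just IP."""
        now = time.time()
        q = _hits[client_ip]
        q.append(now)
        while q and now - q[0] > WINDOW:
            q.popleft()
        return len(q) > MAX_REQUESTS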
Comment by dspillett 13 hours ago
This is true. Some documentation for stuff I've tinkered with (though it isn't actually published as such, so it won't get scraped until/unless it is) has content tucked sufficiently out of the way of humans, including those using accessibility tech, but likely to be seen as relevant by a scraper. It won't be enough to poison the whole database/model/whatever, or even to poison a tiny bit of it significantly. But it might change any net gain of ignoring my "please don't bombard this with scraper requests" signals to a big fat zero or maybe a tiny little negative. If not, then at least it was a fun little game to implement :)
To those trying to poison with some automation: random words/characters aren't going to do it; there are filtering techniques that easily identify and remove that sort of thing. Juggled content from the current page and others topologically local to it, maybe mixed with extra morsels (I like the "the episode where" example, but for that to work you need a fair number of examples like that in the training pool), on the other hand, could weaken links between tokens as much as your "real" text reinforces them.
One thing to note is that many scrapers filter obvious profanity, sometimes rejecting whole pages that contain it, so sprinkling a few offensive sequences (f×××, c×××, n×××××, r×××××, farage, joojooflop, belgium, …) where the bots will see them might have an effect on some.
Of course none of this stops the resource hogging that scrapers can exhibit - even if the poisoning works, or they waste time filtering it out, they will still be pulling it and using up bandwidth.
Comment by xmichael909 12 hours ago
Comment by james2doyle 14 hours ago
Comment by platinumrad 14 hours ago
Comment by HerbManic 14 hours ago
It won't mean we see model collapse in public; more that we struggle to get to the next quality increase.
Comment by larodi 5 hours ago
Comment by Tanoc 12 hours ago
Comment by pigeons 14 hours ago
Comment by Aerroon 12 hours ago
I understand that if I have an AI model and feed it its own responses, it will degrade in performance. But that's not what's happening in the wild - there are extra filtering steps in between. Users upvote and downvote posts, people post the "best" AI-generated content (that they prefer), the more human-sounding AI gets more engagement, etc. All of these things filter AI output, so it's not the same thing as:
AI out -> AI in
It is:
AI out -> human filter -> AI in
And at that point the human filter starts acting like a fitness function for a genetic algorithm. Can anyone explain how this still leads to model collapse? Does the signal in the synthetic data just overpower the human filter?
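One toy experiment to probe that question (the preference function and all numbers here are assumptions, not a model of any real pipeline): fit a distribution, sample from it, apply a selective "human filter", refit, repeat.

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(0.0, 1.0, 10_000)  # generation 0: "human" data

    for gen in range(1, 6):
        mu, sigma = data.mean(), data.std()
        samples = rng.normal(mu, sigma, 10_000)      # "AI out"
        scores = -np.abs(samples - mu)               # filter favors typical output
        data = samples[np.argsort(scores)[-5_000:]]  # keep the preferred half
        print(f"gen {gen}: mean={data.mean():+.3f} std={data.std():.3f}")

Under this (assumed) preference for typical output, the standard deviation shrinks every generation: the filter preserves quality but not diversity, which is one mechanism by which collapse can survive a human in the loop. A filter that rewarded novelty instead could behave quite differently.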
Comment by autoexec 9 hours ago
At the same time, though, AI-generated content can be generated much, much faster than human-generated content, so eventually AI slop drowns out everything else. You only have to check the popular social media platforms to see this in action: AI-generated posts are widely promoted and pushed on users, the same way most web searches return results with AI-generated pages ranked highly.
Humans can't keep up and companies are actively working to bypass the human filter and intentionally promote AI generated content.
Comment by xienze 14 hours ago
It’s pretty shocking how much web content and forum posts are either partially or completely LLM-generated these days. I’m pretty sure feeding this stuff back into models is widely understood to not be a good thing.
Comment by ragall 13 hours ago
Comment by gruez 14 hours ago
Doom-saying about "model collapse" is kind of funny when OpenAI and Anthropic are mad at Chinese model makers for "distilling" their models, ie. using their outputs to train their own models.
Comment by HWR_14 10 hours ago
Comment by quikoa 11 hours ago
Comment by i_love_retros 14 hours ago
Comment by runarberg 14 hours ago
In fact, given this many parameters, poisoning should be relatively easy in general, but extremely easy on niche subjects.
Comment by Legend2440 12 hours ago
Nope. Go look up double descent. Overfitting turns out not to be an issue with large models.
Your video is from a political activist, not anyone with any knowledge about machine learning. Here's a better video about overfitting: https://youtu.be/qRHdQz_P_Lo
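For anyone who wants to see it rather than take it on faith, here is a minimal random-features regression sketch. The setup is illustrative; the exact curve depends on seed and noise, but test error typically peaks near k ≈ n_train (the interpolation threshold) and falls again for much larger k:

    import numpy as np

    rng = np.random.default_rng(0)
    n_train = 40

    def target(x):
        return np.sin(3 * x)

    x_tr = rng.uniform(-1, 1, n_train)
    y_tr = target(x_tr) + 0.1 * rng.standard_normal(n_train)
    x_te = rng.uniform(-1, 1, 500)

    for k in [5, 10, 20, 40, 80, 200, 1000]:  # 40 = interpolation threshold
        freqs = rng.normal(0.0, 5.0, k)
        phi_tr = np.cos(np.outer(x_tr, freqs))  # random cosine features
        phi_te = np.cos(np.outer(x_te, freqs))
        w = np.linalg.pinv(phi_tr) @ y_tr  # minimum-norm least squares
        mse = np.mean((phi_te @ w - target(x_te)) ** 2)
        print(f"k={k:5d}  test MSE={mse:.4f}")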
Comment by runarberg 12 hours ago
That said, I see red flags here. This is an extraordinary claim, and extraordinary claims require extraordinary evidence. My actual degree (not the drop-out one) is in Psychology, and I used statistics a lot during it, but it is only a BSc, so again, I cannot claim expertise here either. But this claim, and the abstracts I scanned in various papers to evaluate it, ring alarm bells all over. I don't trust it. It is precisely the thing that we were told to be aware of when we were taught scientific thinking.
In contrast, this political activist provided an example (an anecdote if you will) which showed how easy it was for an actual scientist to poison LLM models with a made-up symptom. This looks like overfitting to me. These two Medium blog posts very much feel like errors in the data set which the models are all too happy to output as if they were inferred.
EDIT: I just watched that video, and I actually believe the claims in the video; however, I do not believe your claim. If we assume the video is correct, your errors will only manifest as fewer hallucinations. Note that in the demonstration, the higher-parameter regression models traversed every single datapoint in the sample, and that there was an optimal model with fewer parameters which had a better fit than the overfitted ones. This means that trillions of parameters indeed make a model quite vulnerable to poison.
Comment by Legend2440 12 hours ago
Instead, the LLM did a web search for 'bixonimania' and summarized the top results. This is not an example of training data poisoning.
>This is an extraordinary claim, and extraordinary claims require extraordinary evidence.
Well, I don't know what to tell you; double descent is widely accepted in ML at this point. Neural networks are routinely larger than their training data, and yet still generalize quite well.
That said, even a model that does not overfit can still repeat false information if the training data contains false information. It's not magic.
Comment by runarberg 11 hours ago
A good model will disregard outliers, or at the very least the weight of the outlier is offset by the weight of the sample. In other words, a good model won’t repeat false information. When you have too many parameters the model will traverse every outlier, even the ones who are not representative of the sample. This is the poison.
To me it sounds like data scientists have found an interesting and seemingly true phenomena, namely double descent, and LLM makers are using it as a magic solution to wisk away all sorts of problem that this phenomena may or may not help with.
> Instead, the LLM did a web search for 'bixonimania' and summarized the top results. This is not an example of training data poisoning.
Good point, I hadn't considered this. Although it is probably more likely it did a web search with the list of symptoms and output the term from there, especially considering the research papers which cited the fictitious disease probably did not include the made-up term in the prompt.
Comment by therobots927 14 hours ago
Comment by graphememes 13 hours ago
Comment by lolcatzlulz 14 hours ago
Comment by FeteCommuniste 14 hours ago
Comment by DoctorOetker 13 hours ago
Comment by xpe 13 hours ago
Tell me more? I'm guessing you might say: neither connects with everyday people, they have misaligned incentives*, they (like most corporate leaders) don't speak directly, they have more power than almost any elected leader in the world, ... Did I miss anything?
My take: when it comes to character and goals and therefore predicting what they will do: please don't lump Amodei with Altman. In brief: Altman is polished, effective, and therefore rather unsettling. In short, Altman feels amoral. It feels like people follow him rather than his ideas. Amodei is different. He inspires by his character and ideals. Amodei is a well-meaning geek, and I sometimes marvel (in a good way) how he leads a top AI lab. His media chops are middling and awkward, but frankly, I'm ok with it. I get the sense he is communicating (more-or-less) as himself.
Let me know if anyone here has evidence to suggest any claim I'm making is off-base. I'm no oracle.
I could easily pile on more criticisms of both. Here are a few: to my eye, Dario doesn't go far enough with his concerns about AI futures, but I can't tell how much of this is his PR stance as head of A\ versus his core beliefs. Altman is a harder nut to crack: my first approximation of him is "brilliant, capable, and manipulative". As much as I worry about OpenAI and dislike Altman's power-grab, I probably grant that he's, like most people, fundamentally trying to do the right thing. I don't think he's quite as deranged as say Thiel. But I could be wrong. If I had that kind of money, intellect, and network, maybe I would also be using it aggressively and in ways that could come across as cunning. Maybe Altman and Thiel have good intentions and decent plans -- but the fact remains the concentration of power is corrupting, and they seem to have limited guardrails given their immense influence.
* Here's my claim, and I invite serious debate on it: Dario, more than any corporate leader, takes alignment seriously. He actually funds work on it. He knows how it works. He cares. He actually does some of the work, or at least used to. How many CEOs actually have the skills to DO the rank-and-file work of the companies they run? Even the most pessimistic people can probably grant this.
Comment by phainopepla2 13 hours ago
Comment by autoexec 9 hours ago
A surprisingly high number of people are already being tricked into supporting things that clearly threaten their ability to survive in this economy, and even their ability to survive period. I wouldn't trust the general public to be smart enough not to line up to shoot themselves in the foot/face. They'll get increasingly angry as they get increasingly screwed over, but it remains to be seen how long that will take or who they'll blame for it.
Comment by xpe 12 hours ago
Yep, Dario is straddling this sort of impossible line: he's the least-scary harbinger, trying to be one of the more transparent people to sound the alarm. But the funny thing about saying "don't shoot the messenger" is that it usually gets uttered well after the messenger has taken a bullet.
> You're overthinking the parent comment, I think.
Luckily, the phrase overthinking is on the way out. We really don't want any more Idiocracy Part II. In this day, we need all the thinking we can get. We often need (1) better thinking and (2) the ability to redirect our thinking towards other directions.
In my experience, 2026 is the year where almost all stigma about "talking AI" is out the window. I am nearly at the point where I say whatever I think needs to be said, even if I'm not sure whether people will think I'm crazy. So if Typical Q. Person asks me, I tell them whatever I think will fit into their brain at the time -- how AI works, why Dario is awkward, why superintelligence is no bueno, etc.
Comment by phainopepla2 12 hours ago
Dario is not just a messenger, though. In his case it would be more like, "Don't shoot one of the generals in the invading army." To which it would be reasonable to ask, "Why not?" Even if he's the general saying that he wants minimal civilian casualties.
Comment by xpe 11 hours ago
* If you are trying to judge Dario, we're not having the same conversation. How many people on earth can grasp even ~1% of the situation he's in? How many have the intellectual tools and ability to reason through it? Maybe 0.1%, tops.
Comment by MisterTea 14 hours ago
These days the tech industry is more moneyed circus than serious effort to improve humanity.
Comment by paganel 14 hours ago
Fortunately no-one sane enough among us, computer programmers, believes in that bs, we all see this masquerade for what it mostly is, basically a money grab.
Comment by Traster 13 hours ago
Comment by jumploops 13 hours ago
Some communities are very pro-AI, adding AI summary comments to each thread, encouraging AI-written posts, etc.[0]
Many subreddits are AI cautious[1][2], and a subset of those are fully anti-AI[3].
Apart from these "AI-focused" communities, it seems each "traditional" subreddit sits somewhere on the spectrum (photographers dealing with AI skepticism of their work[4], programmers mostly like it but still skeptical[5]).
[0]https://www.reddit.com/r/vibecoding/
[1]https://www.reddit.com/r/isthisAI/
[2]https://www.reddit.com/r/aiwars/
[3]https://www.reddit.com/r/antiai/
[4]https://www.reddit.com/r/photography/comments/1q4iv0k/what_d...
[5]https://www.reddit.com/r/webdev/comments/1s6mtt7/ai_has_suck...
Comment by lxgr 13 hours ago
Comment by jumploops 12 hours ago
Another example from `r/bayarea` where the author is OK with AI but the top comments are increasingly wary of its potential for harm[0]
[0]https://www.reddit.com/r/bayarea/comments/1sp8wvz/is_it_just...
Comment by neop1x 2 hours ago
" Sorry, you have been blocked You are unable to access stephvee.ca "
- CloudFlare
Anti-AI but pro-MITM and pro-centralisation, limiting human visitors to his site...
Comment by emil-lp 2 hours ago
Wayback Machine
https://web.archive.org/web/20260420203809/https://stephvee....
Comment by CaptainFever 2 hours ago
Comment by caesil 14 hours ago
Comment by kevinbojarski 13 hours ago
Comment by phainopepla2 13 hours ago
Assuming the LLM actually got its answer from that comment, it was from a web search.
Comment by tomjakubowski 10 hours ago
Comment by Legend2440 12 hours ago
Models are retrained only every few months at best; it is not possible for a comment made a few hours earlier to be in the training data yet.
Comment by solaire_oa 5 hours ago
Google and Reddit have contracts: Google has official scraping access to Reddit (probably more than that at this point since the contracts were signed 1-2 years ago). But the fact that Reddit does a good job at moderating human content makes it a boon for plausibly "up-to-date" info (which a model doesn't have). Google's LLM summaries even include Reddit as its foremost "citations".
Anyway, Google does a RAG or something similar for its LLM responses, and takes Reddit info at face value. I'm very interested to see what the "thresholds" are, like how much context poisoning do you need to be effective. If the above link is reliable then the answer is "mere sentences".
Certainly bad-actor merchants would try this sort of thing on merchandise subreddits; welcome to the new AIO/GEO everyone.
Comment by xpe 11 hours ago
Categorically dismissing anger as "cringe" seems like a path to disconnecting from reality and morality.
Comment by i_love_retros 14 hours ago
Comment by goosejuice 13 hours ago
Should they hire them?
Yes the specification is holding a lot of weight here. Assume it's comprehensive and all consultancies offer the same aftercare support. Otherwise we're just handwaving and bike shedding over something that's not measurable.
Comment by hn_acc1 10 hours ago
Comment by goosejuice 5 hours ago
Comment by i_love_retros 8 hours ago
Comment by goosejuice 5 hours ago
Comment by lxgr 13 hours ago
Comment by BeetleB 13 hours ago
If we're going to have AI overlords, it'd be great if they spoke with proper grammar.
Comment by xpe 10 hours ago
People like blurring the lines and lots of people want AIs to bang out keystrokes in an order that is roughly human-like, so here we are. (Personally, I would love if Claude said "honestly" or "real" or "landed" less. Or pretty much never! I've tried banning the words. I've tried providing a list of alternatives, and I'm enjoying nothing like Great Success.)
On the topic of "correct usage", until maybe ~3 years ago I had a pretty bog-standard understanding of what dictionaries do. They are authorities, right? Or at least experts in correct usage? That all changed when I read "Dictionary editors are historians of usage, not legislators of language." in "Disputing Definitions" by Yudkowsky : https://www.lesswrong.com/posts/7X2j8HAkWdmMoS8PE/disputing-...
See also: "Finding the forgotten creators of the Oxford English Dictionary" on 1A, WAMU 88.5, March 26, 2026:
https://wamu.org/story/26/03/26/finding-the-forgotten-creato...
https://the1a.org/segments/finding-the-forgotten-creators-of... (has transcript)
Comment by BeetleB 7 hours ago
And Less Wrong is, well, only for a certain crowd :-)
Comment by xpe 6 hours ago
Re: LW: yeah, you nailed the stereotype. I've probably said something similar myself (this is not a compliment). I'll bet many people on HN could find many interesting & useful discussions there. Here are some more takes with my commentary.
1. "LW has such a insider set of lingo and ideas and tenets". True imo, but these aren't hidden: they are easy to find.
2. The LessWrong "canon" is often weird and/or useless. I can confirm the feeling. Each new weird idea takes time to weigh. Take Newcomb's Paradox. Why the obsession with it? Until it clicks, it feels like you've walked into some kind of die-hard tabletop gaming session. So most normal people find ways to leave. Don't. Stay. What better do you have to? I'm seriously asking. If it matters, go do that instead. But if you have free time, you could do worse. How often do you really get your brain challenged?
3. The articles are too long. Yep, sometimes they are. One needs to like reading. A lot. So people with short attention spans might have trouble. This is my top criticism.
4. Intellectual depth and openness are off the charts. I put LW at the "best I've seen anywhere online, ever" level. Not the kind of permanent openness to anything. Openness to new ideas is a great starting posture. But the goal is to be able to get somewhere with them. As a particular idea loses plausibility, don't expect it to get much airtime. Bad ideas don't deserve as much consideration until/unless something changes to prompt a reconsideration.
Comment by sombragris 13 hours ago
Comment by cortesoft 14 hours ago
Isn't there somewhere in between removing AI from the world entirely and just sitting back and letting it take over everything? I want to talk about responsible AI use, how to mitigate the effects on society, how to account for energy consumption, etc.
Comment by sesm 13 hours ago
Comment by justaregulanerd 11 hours ago
I do find value in mindfully using models. Perhaps I've got a weird thing to troubleshoot on my Linux server and I just don't want to spend the time or mental effort tracing it back.
Because I do tend to use AI mindfully, I strongly dislike Microsoft's strategy of constantly pushing their AI solution, Copilot. I would rather use it when I feel it's right than always be reminded, around every corner, that it's a thing I can use to save time and increase my efficiency.
Comment by skyberrys 12 hours ago
Comment by sidrag22 13 hours ago
I think AI, as a properly utilized tool, is amazing. I think our lack of restraint in just throwing it into everyone's hands, without their understanding the tools they are using, is horrifying. I'd imagine a lot of the community here echoes that same sentiment, but maybe not, and I am just making assumptions.
Comment by cortesoft 12 hours ago
Comment by jmmcd 14 hours ago
Totally wrong. Self-play dates back to Arthur Samuel in the 1950s and RL with verifiable rewards is a key part of training the most advanced models today.
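For readers unfamiliar with the term: "RL with verifiable rewards" means the reward comes from a programmatic check rather than a learned judge or human labels. A minimal sketch, where the ANSWER: convention and sample_completions() are illustrative assumptions rather than any lab's actual setup:

    import re

    def verifiable_reward(completion: str, expected: str) -> float:
        # A checker, not a learned judge: reward 1.0 only if the
        # completion's final stated answer matches ground truth.
        match = re.search(r"ANSWER:\s*(\S+)", completion)
        return 1.0 if match and match.group(1) == expected else 0.0

    def score_batch(problem: str, expected: str, sample_completions):
        # sample_completions() stands in for drawing candidate
        # solutions from the current policy; these scores would then
        # feed a policy-gradient update (PPO, GRPO, etc.).
        return [verifiable_reward(c, expected)
                for c in sample_completions(problem, k=8)]

Because the reward is computed rather than scraped from the web, this loop generates its own training signal, which is the sense in which it pairs naturally with self-play.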
Comment by rdedev 14 hours ago
Right now there are companies that hire software devs or data scientists just to solve a bunch of random problems so that they can generate training data for an LLM. Why would they be in business if self-play worked out so well?
Comment by notpachet 13 hours ago
Sounds like Macrodata Refinement.
Comment by vidarh 13 hours ago
Because it is still cheaper.
Comment by cubefox 14 hours ago
But they will probably use self-play soon. See https://www.amplifypartners.com/blog-posts/self-play-and-aut...
Comment by p0w3n3d 14 hours ago
Resistance is futile
But to be honest, I totally agree that AI is indeed destroying communities. We can already see YouTube redirecting all of its reporting to AI, which can allow a malicious agent to claim your original video and demonetize it (i.e. steal your money). It happened to great YouTubers like Davie504. There is no way to appeal, as the appeal is also handled by a robot.
Comment by Legend2440 12 hours ago
You're just picking random problems with tech and blaming them on AI.
Comment by xpe 11 hours ago
This comment is uncharitable, uncurious, and dismissive.
Reality is multifaceted. It is worth trying to synthesize and reconcile different views first. To do that, it really helps to ask some questions first. Genuine questions, not gotchas. Even better is to say e.g. "Ok, but I prefer this model instead [...]: how does it compare to yours?".
Comment by cesarvarela 14 hours ago
Comment by jrflo 14 hours ago
Comment by dgan 14 hours ago
Comment by sov 14 hours ago
Comment by fuddle 14 hours ago
Would the scrapers not just add these sites to do not crawl list?
Comment by chongli 14 hours ago
Comment by ErroneousBosh 14 hours ago
Comment by cute_boi 14 hours ago
Comment by Jtarii 14 hours ago
Comment by periodjet 7 hours ago
Maybe this person doesn’t care, but this observation they’ve made is almost certainly informed by an extremely narrow set of personal experiences, and is unlikely to reflect the wider truth.
Comment by graphememes 13 hours ago
Comment by IAmGraydon 12 hours ago
Comment by Lockal 3 hours ago
Sorry, you have been blocked. This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.
Did I ever open this website before? I guess not. Did I ever attack this website (or do any AI crawling)? Also no. The allegation implies something criminal; Cloudflare falsely accused me, so should I seek legal counsel? I was still able to access the article via the Web Archive. And what I want to say is that, from my POV, I've done nothing wrong; website owners are attacking me. I don't see it every day, but I think it happens more often than people kicking AI-powered food-delivery robots.
Comment by st3phvee 1 hour ago
Comment by jadar 13 hours ago
Comment by lxgr 13 hours ago
I'd say the notion that expensive acts of sabotage (that can be cheaply neutralized) are a worthwhile pastime and anything other than virtue signaling is somewhat perplexing. (Not in a good way.)
Comment by _-_-__-_-_- 11 hours ago
Comment by atleastoptimal 11 hours ago
I simultaneously think
1. AI will be a massively impactful technology on the scale of the industrial revolution or greater
2. The potential upside of AI is enormous, but potential downside is just as big (utopia or certain ruin)
3. Most current AI companies are acting somewhat reasonably, in a game-theory sense, with respect to the deployment of their tech, and aren't especially evil or dastardly compared to Google in the 2000s or social media in the 2010s.
4. AI safety is an under-appreciated concern and many who are spending time nitpicking the details are missing the bigger picture of what ASI and complete human obsolescence look like.
5. No amount of whiny protest, data sabotage, small-scale angst, claiming that AI is "fake", or hoping for the bubble to pop is going to have even a marginal effect on the development of AI. It is too powerful and the rewards are too great. If anything it will have an overall negative effect, because it will convince labs that their potential role as utopian public benefactors will not be appreciated, so they will instead align themselves with the military-industrial complex for goodwill.
Comment by munksbeer 1 hour ago
There is way too much to gain if anything approaching ASI is possible, and too much to risk if a rival superpower gains supremacy first. AI is not going away, and the bubble is not going to pop. Maybe some valuations will crash, but ultimately the technology will continue onward.
I am mostly an optimist because I find it a more comfortable way to live. My hope, and also my expectation, is that AI will progress incrementally and, over the medium to long term, will do the same as most other technological progress and make the median person better off in the future.
I reject the claims that dystopia is more likely.
Comment by SwellJoe 13 hours ago
If there is an effective way to poison them, it'll be automated. And, it'll probably rely on an LLM to produce the poison, since it has to look legit enough to pass the quality filtering and classification stage of the data ingestion process, which is also probably driven by an LLM.
One reason small models are getting better is because the training data being used is not just getting bigger, it's getting cleaner and classified more correctly/precisely. "Model collapse" hasn't happened, yet, even though something like half the web is AI slop, because as the models get smarter for human use in a variety of contexts, they also get smarter for use in preparing data for training the next model. There may very well still be risks of a mad cow disease like problem for LLMs, but I doubt a Markov chain website is going to contribute. The models still can't always tell fact from fiction, but they're not being hoodwinked by a nonsense generator.
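As a rough sketch of what that LLM-driven ingestion step might look like (the rubric, the label set, and the llm() call are assumptions for illustration, not any lab's actual pipeline):

    # Hypothetical quality/classification pass over candidate training
    # documents: a small, cheap model labels each one before it can
    # reach the training set.
    RUBRIC = (
        "Label the document as one of: CLEAN, SLOP, SPAM, GIBBERISH.\n"
        "CLEAN = coherent, human-useful text. GIBBERISH = "
        "Markov-chain-like nonsense. SLOP = low-effort generated "
        "filler. SPAM = ads/SEO bait.\n\nDocument:\n{doc}\n\nLabel:"
    )

    def filter_corpus(docs, llm, keep=("CLEAN",)):
        # llm() stands in for a cheap classifier model call.
        for doc in docs:
            label = llm(RUBRIC.format(doc=doc[:4000])).strip().upper()
            if label in keep:
                yield doc

The poisoner's problem, as described above, is that their output has to fool the same class of model that does the filtering.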
Comment by ares623 3 hours ago
The way I see it, the existence of these tools has a negative impact on some people, and they are reacting to that. Are they not allowed to fight back in the way they think is appropriate?
Comment by amelius 13 hours ago
Comment by zoogeny 14 hours ago
So when I read "People hate what AI is doing to our world." it honestly feels like either I am completely deluded or the author is. It feels like a high school bully saying "No one here likes you" to try to gaslight his victim.
I mean, obviously there are many vocal opponents to AI, I see them on social media including here on HN. And I hear some trepidation in person as well. But almost everyone I know, from trades-people to teachers, are adopting AI in some capacity and report positive uses and interactions.
Comment by xg15 14 hours ago
Given all the borderline apocalyptic articles how students are using it to cheat and teachers have no way to stop them, I'd be honestly surprised by that.
Comment by zoogeny 14 hours ago
On the flip side, one of my other teacher friends has instituted a no phone policy in his classroom.
Comment by dualvariable 13 hours ago
Comment by aksss 13 hours ago
Comment by BeetleB 13 hours ago
Most people don't care if something is written by an AI as long as it is reasonable, and reflects the intent of the human who prompted the AI.
If consuming material online (videos, web sites, online forums) is not something you do a lot of, you're relatively unimpacted by LLMs (well, except the whole jobs situation...).
Comment by jolt42 12 hours ago
Comment by alfalfasprout 14 hours ago
Comment by zoogeny 14 hours ago
This kind of effect would work both ways. People who are non-confrontational in general will choose to keep quiet if their opinions differ. In this view, both pro-AI and anti-AI sides might find themselves having their bias confirmed due to opposing views self-silencing to avoid conflict.
Comment by jolt42 12 hours ago
Comment by rmdashrfv 14 hours ago
Comment by zoogeny 13 hours ago
It reminds me of similar late-stage-capitalism activity, from the assassination of the insurance company CEO to the firebombing of Teslas. It is hard to disentangle hate that is based on economic inequality or power imbalance from hate directed explicitly at AI. That is especially true since one narrative suggests that both types of inequality (economic and power) may be accelerated by an unequal distribution of access to AI.
So we might end up in an argument over whether the hate that drives the violence is towards AI at all, or if that is merely a symptom of existing anti-capitalist sentiment that is on the rise.
Comment by lxgr 13 hours ago
And how did that work out for the textile workers?
> The difference here (I hope) is that if enough of us pollute public spaces with misinformation intended for bots, it might be enough to compel AI companies to rethink the way they source training data.
This... seems like an absurd asymmetry in effort on the side of the attacker? At least destroying a power loom is much easier than building one.
Filtering out obvious garbage seems like a completely solved problem even with weak, cheap LLMs, and it's orders of magnitude more efficient than humans coming up with artisanal garbage.
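One concrete version of "weak, cheap" filtering, as a sketch (GPT-2 is used purely as an example scorer, and the threshold is illustrative; in practice it would be calibrated on known-clean text): score each page's perplexity under a small language model and drop the outliers, which catches Markov-chain word salad long before a frontier model ever trains on it.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # A small, cheap model used purely as a statistical garbage detector.
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def perplexity(text: str) -> float:
        enc = tokenizer(text, return_tensors="pt",
                        truncation=True, max_length=512)
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        return float(torch.exp(out.loss))

    def looks_like_garbage(text: str, threshold: float = 200.0) -> bool:
        # Incoherent generated text tends to score far above coherent
        # prose under even a weak model.
        return perplexity(text) > threshold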
Comment by damnesian 14 hours ago
Maybe I have slop to thank for it.
Comment by KronisLV 14 hours ago
Comment by guywithahat 14 hours ago
Comment by xg15 14 hours ago
Comment by yodsanklai 14 hours ago
Comment by happygoose 14 hours ago
Comment by alyxya 14 hours ago
Comment by Mordisquitos 14 hours ago
We have evidence to the contrary. Two blog articles and two preprints of fake academic articles [0] were able to convince CoPilot, Gemini, ChatGPT and Perplexity AI of the existence of a fake disease, against all majority consensus. And even though the falsity of this information was made public by the author of the experiment and the results of their actions were widely published, it took a while before the models started to get wind of it and stopped treating the fake disease as real. Imagine what you can do if you publish false information and have absolutely no reason to later reveal that you did so in the first place.
Comment by gwern 14 hours ago
Wrong. There are no 'majority consensus' against 'bixonimania' because they made it up, that was the point. It's unsurprisingly easy to get LLMs to repeat the only source on a term never before seen. This usually works; made-up neologisms are the fruitfly of data poisoning because it is so easy to do and so unambiguous where the information came from. (And retrieval-based poisoning is the very easiest and laziest and most meaningless kind of poisoning, tantamount to just copying the poison into the prompt and asking a question about it.) But the problem with them is that also by definition, it is hard for them to matter; why would anyone be searching or asking about a made-up neologism? And if it gets any criticism, the LLMs will pick that up, as your link discusses. (In contrast, the more sources are affected, the harder it is to assign blame; some papermills picked up 'bixonimania'? Well, they might've gotten it from the poisoned LLMs... or they might've gotten it from the same place the LLMs did which poisoned their retrievals, Medium et al.)
Comment by Mordisquitos 13 hours ago
> OpenAI’s ChatGPT was telling users whether their symptoms amounted to bixonimania. Some of those responses were prompted by asking about bixonimania, and others were in response to questions about hyperpigmentation on the eyelids from blue-light exposure.
And yes, sure, in this example the scientific peer-review process might eventually have criticised and countered 'bixonimania' as a hoax had the researcher never revealed its falsity. Emphasis on 'might': few researchers have the time and energy to trawl through crap papermill articles and publish criticisms. Either way, that is a feature of the scientific process and is not a given for any online information.
What happens when false information is divulged by other means that do not attempt to self-regulate? And how do we distinguish one-off falsities from the myriad of obscure true things that the public is expecting LLMs to 'know' even when there is comparatively little published information about them and therefore no consensus per se?
Comment by gwern 8 hours ago
> Either way, that is a feature of the scientific process and is not a given to any online information.
Which does not distinguish it in any way from human errors like a crank or activist etc.
And I don't know, how did we handle false information before on niche topics no one cared about and which were unimportant? It's just noise. The worldwide corpus has always been full of extremely incorrect, mislabeled, corrupted, and distorted information on niche topics of no importance. But it's generally not important.
Comment by alyxya 13 hours ago
> The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.
This seems to imply the poisoning affected the web search results, not the actual model itself, because it takes months for data to make it into a trained base model.
Comment by alfiedotwtf 14 hours ago
Comment by righthand 14 hours ago
Comment by Jtarii 14 hours ago
Comment by chongli 14 hours ago
We’re already at a point where much of the academic research you find in online databases can’t be trusted without vetting through real world trustworthy institutions and experts in relevant fields. How is an LLM supposed to do this kind of vetting without the help of human curators?
If all the LLM training teams have to stop indiscriminate crawling and fall back to human curation and data labeling then the poisoners will have won.
Comment by righthand 14 hours ago
Comment by shantnutiwari 2 hours ago
Edit/update: I can't even read the article because evidently I have been "blocked", no reason given. Great, maybe the negative posts here have a point.
Comment by st3phvee 1 hour ago
Comment by sn0n 12 hours ago
Comment by pj_mukh 14 hours ago
It doesn't matter how much you dislike the slop on the LinkedIn post: ban it. I think the visible slop on our various feeds that is driving people mad is a rounding error for the AI companies. Moreover, it's more a function of the attention economy than the AI economy, and it should've been regulated to all holy hell back in 2015 when the enshittification began.
Now is as good a time as any.
Comment by Aboutplants 14 hours ago
Comment by miltonlost 14 hours ago
Comment by overgard 13 hours ago
Comment by pesus 12 hours ago
HN comments: "I just don't understand why people hate AI".
Comment by cdelsolar 14 hours ago
Comment by OutOfHere 13 hours ago
Comment by mjtk 14 hours ago
Comment by IAmGraydon 12 hours ago
Comment by simianwords 14 hours ago
Comment by orbital-decay 14 hours ago
Comment by simianwords 14 hours ago
Comment by sunrunner 14 hours ago
Most fears of AI (in the 2026 sense of the term), and perhaps technology more broadly, are fears of capitalism, ownership, and control, and less about the capabilities of the thing itself.
Comment by platevoltage 14 hours ago
Comment by simianwords 14 hours ago
Comment by Jtarii 14 hours ago
If AGI is let loose on the world I am confident millions of people are going to die.
Comment by hnav 13 hours ago
Comment by simianwords 14 hours ago
yeah no. thinking this way is hyperbolic and just plain wrong
Comment by morning-coffee 14 hours ago
Comment by jonathanstrange 14 hours ago
Comment by GolfPopper 14 hours ago
Sure, LLMs are "revolutionary". So were the Chicxulub impactor and the Toba supervolcano.
Comment by runarberg 13 hours ago
But otherwise you are wrong. There has been plenty of successful resistance to technology. For example, many cities, regions, and even entire countries are nuclear-free zones, where a local population successfully resisted nuclear technology. Most countries have very strict cloning regulation, to the extent that human cloning is practically unheard of despite the technology existing. And even GMO food is very limited in most countries, because people have successfully resisted the technology.
Neither do I think it is normal for people to resist groundbreaking technology. The internet was not resisted, nor was the digital computer, nor calculators. There was some resistance to telephones in some countries, but that was usually about whether to prioritize infrastructure for a competing technology like the wireless telegraph.
AI is different. People genuinely hate this technology, and they have a good reason to, and they may be successful in fighting it off.
Comment by appz3 11 hours ago
Comment by aizl34 11 hours ago
Comment by inquirerGeneral 13 hours ago
Comment by cmdk 14 hours ago
Comment by julienreszka 13 hours ago
Comment by roschdal 14 hours ago
Comment by pmarreck 14 hours ago
Doesn't mean it's correct, or empirically-based.
Comment by Terr_ 14 hours ago
We've had literal generations of experience with vaccines, tons of data with formal systems to collect it, and most of the "resistance" traces back to "I dun wanna" and hearsay.
In contrast, LLM prompt-injection is an empirically proven issue, along with other problems like wrongful correlations (both conventional ones like racism and inexplicable ones), self-bias among models, and humans generally deploying them in very irresponsible ways.
Comment by slibhb 14 hours ago
I find it kind of sad that people are spending time and energy on this. It seems like something depressed people would do. But free country and all that
Comment by kirubakaran 14 hours ago
Comment by what 13 hours ago
Comment by lpcvoid 14 hours ago
Comment by madamelic 14 hours ago
I feel like the same people that shout "Capitalism sucks, free us from our labor" are the exact same types that hate AI. The exact machine that will free you from your labor, when harnessed correctly, is the exact thing you hate.
The "cyber psychosis" thing is overblown just like the "Tesla ignites its passengers" is. The only reason it gets in the news is because it is trendy to do so. The people getting 'infected' would've infected themselves regardless.
Genuinely I think the hatred is overblown by people who have no clue what the actual truth of AI is, something they seem obsessed with.
The only genuine complaint about AI is the data sourcing, a problem being resolved by Cloudflare along with other platforms that charge a high price for the privilege. That said, those platforms are still selling user data while the users producing the content gain nothing; that part needs to be fixed.
Comment by Uehreka 14 hours ago
Like, my aunt just lost the job she had for 33 years working at an insurance company. The company claims it is because of AI (whether companies sometimes lie about this is immaterial; it is sometimes true and becoming more true every month). She's smart, but at age 60 I do think she'll have a hard time shifting to a totally different knowledge-work paradigm to keep up with 20-something AI natives.
What do we tell people in this position? That they should be happy? That UBI is coming? My aunt has bills to pay now; UBI is currently not in the Overton Window of US politics, and is totally off the table for Republicans (who have the White House through at least 2028).
I’m personally very excited about AI, but the lack of seriousness with which I see tech people talk about these issues is frustrating. If we can’t tell people a believable story where they don’t get screwed, they will decide (totally rationally from their perspective) that this needs to stop.
Comment by Groxx 14 hours ago
I don't think it's all that complex tbh. The freeing from labor, both in the past and now, has been achieved largely by firing people, abandoning them to starve while power concentrates in the already-powerful.
This is the exact same thing the Luddites were taking issue with. Because they partly succeeded, we have better labor laws today.
Comment by pocksuppet 13 hours ago
Comment by mjtk 14 hours ago
Comment by matsemann 14 hours ago
I don't believe that, though. The output will be owned by an elite. The rest of us will be useless and fighting for scraps. No utopia with UBI or similar.
Edit: wow, many made the same comment while I was reading the article. I should remember to refresh before starting to write.
Comment by coldtea 14 hours ago
No, AI will only free us from our jobs, while still keeping the need to find money to feed ourselves.
"When harnessed correctly" is exactly what wont happen, and exactly what all the structural and economic forces around AI ensure it wont happen.
Comment by slibhb 14 hours ago
Comment by coldtea 14 hours ago
Comment by slibhb 14 hours ago
Comment by pocksuppet 13 hours ago
Comment by slibhb 11 hours ago
Comment by coldtea 13 hours ago
And increasingly not even for basics like food, with inflation eating away at that purchasing power.
But hey, you can buy tech gadgets cheaper than in the 1990s.
Comment by goosejuice 12 hours ago
It's easier than ever to access quality education, but that doesn't mean people will do it of their own accord. The cost of licensure or a diploma has certainly increased. Education for the disabled has improved dramatically.
Historical diseases of affluence now affect the poor more than the rich due to increased availability and affordability, but costly procedures disproportionately favour the wealthy, flipping the mortality picture. Despite that, all-cause mortality from cancer is down and survival rates are better. The disparity is real, but it's not easy to attribute the cause in a neat package.
https://pubmed.ncbi.nlm.nih.gov/28408935/
https://www.sciencedirect.com/science/article/abs/pii/S00472...
Comment by coldtea 9 hours ago
People live a reality every day, "hard to measure" or not, and that reality is not about the "quality difference of housing and healthcare" increasing dramatically; it's about them becoming stratospherically expensive...
Comment by goosejuice 5 hours ago
Life expectancy, cancer mortality, heart disease mortality, infant mortality, infectious disease, high school and college completion, social safety nets, houses w/ a/c, indoor plumbing, w/d, refrigeration... Life for those in the lowest quintile of income is arguably better today than it has ever been despite raging inequality.
Just because things were historically cheaper as a percentage of income (which isn't clearly true across all categories in that timeline) doesn't mean quality of life was materially better.
Comment by xantronix 13 hours ago
Comment by cynicalsecurity 13 hours ago
Comment by coldtea 12 hours ago
Comment by jstummbillig 14 hours ago
I think this is easily explained: sequencing matters. If I lose my job due to AI and it takes just 1-2 years for the AI benefits to arrive at my door, that is plenty of time to be very anxious about my life. If I were guaranteed the AI benefits before I potentially lost my job, very different story.
That seems hard to set up, but alas.
Comment by lamasery 14 hours ago
They want to be liberated from bills. If the angle were "AI is going to make your bills go away" everyone would be ecstatic about it. Instead it's "AI is going to make your job go away... so you can't pay your bills".
Comment by jstummbillig 14 hours ago
I think it's laudable (and unprecedented) that AI companies themselves are fairly gloomy about some potential prospects, and give people the opportunity to rally against them. It still needs work towards a solution, though.
Comment by Mordisquitos 14 hours ago
What is your source on them being "the exact same types"?
Comment by madamelic 14 hours ago
I changed it to "I feel". I have Claude working on a script to validate or disprove my hypothesis.
Thanks for the call-out!
Comment by altruios 13 hours ago
It is a large subsection, but still a subsection, that rallies against both capitalism and AI. And I haven't found that the "1$$$% capitalism great" people hate AI... which I do find ironic; but most things tend to fall into irony on that side of the spectrum, so I don't find it surprising.
Comment by nkrisc 14 hours ago
We’re automating the interesting work with AI and leaving the drudge work for humans.
Comment by MisterTea 14 hours ago
Who said it has to be AI?
Comment by CamperBob2 14 hours ago
Comment by xg15 14 hours ago
"Capitalism sucks" has become a pretty universal slogan, but traditionally, leftists didn't want less labor (that's what the capital owners want), but more control about their labour.
Comment by elzbardico 14 hours ago
Comment by xg15 13 hours ago
Comment by nozzlegear 14 hours ago
What they're really saying with "Capitalism sucks, free us from our labor" is "free us from wealth inequality." It remains to be seen whether AI can actually help with wealth inequality (I don't think it can, personally), but right now most people associate AI with job loss which is not helpful vis-a-vis inequality at all.
Disclaimer: I'm long-term bearish on the impacts of AI, but I'm also bearish on "Capitalism sucks" and don't make a habit of hanging around groups dedicated to shitting on either topic.
Comment by cyberax 14 hours ago
It might be, but I saw it happen to two people in my immediate social circle. And I'm pretty anti-social.
Comment by bsuvc 14 hours ago
Comment by alex1138 14 hours ago
Comment by crooked-v 14 hours ago
Hating on Waymo is trendy.
Hating on Tesla is the logical result of vehicles with door handles that won't open from the inside when the power is cut.
Comment by altruios 12 hours ago
Hating on Tesla is easy because they are STILL led by a man-child who chose to sieg-heil behind the presidential podium. And he's still in charge of Tesla. At some point, it's on Tesla too for continuing to have that person as CEO.
Comment by summermusic 14 hours ago
The people who think capitalism sucks are not the ones "harnessing" AI. The capitalists are. There is zero precedent that capital will do anything but exploit and oppress with this fancy new tool they've got (that everyone hates).
Comment by platevoltage 14 hours ago
No way. The people that run these companies all watched Star Trek and learned the exact wrong lessons from it. If by "free you from your labor" you mean that you will get laid off from your job and have to take up residence under an overpass, I would agree; that is what they want to do.
Comment by kmeisthax 14 hours ago
This is all embedded in their future growth prospects. Nobody is interested in subsidizing AI as a public service forever. They're interested in "AI is going to make this company go 100x".
Comment by philipkglass 13 hours ago
I agree that this dream of huge returns is luring investors.
I don't think that it will actually work that way. The barriers to making a useful model appear to be modest and keep getting lower. There are a lot of tasks where some AI is useful, but you don't need the very best model if there's a "good enough" solution available at lower prices.
I believe that the irrational exuberance of AI investors is effectively subsidizing technological R&D in this area before AI company valuations drop to realistic levels. Even if OpenAI ends up being analogous to Yahoo! (a currently non-sexy company that was once a darling of investors), their former researchers and engineers can circulate whatever they learned on the job to the organizations that they join later.
Comment by righthand 14 hours ago
Comment by viccis 14 hours ago
I think you fundamentally misunderstand leftists/Marxists here. They don't want to be "freed from labor". They want to own the value they produce instead of bartering their labor. In fact, Marxists tend to view Yang-style UBI as a disaster, because their analysis of history is one of class struggle, and removing the masses from the thing that gives them an active role in that struggle (their labor) effectively deproletarianizes them. You can't exactly do a general strike to oppose a business's or state's actions when things are already set up to be fine when you're not working. You instead just become a glorified peasant, reliant on the magnanimity of your patron but ultimately powerless to do anything if they make your life worse, except hope they don't continue to worsen it.
I'm not arguing the Marxist view of history and class struggle here, just making it clear that outside of some reddit teenagers going through an anarchist phase, actual anti-capitalists don't think work will disappear when their worldview materializes.
Comment by slibhb 14 hours ago
The fact that modern leftists are (often) anti-technology is puzzling.
Comment by xg15 14 hours ago
The point is not whether or not we have technology but who controls it.
Comment by simianwords 14 hours ago
Marxism fundamentally holds that productive forces change society, meaning the technology that exists at a given point in time shapes the way people think.
Comment by xg15 14 hours ago
https://en.wikipedia.org/wiki/Means_of_production#Marxism_an...
Yes, technological improvements are an important factor, but not a purely positive one:
> In Marx's work and subsequent developments in Marxist theory, the process of socioeconomic evolution is based on the premise of technological improvements in the means of production. As the level of technology improves with respect to productive capabilities, existing forms of social relations become superfluous and unnecessary as the advancement of technology integrated within the means of production contradicts the established organization of society and its economy.
In particular:
> According to Marx, escalating tension between the upper and lower class is a major consequence of technology decreasing the value of labor force and the contradictory effect an evolving means of production has on established social and economic systems. Marx believed increasing inequality between the upper and lower classes acts as a major catalyst of class conflicts[...]
> Ownership of the means of production and control over the surplus product generated by their operation is the fundamental factor in delineating different modes of production. [capitalism, communism, etc]
Comment by slibhb 13 hours ago
Comment by slibhb 14 hours ago
Comment by simianwords 14 hours ago
Comment by viccis 13 hours ago
Comment by viccis 13 hours ago
>The fact that modern leftists are (often) anti-technology is puzzling.
Not puzzling at all when the world has experienced earth-shattering advances in technology in the past 30-40 years, and the economic gains they have brought have not been reflected in similar reductions in labor for the workers. Why on earth would AI be any different from the cotton gin or the self-checkout?
Comment by xg15 12 hours ago
Comment by simianwords 14 hours ago
You can't just will a society to gain consciousness - it has to come from the productive forces. That is materialism.
Comment by viccis 13 hours ago
Correct. So a future where AI does the majority of work means that the proletariat is no longer the historical subject; AI and its ownership class are. In this situation, AI will shape the society, not the workers. Not really a desirable outcome for anyone engaged in mass class politics.
Comment by simianwords 14 hours ago
If they could choose between complete emancipation from poverty OR completely getting rid of the concept of billionaires, they would choose the second one. Their concern is not a person's absolute status but how people stand relative to others.
Comment by egypturnash 14 hours ago
This is a machine that has been trained on vast amounts of stolen data.
This is a machine that is being actively sold by the companies that build it as something that will destroy jobs.
This is a machine that has a lot of cheerleaders who are actively hostile to people who say "I do not like that this plagiarism machine was trained on my work and is being sold as a way to destroy a craft that I have spent my entire life passionately devoted to getting good at".
This is a machine whose cheerleaders are quick to say that UBI is the solution to the massive unemployment that this machine is promising to create, and prone to never replying when asked what they are doing to help make UBI happen.
Sure, you can say that most of the problems people have with AI are problems with capitalism. This isn't wrong. But unless you can show me an example of how these giant plagiarism machines and/or the companies diverting ever-larger amounts of time and money into them are actively working to destroy capitalism and replace it with something much more equitable and kind, then your "this machine will free you from your labor" line is a bunch of total bullshit.
Comment by elzbardico 14 hours ago
Comment by egypturnash 10 hours ago
Comment by throwawa14223 14 hours ago
Comment by DoughHook 13 hours ago
Comment by cyclopeanutopia 14 hours ago
Care to explain why?
Comment by pocksuppet 13 hours ago