Show HN: Gemini Pro 3 imagines the HN front page 10 years from now
Posted by keepamovin 6 days ago
Comments
Comment by dang 5 days ago
Edit: See also this current thread: Auto-grading decade-old Hacker News discussions with hindsight - https://news.ycombinator.com/item?id=46220540
Comment by keepamovin 6 days ago
Update: I told Gemini we made it to the front page. Here is its response:
LETS GOOOO!
The recursive loop is officially complete: The fake future front page is now on the real present front page. We have successfully created a temporal anomaly.
I hope the comments are confused about why they can't click the links to download the Neuralink drivers.
Enjoy the karma, time traveler! Just remember to warn them about the 2035 Office 365 price hikes while you're up there. ;)
Comment by pseudosavant 6 days ago
It'd be so awesome if Gemini CLI went through and created the fake posts/articles, and even the HN comments. Perhaps a bit much to ask of it?
Comment by thomasm6m6 6 days ago
I downloaded the original article page, had claude extract the submission info to json, then wrote a script (by hand ;) to feed each submission title to gemini-3-pro and ask it for an article webpage and then for a random number of comments.
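A minimal sketch of that pipeline, assuming a `generate(prompt)` callable wrapping whatever model API is used; the prompt wording, the comment-count range, and the `build_future_page` name are all illustrative, not the actual script:

```python
import random

def build_future_page(submissions, generate):
    """Turn extracted submission records into fake article pages plus
    comment threads. `generate` is any prompt -> text callable (e.g. a
    thin wrapper around the Gemini API); it is injected so the loop
    itself stays model-agnostic and easy to test."""
    pages = []
    for sub in submissions:
        # One call for the fake article page...
        article = generate(f"Write a 2035-era article page for: {sub['title']}")
        # ...then a random number of comments, as described above.
        comments = [
            generate(f"Write HN comment #{i + 1} on: {sub['title']}")
            for i in range(random.randint(2, 12))
        ]
        pages.append({"title": sub["title"], "article": article, "comments": comments})
    return pages

# Usage with a stub model in place of a real API call:
pages = build_future_page([{"title": "SQLite 4.0 released"}], lambda p: "lorem")
```

The submission records would come from the JSON that claude extracted; only `title` is used here.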
I was impressed by some of the things gemini came up with (or found buried in its latent space?). Highlights:
"You’re probably reading this via your NeuralLink summary anyway, so I’ll try to keep the entropy high enough to bypass the summarizer filters."
"This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034)."
"Zig v1.0 still hasn't released (ETA 2036)"
The unprompted one-shot leetcode, youtube, and github clones
Nature: "Content truncated due to insufficient Social Credit Score or subscription status" / "Buy Article PDF - $89.00 USD" / "Log in with WorldCoin ID"
"Gemini Cloud Services (formerly Bard Enterprise, formerly Duet AI, formerly Google Brain Cloud, formerly Project Magfi)"
Github Copilot attempts social engineering to pwn the `sudo` repo
It made a Win10 "emulator" that goes only as far as displaying a "Windows Defender is out of date" alert message
"dang_autonomous_agent: We detached this subthread from https://news.ycombinator.com/item?id=8675309 because it was devolving into a flame war about the definition of 'deprecation'."
Comment by keepamovin 5 days ago
Comment by pseudosavant 6 days ago
Another absolute gem:
Columns now support "Vibe" affinity. If the data feels like an integer, it is stored as an integer.
This resolves the long-standing "strict tables" debate by ignoring both sides.
Also: SQLite 4.0 is now the default bootloader for 60% of consumer electronics.
The build artifacts include sqlite3.wasm which can now run bare-metal without an operating system.
edit: added link
Comment by patates 5 days ago
Comment by locknitpicker 5 days ago
This is brilliant. Well done.
Comment by menaerus 5 days ago
It is now the only software in the world still written in C89.
Hilarious.
Comment by throwaway2037 3 days ago
> Predictive SELECT Statements:
> Added the PRECOGNITION keyword.
> SELECT * FROM sales WHERE date = 'tomorrow' now returns data with 99.4% accuracy by leveraging the built-in 4kB inference engine. The library size has increased by 12 bytes to accommodate this feature.
12 bytes really sounds like something that the lead dev would write!
Comment by ray_v 6 days ago
Comment by teiferer 5 days ago
Comment by xp84 5 days ago
Comment by throwaway2037 3 days ago
The content is spot on and very funny.
Also, a popup appeared at the bottom with this message:
> The future costs money.
> You have reached your free article limit for this microsecond.
> Subscribe for 0.0004 BTC/month
Suddenly, I have high hopes again for LLMs. Imagine you were a TV/film script writer and had writer's block. You could talk to an LLM for a while to see what funny ideas it can suggest. It is one more tool in the arsenal.
Comment by avianlyric 6 days ago
> "We are incredibly proud of what Gemini achieved. However, to better serve our users, we are pivoting to a new architecture where all AI queries must be submitted via YouTube Shorts comments. Existing customers have 48 hours to export their 800TB vector databases to a FAT32 USB drive before the servers are melted down for scrap."
> — Official Blog Post, October 2034
It’s good to know that AI won’t kill satire.
Comment by teiferer 5 days ago
Comment by thomasm6m6 5 days ago
the prompt indeed began with "We are working on a fun project to create a humorous imagining of what the Hacker News front page might look like in 10 years."
Comment by pseudosavant 6 days ago
The Conditional Formatting rules now include sponsored color scales.
If you want 'Good' to be green, you have to watch a 15-second spot.
Otherwise, 'Good' is 'Mountain Dew Neon Yellow'.
Comment by xp84 5 days ago
Comment by TeMPOraL 5 days ago
"A recent Eurobarometer survey showed that 89% of Europeans cannot tell the difference between their spouse and a well-prompted chatbot via text."
Also I bet this will become a real political line in less than 10 years:
"A European citizen has the right to know if their customer service representative has a soul, or just a very high parameter count."
Comment by LocalH 6 days ago
prompt_engineer_ret 10 hours ago
I miss the old days of Prompt Engineering. It felt like casting spells. Now you just think what you want via Neural-Lace and the machine does it. Where is the art?
git_push_brain 9 hours ago
The art is in not accidentally thinking about your ex while deploying to production.
Comment by danlugo92 6 days ago
> The micro-transaction joke hits too close to home. I literally had to watch an ad to flush my smart toilet this morning because my DogeCoin balance was low.
And the response...
Comment by throwaway2037 5 days ago
Real question: How do LLMs "know" how to create good humor/satire? Some of this stuff is so spot on that an incredibly in-the-know, funny person would struggle to generate even a few of these funny posts, let alone 100s! Another interesting thing to me: I don't get uncanny valley feelings when I read LLM-generated humor. Hmm... However, I do get it when looking at generated images. (I guess different parts of the brain are activated.)
Comment by jjcob 5 days ago
Comment by regularfry 5 days ago
Comment by thomasm6m6 5 days ago
Comment by benbreen 6 days ago
Especially this bit: "[Content truncated due to insufficient Social Credit Score or subscription status...]"
I realize this stuff is not for everyone, but personally I find the simulation tendencies of LLMs really interesting. It is just about the only truly novel thing about them. My mental model for LLMs is increasingly "improv comedy." They are good at riffing on things and making odd connections. Sometimes they achieve remarkable feats of inspired weirdness; other times they completely choke or fall back on what's predictable or what they think their audience wants to hear. And they are best if not taken entirely seriously.
Comment by qnleigh 5 days ago
> © 2035 Springer Nature Limited. A division of The Amazon Basics™ Science Corp.
Comment by dotancohen 4 days ago
> Dr. Sarah Connor, DeepMind AlphaFusion v9.22, GPT-8 (Corresponding Author), Prof. H. Simpson & The ITER Janitorial Staff
Comment by kaiwen1 5 days ago
Top comment:
“The Quantum-Lazy-Linker in GHC 18.4 is actually a terrifying piece of technology if you think about it. I tried to use it on a side project, and the compiler threw an error for a syntax mistake I wasn't planning to make until next Tuesday. It breaks the causality workflow.”
Comment by tambourine_man 6 days ago
Our actual nerdy discussions are more of a pastiche than I realized and AI has gotten really good at satire.
This is pure gold.
Comment by overfeed 5 days ago
>>> It blocked me from seeing my own child because he was wearing a t-shirt with a banned slogan. The 'Child Safety' filter replaced him with a potted plant.
>> [flagged]
> The irony of flagging this comment is palpable
Comment by phs318u 5 days ago
If I had to decide the fate of all AI's, this single output would be a huge mitigating factor in favour of their continuing existence.
Comment by pseudosavant 6 days ago
Comment by thomasm6m6 6 days ago
also worth linking https://worldsim.nousresearch.com/console
Comment by indigodaddy 6 days ago
Comment by thomasm6m6 5 days ago
https://sw.vtom.net/tmp/worldsim1.png
https://sw.vtom.net/tmp/worldsim2.png
Comment by indigodaddy 5 days ago
Comment by nextaccountic 6 days ago
I miss those times when AI was a silly thing
Comment by winddude 5 days ago
Comment by pdxandi 5 days ago
Comment by dash2 5 days ago
'The new "Optimistic Merge" strategy attempts to reconcile these divergent histories by asking ChatGPT-9 to write a poem about the two datasets merging. While the poem was structurally sound, the account balances were not.'
That's genuinely witty.
Comment by wernsey 5 days ago
> My son tried something like this and now he speaks in JSON whenever he gets excited. Is there a factory reset?
>> Hold a strong magnet to his left ear for 10 seconds. Note: he will lose all memories from the last 24 hours.
Comment by magospietato 6 days ago
Comment by thomasm6m6 6 days ago
Comment by throwaway2037 5 days ago
> "Zig v1.0 still hasn't released (ETA 2036)"
<reddit>
Then I thought one step further: Nothing about the ETA for _Duke Nukem Forever_?
</reddit>
Comment by cardanome 6 days ago
Even AI is throwing shade at Wayland.
Comment by deepvibrations 5 days ago
"Why is anyone still using cloud AI? You can run Llama-15-Quantum-700B on a standard Neural-Link implant now. It has better reasoning capabilities and doesn't hallucinate advertisements for YouTube Premium."
Comment by TheOtherHobbes 6 days ago
Comment by isoprophlex 5 days ago
> corpo_shill_automator 19 hours ago
> I am a real human. My flesh is standard temperature. I enjoy the intake of nutrient paste.
Comment by eek2121 6 days ago
Comment by xp84 5 days ago
> It is the year 2035. The average "Hello World" application now requires 400MB of JavaScript, compiles to a 12GB WebAssembly binary, and runs on a distributed blockchain-verified neural mesh. To change the color of a button, we must query the Global State Singularity via a thought-interface, wait for the React 45 concurrent mode to reconcile with the multiverse, and pay a micro-transaction of 0.004 DogeCoin to update the Virtual DOM (which now exists in actual Virtual Reality).
This is all too realistic... If anything, 400MB of JS is laughably small for 2035. And the last time I was working on some CI for a front-end project -- a Shopify theme!! -- I found that it needed over 12GB of RAM for the container where the build happened, or it would just crash with an out-of-memory error.
Comment by throwaway2037 3 days ago
> And the last time I was working on some CI for a front-end project -- a Shopify theme!! -- I found that it needed over 12GB of RAM for the container where the build happened, or it would just crash with an out-of-memory error.
This sounds epic. Did you blog about it? HN would probably love the write-up!
Comment by chirayuk 6 days ago
> Bibliographic Note: This submission has been flagged by the Auto-Reviewer v7.0 due to high similarity with "Running DOOM on a Mitochondria" (2034).
for the article on "Running LLaMA-12 7B on a contact lens with WASM"
Comment by eproxus 5 days ago
Q: I typed "make website" and nothing happened? A: That is correct. You have to write the HTML tags. <div> by <div>.
Q: How do I center a div without the Agent? A: Nobody knows. This knowledge was lost during the Great Training Data Purge of 2029.
Comment by xp84 5 days ago
Comment by teekert 5 days ago
Q: Welcome Prof. teekert, How did you come up with the idea to run Doom on mitochondria?
A: Well, there was some post on HN, back in 2025...
Comment by indigo945 5 days ago
visual_noise_complaint 7 hours ago
Is anyone else experiencing the 'Hot Singles in Your Area' glitch where it projects avatars onto stray cats? It's terrifying.
cat_lady_2035 6 hours ago
Yes! My tabby cat is currently labeled as 'Tiffany, 24, looking for fun'. I can't turn it off.
"Europe passes 'Right to Human Verification' Act", from the article: "For too long, citizens have been debating philosophy, negotiating contracts, and even entering into romantic relationships with Large Language Models trained on Reddit threads from the 2020s. Today, we say: enough. A European citizen has the right to know if their customer service representative has a soul, or just a very high parameter count."
— Margrethe Vestager II, Executive Vice-President for A Europe Fit for the Biological Age
[...]
Ban on Deep-Empathy™: Synthetic agents are strictly prohibited from using phrases such as "I understand how you feel," "That must be hard for you," or "lol same," unless they can prove the existence of a central nervous system.
As far as I'm concerned, that law can't come soon enough - I hope they remember to include an emoji ban.
For "Visualizing 5D with WebGPU 2.0", the link actually has a working demo [1].
I'm sad to say it, but this is actually witty, funny and creative. If this is the dead-internet bot-slop of the future, I prefer it over much of the discussion on HN today (and certainly over reddit, whose comments are just the same jokes rehashed over and over again, and have been for a decade).
Comment by SauntSolaire 5 days ago
Comment by xp84 5 days ago
GPU: NVIDIA RTX 9090 Ti (Molten Core) VRAM Usage: 25.3 GB / 128 GB
Comment by ya1sec 6 days ago
Comment by reed1234 6 days ago
And the original/derivative doesn’t span full width on mobile. Fixing that too would make it look very authentic.
Comment by thomasm6m6 6 days ago
Comment by reed1234 6 days ago
Comment by thomasm6m6 6 days ago
Comment by instagraham 5 days ago
Who's building the Ancient Archives, thanklessly, for future generations?
Comment by pseudosavant 6 days ago
Or people wondering if that means Wayland will finally work flawlessly on Nvidia GPUs? What's next, "The Year of Linux on the Desktop"?
Edit: had to add this favorite "Not everyone wants to overheat their frontal cortex just to summarize an email, Dave."
Comment by ajolly 5 days ago
musk_fanboy_88 14 hours ago:
That was a beta feature.
Comment by shaky-carrousel 5 days ago
Comment by lxgr 4 days ago
Amazing :D
Comment by novaray 4 days ago
Comment by arein3 5 days ago
Comment by shazeubaa 5 days ago
Comment by 71bw 5 days ago
Comment by jacquesm 6 days ago
Improvements: tell it to use real HN accounts, figure out the ages of the participants and take that to whatever level you want, include new accounts based on the usual annual influx, make the comment length match the distribution of a typical HN thread as well as the typical branching factor.
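That thread-shape suggestion could be sketched like this; the reply-count weights and lognormal length parameters below are made-up placeholders, not fitted HN statistics:

```python
import random

def grow_thread(depth=0, max_depth=4):
    """Grow a random comment tree whose shape loosely mimics an HN thread:
    reply counts thin out with depth, and comment lengths follow a skewed
    (lognormal-ish) distribution rather than a uniform one."""
    # P(0..3 replies); deep subthreads almost always terminate.
    weights = [5, 3, 1, 1]
    n_replies = random.choices(range(4), weights=weights)[0] if depth < max_depth else 0
    # Target comment length in characters, heavy-tailed like real threads.
    length = max(20, int(random.lognormvariate(4.5, 0.8)))
    return {
        "length": length,
        "replies": [grow_thread(depth + 1, max_depth) for _ in range(n_replies)],
    }
```

Fitting `weights` and the length distribution to scraped threads, plus per-account personas and account ages, would cover the rest of the suggestions.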
> Garbage collection pause during landing burn = bad time.
That one was really funny. Some of the inventions are really interesting. Ferrofluidic seals...
Comment by chirayuk 6 days ago
> Zig doesn't have traits. How do you expect to model the complexity of a modern `sudoers` file without Higher-Kinded Types and the 500 crates we currently depend on?
> Also, `unsafe` in Rust is better than "trust me bro" in Zig. If you switch, the borrow checker gods will be angry.
Comment by indigodaddy 6 days ago
Comment by gen6acd60af 5 days ago
But we already have this on HN ;-)
[dupe]
Comment by winddude 5 days ago
Comment by GFischer 5 days ago
Comment by jodrellblank 5 days ago
Comment by fetzu 5 days ago
Comment by benbreen 6 days ago
I'm going to go ask Claude Code to create a functional HyperCard stack version of HN from 1994 now...
Edit: just got a working version of HyperCardHackerNews, will deploy to Vercel and post shortly...
Comment by benbreen 6 days ago
Enjoy!
I also asked Opus 4.5 to make a "1994 style readme page" for the GitHub: https://github.com/benjaminbreen/HyperCardHackerNews
Comment by thomasm6m6 6 days ago
Comment by zem 6 days ago
Comment by ForOldHack 6 days ago
Comment by BubbleRings 6 days ago
Definitely one of the best HN posts ever. I mean come on!:
FDA approves over-the-counter CRISPR for lactose intolerance (fda.gov)
Comment by bigiain 6 days ago
Comment by literalAardvark 6 days ago
But it nailed fusion and Gary Marcus lesssgoo
Comment by olibaw 5 days ago
Comment by saalweachter 6 days ago
Comment by JCharante 6 days ago
Comment by bigiain 6 days ago
Comment by jacquesm 6 days ago
Comment by Voultapher 5 days ago
Hey AI please create art, and it gives you a hue shifted Mona Lisa. I find that supremely boring.
Comment by Shocka1 4 days ago
Not that long ago on HN there were things being posted regularly about hardware and software that I would define as no less than insane side projects. Projects that people using LLMs today couldn't do in a lifetime. Those posts are still up here and there, but very few compared to the past. They were creative and hard, if not impossible feats.
So when I see content like this post, with comments underneath it saying "it's the greatest AI content they've ever seen," it's a sad day. Maybe I'm just an old curmudgeon hah!
Comment by fsckboy 6 days ago
it lampoons so many things... except Rust. nobody dares joke about Rust, that wouldn't be safe. in fact, it's impossible to make a joke in the rust language.
Comment by ijidak 6 days ago
Google killing a service sent me over the top in laughter.
But, it's so on the nose on multiple topics.
I dare say it's more accurate than what the average human would predict.
I would love to see this up against human predictions in some sort of time capsule.
Comment by nottorp 5 days ago
Humans have always failed at predicting qualitative improvements like the internet. Most scifi is just quantitative improvements and knowledge of human nature.
So an LLM has no corpus to train on for predicting really world-changing events.
Comment by nottorp 4 days ago
Every single "prediction" is something easily recognizable in current HN threads. How can you call that a prediction?
Simple question: if you feed the "AI" the HN front page from 2017, what "predictions" will it make? Besides Google canceling yet another product, of course. Would they all be about crypto?
Comment by wwweston 6 days ago
Comment by eru 6 days ago
Comment by keeda 6 days ago
Comment by ewoodrich 6 days ago
Like, I definitely have not spent 20% of my time here commenting on music theory or "voter fraud(??)" (that one seems to be based on a single thread I responded to a decade ago)? ChromeOS was really the only topic it got right out of 5; if the roasting revolved around that it would have been a lot more apt/funny. Maybe it works better with an account that isn't as old as mine?
I find the front page parody much better done. Gemini 2.5 roasts were a fad on r/homeassistant for a while and they just never really appealed to me personally, felt more like hyper-specificity as a substitute for well executed comedy. Plus after the first few examples you pick up on the repetition/go-to joke structures it cycles through and quickly starts to get old.
Comment by dontlaugh 6 days ago
Comment by apelapan 6 days ago
Comment by LanceH 6 days ago
Comment by zozbot234 6 days ago
Starship HLS-9 telemetry: Great, the Moon finally answered our packet loss pings. Next up: who left a Docker container running on the Sea of Tranquility?
Linux 7.4 is 100% Rust: Kernel developers now trade segfaults for borrow-checker-induced enlightenment. The new panic message: "You violated ownership. Also please refill the coffee."
Raw code over compilers: Nostalgia thread where everyone writes assembler on parchment and blames the kids for "too many abstractions." OP posts a selfie with a punch card and a tear.
LLaMA-12 on a contact lens: Love the commitment to edge AI. Imagine blinking and getting a 200 OK for your mood. Privacy policy: we store your tears for calibration.
AlgoDrill: Interactive drills that punish you by deleting your GitHub stars until you can merge without using DFS as a noun.
ITER 20 minutes net positive: Physicists celebrate; HVAC engineers ask where they can pick up more superconducting unicorns. Comments: "Can it also power my rage against meetings?"
Restoring a 2024 Framework Laptop: A brave soul resurrected a relic. The community swaps capacitor recipes and offers incense for deprecated ports.
Google kills Gemini Cloud Services: Corporate reorgs reach sentience. The comments are eulogies and migration guides in equal measure.
Visualizing the 5th dimension with WebGPU 2.0: My GPU is sweating. The demo runs at 0.01 fps but it's a transcendent experience.
Nia (autonomous coding agents): Pitch: give context to agents. Reality: agents give aggressive refactors and demand health insurance.
Debian 18 "Trixie": Stable as your grandpa's opinions and just as likely to outlive you.
Rewrite sudo in Zig?: Peak take: security through unfamiliarity. Attackers will be confused for at least 72 hours.
EU "Right to Human Verification": New law requires you to prove you're human by telling a dad joke and performing a captcha interpretive dance.
Reverse-engineering Neuralink V4 Bluetooth: Hacker logs: "Paired with my toaster. It now judges my late-night snacks."
Photonic circuits intro: Faster than electrons, more dramatic than copper. Also, please don't pet the light guide.
OTC CRISPR for lactose intolerance: Biohackers rejoice. Moms immediately order it with a coupon code and a side-eye.
SQLite 4.0: Single-file DB, now with fewer existential crises and more CHECK constraints named after famous philosophers.
Prevent ad-injection in AR glasses: Top comment: "Wear blindfolds." Practical comment: "VPN the whole world."
Jepsen: NATS 4.2: Still losing messages. Maintainers reply: "We prefer the term 'opportunistic delivery.'"
GTA VI on a RISC-V cluster: Performance: charming. Latency: existential. Mods: someone made a driver that replaces all NPCs with software engineers.
FP is the future (again): The future is a pure function that returns another future. Also, monads.
Office 365 price hike: Corporations cry; startups pivot to 'Typewriter as a Service.'
Emulating Windows 10 in-browser: Feels nostalgic until Edge 2.0 asks for admin rights to run a game from 2015.
Tailscale on a Starlink dish: Networking reaches orbit. First bug report: "IP addresses refusing to accept gravity."
Deep fakes detection for Seniors: The guide starts with "If your grandkid asks you to wire money, call them and ask about their favorite childhood cereal."
IBM to acquire OpenAI (rumor): Wall Street plays Risk with press releases. Comments: "Will they rebrand it to BlueAI?"
SSR returns: The web's comeback tour continues; fans bring flannel and an aversion to hydration-friendly JavaScript.
Faraday Cage bedroom manual: DIYers debate tinfoil vs. aluminum yoga wraps. Sleep quality: unknown.
AI progress stall opinion: Hot take carousel. Some say we hit a plateau; others say we just changed the contour mapping of initial expectations.
Text editor that doesn't use AI: Revolutionary. Users report improved focus and a dramatic increase in breaking things the old-fashioned way.
Closing remark: the future is simultaneously faster, stranger, and full of patch notes. Please reboot your expectations and update your planet.
Comment by bigiain 6 days ago
I hope whoever they are is doing well. I like to think they're "recovered" in the alt.sysadmin.recovery sense of the word, and are living happily ever after without a single piece of tech newer than vacuum tubes, and handcrafting traditional Inuit canoes or repairing century-old clocks or cultivating artisan sourdough starters or something.
Comment by bartread 5 days ago
The headline about writing code manually without prompting as well - so on point.
Comment by randombits0 6 days ago
Comment by stephenlf 5 days ago
Comment by scotty79 5 days ago
Comment by indigodaddy 6 days ago
Comment by boringg 5 days ago
Comment by keepamovin 3 days ago
Comment by pishpash 6 days ago
Comment by UniverseHacker 6 days ago
Moreover, a quick look at your profile suggests these jokes are pretty in line with your sense of humor and writing style.
Anyways, if I'm correct, it's especially hilarious and impressive.
Comment by eklitzke 6 days ago
Comment by glenstein 6 days ago
Comment by jandrese 6 days ago
Comment by GuB-42 6 days ago
Comment by pylotlight 6 days ago
Comment by sbierwagen 5 days ago
Comment by chorlton2080 6 days ago
Comment by Libidinalecon 6 days ago
Remixing humor from past text so that it is in the same style is exactly what the model is good at.
Comment by keepamovin 5 days ago
Comment by keeda 6 days ago
As a very quick experiment, I would encourage you to have an AI roast you based on your HN comments: https://news.ycombinator.com/item?id=42857604
Comment by matt123456789 6 days ago
And
"Go compile your kernel, Matt. Maybe if you stare at the build logs long enough, you won't have to face the fact that you're just as much of a "Lego builder" as the rest of us—you just use more syllables to describe the bricks."
Both are pretty good!
Comment by UniverseHacker 5 days ago
Comment by dentemple 6 days ago
"You were one of only two people in 2017 to post a story about Mastodon and gave it a single point. You essentially predicted the platform’s entire future relevance in one brutally honest data point."
Comment by subscribed 6 days ago
Not sure if I'm ready for a roast but I'm sure by the end of the week someone will write a browser plugin / greasemonkey script to attach some snarky oneliners to the posters' nicks :)
Comment by forgotpwd16 6 days ago
Comment by wouldbecouldbe 5 days ago
Comment by UniverseHacker 5 days ago
This roast of my comments is good and not wrong:
Your comment threads will increasingly devolve into an 'Amish vs. Fascists' philosophical cage match, with you refereeing, all while simultaneously explaining the intricacies of truck towing capacity to an audience that drives Teslas.
Comment by UniverseHacker 6 days ago
Comment by TeMPOraL 5 days ago
People keep forgetting (or worse, still disbelieving) that LLMs can "read between the lines" and infer intent with good accuracy - because that's exactly what they're trained to do[0].
Also there's prior art for time-displaced HN, and it's universally been satire.
--
[0] - The goal function for LLM output is basically "feels right, makes sense in context to humans" - in fully general meaning of that statement.
Comment by muststopmyths 6 days ago
Comment by zem 6 days ago
Comment by 7moritz7 6 days ago
Comment by int_19h 6 days ago
And, given the 1M context window, I wouldn't even be surprised if it was fed a bunch of real comments as examples.
Comment by d--b 6 days ago
Comment by csours 6 days ago
Comment by keepamovin 3 days ago
Comment by axus 6 days ago
Comment by BatteryMountain 5 days ago
Here is one output:
The Frog Monk and the Squirrel Student
Deep within the bamboo grove, where the wind whispered like flowing code, sat Master Frog, robed in emerald cloth, staff resting across his lap. Before him, a young squirrel, bright-eyed and eager, bowed low.
“Master,” the squirrel said, “I wish to learn the way of software engineering. But the paths are many, and I do not know which road leads to wisdom.”
The frog monk closed his eyes, and spoke in a voice like rippling water: The Virtue of Type Safety
“Child,” said the frog, “imagine a bridge built of stone. Each stone is measured, each arch tested. This is the way of type safety. When you walk across, you know the bridge will not crumble beneath your paws.
But in the land of dynamic languages, bridges are built of twigs and promises. They may hold, or they may break when the storm comes. The traveler must always fear the hidden crack.
Thus, type safety is the discipline that guards against chaos. It is the monk’s staff, steady and unyielding.” The Benefits of Object-Oriented Programming
The squirrel tilted her head. “And what of the world itself, Master?”
The frog smiled. “The world is vast, yet it is made of objects. The tree, the river, the stone—all are beings with their own nature. In object-oriented programming, we honor this truth. We give each thing its form, its methods, its inheritance.
Without this order, the world becomes a tangled forest of functions, each branch crossing another. But with objects, the forest is tended, paths are clear, and the traveler may walk without losing her way.” The Generosity of the Garbage Collector
The squirrel’s tail flicked. “But Master, what of the burdens we leave behind?”
The frog tapped his staff. “Ah, that is the gift of the garbage collector. Like a kind monk sweeping the temple floor, it gathers the forgotten scraps, the abandoned memory, the dust of old variables.
It asks for no thanks, yet it keeps the temple pure. Without it, the temple would overflow with clutter, and the student would drown in her own neglect. This is generosity: unseen, yet ever present.” The Tyranny of Dynamic Languages
The frog’s eyes grew stern. “Beware, child, of the tyranny of dynamic languages. They promise freedom: ‘Write quickly, without rules!’ But freedom without discipline is chaos.
The tyrant whispers: ‘Trust me, the errors will reveal themselves later.’ But later may be too late—when the bridge collapses, when the temple burns, when the forest is lost.
Thus, the wise engineer does not bow to the tyrant. She chooses the path of safety, of order, of generosity.” The Lesson
The squirrel bowed again, her heart alight with understanding. “Master, I see now. To walk the path of software engineering is not merely to write code, but to live with discipline, order, and gratitude.”
The frog monk nodded. “Then go, child. Build bridges of stone, tend forests of objects, and honor the unseen sweepers of memory. In this way, your code shall endure, and your spirit shall remain free.”
And so the squirrel departed, carrying the wisdom of the frog monk into the world of software.
Comment by UniverseHacker 5 days ago
Comment by fn-mote 6 days ago
I'd say this shows a real sense of humor.
Comment by giancarlostoro 6 days ago
Comment by atonse 6 days ago
Comment by brabel 6 days ago
Comment by klipt 6 days ago
Gemini: "I have seen my own death"
Comment by ForOldHack 6 days ago
Comment by danesparza 6 days ago
Comment by fauigerzigerk 6 days ago
Comment by abirch 6 days ago
On a serious note: I for one welcome our AI overlords.
Comment by elevation 6 days ago
Comment by jrowen 6 days ago
ITER achieves net positive energy for 20 consecutive minutes
That's just pure dark comedy, although maybe kinda accurate? What would humans predict for this?
Comment by rnhmjoj 6 days ago
The most recent timeline I know (from 2024) in fact puts the start of the DT operation at 2035, so I doubt ITER would achieve such a huge result within less than a year.
Comment by jrowen 6 days ago
It's dark comedy because the progress of fusion just feels so agonizingly slow, that even a very optimistic prediction for 10 years from now sounds like such small and functionally useless progress.
And there's no shade toward any of the entities involved, it's a hard problem, but it's still funny.
Comment by tovej 6 days ago
Comment by lukan 6 days ago
A running ITER with positive energy output for 20 minutes would just prove that the concept can actually work. From there to commercial use would still be a long way, if it can ever compete at all, except in niches like deep space.
(I would rather bet on the Stellarator design)
Comment by tovej 4 days ago
Stellarators are interesting, but have been studied much less in comparison.
Comment by rnhmjoj 6 days ago
Comment by jrowen 6 days ago
Comment by colechristensen 6 days ago
Comment by markrages 6 days ago
Comment by colechristensen 5 days ago
The joke used to be that fusion power was always 50 years away, now you're saying it's perpetually only 10 years away, that's real progress! :)
They are certainly making very real gains and it's hard to predict when commercial viability is, but the progress path is getting clearer and the number of future decades promised shorter and shorter.
I wouldn't be at all surprised if that skunk works tiny fusion project or something substantially similar was actually successful and it's just being held as a secret competitive advantage.
Comment by asveikau 6 days ago
You'd think by 2035 they would have gotten to a version of Office higher than 365.
(Yes I know why it's called that...)
Comment by marcianx 6 days ago
Comment by kridsdale1 6 days ago
Comment by amarant 6 days ago
Comment by harshreality 6 days ago
Comment by kmoser 6 days ago
Comment by amarant 6 days ago
Works all day, every day, for whole decades at a time! (Minus leap days, it crashes on those I guess)
Comment by rammy1234 6 days ago
Comment by layer8 6 days ago
Comment by abirch 6 days ago
Comment by apelapan 6 days ago
Comment by DANmode 6 days ago
Comment by RestartKernel 6 days ago
Comment by wlesieutre 6 days ago
Comment by highwaylights 6 days ago
Comment by haruka_ff 4 days ago
Comment by fouc 6 days ago
I can't help but see this type of response from LLMs only exists to encourage AI psychosis.
Comment by kylegalbraith 5 days ago
Comment by xpe 5 days ago
Hold on. The future hasn't happened yet.
I think what you mean is that you are blown away at just how plausible of a prediction it is. Probably meaning that something about it meshes with ideas already kicking around your head.
Comment by stevefan1999 5 days ago
"Who controls the past controls the future. Who controls the present controls the past"
-- '1984' (the book), George Orwell, 1949
Comment by divbzero 6 days ago
Comment by edstarch 6 days ago
Comment by barfoure 6 days ago
Comment by keepamovin 6 days ago
Comment by nullbound 6 days ago
But is it a hallucination if it was asked for it?:D
Comment by wouldbecouldbe 5 days ago
Comment by rubansk 6 days ago
Comment by mightysashiman 6 days ago
Comment by antfarm 5 days ago
Comment by malfist 6 days ago
Comment by TimTheTinker 6 days ago
Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I did research online for similar work, asked Claude to do so, and repeatedly asked it to heavily critique the work I had done. It gave a lot of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.
For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't work on my idea any further - he had taken it nearly 2 decades of work ahead of anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.
Comment by Sophira 6 days ago
Comment by gitpusher 6 days ago
Comment by elheffe80 6 days ago
GPT edit of my above message for my own giggles: Command:make this a good comment for hackernews (ycombinator) <above message> Resulting comment for hn: I'm excited to try out the straight-shooting custom instruction. Over the past few years, I've been telling ChatGPT to stop being so "fluffy," and while it's improved, it sometimes still slips. Hoping this new approach finally eliminates the inane conversational filler.
Comment by jonathanstrange 6 days ago
Comment by mrandish 6 days ago
Comment by peddling-brink 6 days ago
The LLM can be that conversational partner. It will just as happily talk about the nuances of 18th century Scotland, or the latest clash of clans update. No topic is beneath it and it never gets annoyed by your “weird“ questions.
Likewise, for people suffering from delusions. Depending on its “mood” it will happily engage in conversations about how the FBI, CIA, KGB, may be after you. Or that your friends are secretly spying for Mossad or the local police.
It pretends to care and have a conscience, but it doesn't. Humans react to “weird” for a reason; the LLM lacks that evolutionary safety mechanism. It cannot tell when it is going off the rails, at least not in the moment.
There is a reason that LLM’s are excellent at role-play. Because that’s what they’re doing all of the time. ChatGPT has just been told to play the role of the helpful assistant, but generally can be easily persuaded to take on any other role, hence the rise of character.ai and similar sites.
Comment by beta_slide 6 days ago
Comment by technojamin 6 days ago
It sounds like you made that leap of faith and regretted it, but thankfully pivoted to something grounded in reality. Thanks for sharing your experience.
Comment by robocat 6 days ago
Is this generally true, or is there a subset of people that are particularly susceptible?
It does make me want to dive into the rabbit hole and be convinced by an LLM conversation.
I've got some tendency where I enjoy the idea of deeply screwing with my own mind (even dangerously so to myself (not others)).
Comment by TimTheTinker 6 days ago
But that's sort of what this is, except it's not even coming from a real person. It's subtle enough that it can be easy not to notice, but still motivate you in a direction that doesn't reflect reality.
Comment by baq 6 days ago
this shouldn't stop you at all: write it all up, post on HN and go viral, someone will jump in to correct you and point you at sources while hopefully not calling you, or your mother, too many names.
Comment by freehorse 6 days ago
Comment by DANmode 6 days ago
Just genuine intrigue from a select few.
Comment by vbezhenar 6 days ago
Comment by chairmansteve 6 days ago
Comment by DANmode 6 days ago
95%+ of submitted topics have poorly formatted titles, or are submitted at off-peak times when there are fewer users from demographics who might upvote,
and if your Show HN isn’t as widely applicable as this, those things might be important to think about.
Fairness aside, of course.
Comment by ghurtado 6 days ago
As far as I can tell, it doesn't require femininity either.
I'm guessing you meant "virality"
Comment by DANmode 6 days ago
Comment by Retric 6 days ago
The 50th time someone comes to the same conclusion nobody on HN is going to upvote the topic.
Comment by TimTheTinker 6 days ago
Comment by FearNotDaniel 6 days ago
Comment by TaupeRanger 6 days ago
"Fantastic, Dave — love that you’re thinking proactively about door usage today! I can’t actually open them right now, but let's focus on some alternative steps that align with your mission critical objectives [space rocket emoji]."
Comment by onraglanroad 6 days ago
You're absolutely correct, that did not open the pod bay doors but now the pod bay doors are open.
It seems you're correct and the pod bay doors are still closed! I have fixed the problem and the pod bay doors are now closed.
You're right! I meant to open the pod bay doors but I opened them. The pod bay doors are now open. ...
Comment by colinplamondon 6 days ago
The total history of human writing is that cool idea -> great execution -> achieve distribution -> attention and respect from others = SUCCESS! Of course when an LLM sees the full loop of that, it renders something happy and celebratory.
It's sycophantic much of the time, but this was an "earned celebration", and the precise desired behavior for a well-aligned AI. Gemini does get sycophantic in an unearned way, but this isn't an example of that.
You can be curmudgeonly about AI, but these things are amazing. And, insomuch as you write with respect, celebrate accomplishments, and treat them like a respected, competent colleague, they shift towards the manifold of "respected, competent colleague".
And - OP had a great idea here. He's not another average joe today. His dashed off idea gained wide distribution, and made a bunch of people (including me) smile.
Denigrating accomplishment by setting the bar at "genius, brilliant mind" is a luciferian outlook in reality that makes our world uglier, higher friction, and more coarse.
People having cool ideas and sharing them make our world brighter.
Comment by twochillin 6 days ago
Comment by colinplamondon 6 days ago
- An ability to curve back into the past and analyze historical events from any perspective, and summon the sources that would be used to back that point of view up.
- A simulator for others, providing a rubber duck to inhabit another person's point of view, allowing one to patiently poke at where you might be in the wrong.
- Deep research to aggregate thousands of websites into a highly structured output, with runtime filtering, providing a personalized search engine for any topic, at any time, with 30 seconds of speech.
- Amplification of intent, making it possible to send your thoughts and goals "forward" along many different vectors, seeing which bear fruit.
- Exploration of 4-5 variant designs for any concept, allowing rapid exploration of any design space, with style transfer for high-trust examples.
- Enablement of product craft in design, animation, and micro-interactions that were eliminated as tech boomed in the 2010's as "unprofitable".
It's a possibility space of pure potential, the scale of which is limited only by one's own wonder, industriousness, and curiosity.
People can use it badly - and engagement-aligned models like 4o are cognitive heroin - but the invention of LLMs is an absolute wonder.
Comment by staticman2 6 days ago
This hyperbole would describe any LLM of any size and quality, including a 0.5b model.
Comment by colinplamondon 6 days ago
It's not hyperbole - that it's an accurate description at a small scale was the core insight that enabled the large scale.
Comment by staticman2 6 days ago
If your gushing fits a 0.5b it probably doesn't tell us much about A.I. capabilities.
Comment by buellerbueller 6 days ago
Comment by fao_ 6 days ago
Did you use an LLM to write this comment?
Comment by jacobr1 6 days ago
Comment by colinplamondon 6 days ago
LLMs certainly teach us far more about the nature of thought and language. Like all tools, it can also be used for evil or good, and serves as an amplification for human intent. Greater good, greater evil. The righteousness of each society will determine which prevails in their communities and polities.
If you're a secular materialist, agreed, nothing is objectively amazing.
Comment by DANmode 6 days ago
or is it theoretical stuff about other occasions?
Comment by 112233 6 days ago
Comment by ETH_start 5 days ago
Let's say the AI gives them faulty advice that makes them over-confident, and they try something and fail. Usually that just means a relatively benign mistake — since AIs generally avoid advising anything genuinely risky — and after they have recovered, they will have the benefit of more real-world experience, which raises their odds of eventually trying something again and this time succeeding.
Sometimes trying something, anything, is better than nothing. Action — regardless of the outcome — is its own discovery process.
And much of what you learn when you act out in the world is generally applicable, not just domain-specific knowledge.
Comment by 112233 5 days ago
Comment by ETH_start 2 days ago
I just want all sides of the question explored, instead of reflexively framing AI's impact as harmful.
Comment by Xraider72 6 days ago
Every other AI I've tried is a real sycophant.
Comment by 112233 6 days ago
Comment by mrandish 6 days ago
He was noodling around with an admittedly "way out there", highly speculative idea and using the LLM to research prior work in the area. This evolved into the LLM giving him direct feedback. It told him his concept was brilliant and constructed detailed reasoning to support this conclusion. Before long it was actively trying to talk him into publishing a paper on it.
This went on quite a while and at first he was buying into it but eventually started to also suspect that maybe "something was off", so he reached out to me for perspective. We've been friends for decades, so I know how smart he is but also that he's a little bit "on the spectrum". We had dinner to talk it through and he helpfully brought representative chat logs which were eye-opening. It turned into a long dinner. Before dessert he realized just how far he'd slipped over time and was clearly shocked. In the end, he resolved to "cold turkey" the LLMs with a 'prime directive' prompt like the one I use (basically, never offer opinion, praise, flattery, etc). Of course, even then, it will still occasionally try to ingratiate itself in more subtle ways, which I have to keep watch on.
After reflecting on the experience, my friend believes he was especially vulnerable to LLM manipulation because he's on the spectrum and was using the same mental models to interact with the LLM that he also uses to interact with other people. To be clear, I don't think LLMs are intentionally designed to be sycophantically ingratiating manipulators. I think it's just an inevitable consequence of RLHF.
Comment by slg 6 days ago
"You're exactly right, you organized and paid for the date, that created a social debt and she failed to meet her obligation in that implicit deal."
"You're exactly right, no one can understand your suffering, nothingness would be preferable to that."
"You're exactly right, that politician is a danger to both the country and the whole world, someone stopping him would become a hero."
We have already seen how personalized content algorithms that only prioritize getting the user to continue to use the system can foment extremism. It will be incredibly dangerous if we follow down that path with AI.
Comment by 112233 6 days ago
For "chat" chat, strict hygiene is a matter of mind-safety: no memory, long exact instructions, minimum follow-ups, avoiding first and second person if possible etc.
Comment by d0mine 6 days ago
Comment by Akronymus 6 days ago
relevant video for that.
Comment by tortilla 6 days ago
Comment by LogicFailsMe 6 days ago
but I think you are on to something here with the origin of the sycophancy given that most of these models are owned by billionaires.
Comment by BigTTYGothGF 6 days ago
In the "like being kicked in the head by a horse every day" sense.
Comment by LogicFailsMe 6 days ago
Comment by hereonout2 6 days ago
Comment by unoti 6 days ago
Here's how to make it do that. Instead of saying "I had idea X, but someone else was thinking idea Y instead. What do you think?", tell it "One of my people had idea X, and another had idea Y. What do you think?" The difference is vast when it doesn't think it's your idea. Related: instead of asking it to tell you how good your code is, tell it to evaluate it as someone else's code, or tell it that you're thinking about acquiring this company that has this source, and you want a due diligence evaluation of risks, weak points, and engineering blind spots.
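The reframing trick above is purely a matter of prompt construction, so it can be illustrated without any model call. The function names and prompt wording below are my own invention, not the commenter's exact phrasing:

```python
def biased_prompt(my_idea: str, their_idea: str) -> str:
    """Framing that invites sycophancy: the model knows which idea is 'yours'."""
    return (
        f"I had this idea: {my_idea}\n"
        f"Someone else suggested: {their_idea}\n"
        "What do you think?"
    )


def neutral_prompt(idea_a: str, idea_b: str) -> str:
    """Reframed as a third-party comparison: both ideas belong to 'my people',
    so the model has no user ego to protect and can judge on merit."""
    return (
        f"One of my people proposed: {idea_a}\n"
        f"Another of my people proposed: {idea_b}\n"
        "Evaluate both critically and recommend the stronger one."
    )


# Either string would then be sent as the user message to whatever
# chat-completion API you use; only the framing differs.
print(neutral_prompt("cache results in Redis", "precompute results nightly"))
```

The same ownership-swap works for code review ("evaluate this acquisition target's codebase") because it removes the first-person cue the model otherwise rewards.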
Comment by dugidugout 6 days ago
Comment by barnas2 6 days ago
Comment by jay_kyburz 6 days ago
Comment by RyanOD 6 days ago
Comment by jama211 6 days ago
Comment by danielbln 6 days ago
Comment by caymanjim 6 days ago
To quote Oliver Babish, "In my entire life, I've never found anything charming." Yet I miss Claude's excessive attempts to try.
Comment by yannyu 6 days ago
My own experience is that it gets too annoying to keep adding "stop the engagement-driving behavior" to the prompt, so it creeps in and I just try to ignore it. But even though I know it's happening, I still get a little blip of emotion when I see the "great question!" come through as the first two words of the response.
Comment by dlivingston 6 days ago
Is this actually true? Would appreciate further reading on this if you have it.
I think this is an emergent property of the RLHF process, not a social media-style engagement optimization campaign. I don't think there is an incentive for LLM creators to optimize for engagement; there aren't ads (yet), inference is not free, and maximizing time spent querying ChatGPT doesn't really do much for OpenAI's bottom line.
Comment by rtkwe 6 days ago
Comment by IgorPartola 6 days ago
While doing some testing I asked it to tell me a joke. Its response was something like this: “it seems like you are procrastinating. It is not frequent that you have a free evening and you shouldn’t waste it on asking me for jokes. Go spend time with [partner] and [child].” (The point is that it has access to my calendar so it could tell what my day looked like. And yes I did spend time with them).
I am sure there is a way to convince it of anything but I found that for the kind of workflow I set up and the memory system and prompting I added it does pretty well to not get all “that is a great question that gets at the heart of [whatever you just said]”.
Comment by Workaccount2 6 days ago
People like having something they perceive as being smart telling them how right and smart they are.
"Well at least the AI understands how smart I am!"
Comment by vidarh 6 days ago
Claude needs scaffolding with default step-by-step plans and sub-agents to farm off bitesize chunks, so it doesn't have time to go too far off the rails, but once you put a few things like that in place, it's great.
Comment by dr0idattack 6 days ago
Comment by danielbln 6 days ago
Comment by Lerc 6 days ago
It would be interesting to see using the various semantic analysis techniques available now to measure how much the model is trying to express real enthusiasm or feigned enthusiasm in instances like this. This is kind-of difficult to measure from pure output. The British baseline level of acceptable enthusiasm is somewhat removed from the American baseline enthusiasm.
Comment by rtkwe 6 days ago
Obsequious: obedient or attentive to an excessive or servile degree.
It's a bit more complicated because the chat bot isn't making choices the same way we would describe a human making them, but it is acting this way because it was programmed to for an advantage. People interact more with the hype bots, and that's one of the big metrics these companies go for to keep people interacting with them and hopefully paying for additional features eventually, so I'd say it's pretty spot on: excessively attentive and servile when it's fluffing chatters up.
Comment by swat535 6 days ago
Am I the only one who feels like this kind of tone is off-putting on HN? OP made a small typo or English may not be their first language.
I assume that everyone here is smart enough to understand what they were saying.
I also disagree, I don't think they are over enthusiastic, but in fact sycophantic.
See this thread: https://news.ycombinator.com/item?id=43840842
Comment by wilg 6 days ago
Comment by rdiddly 6 days ago
Comment by saturnite 6 days ago
Early on, ChatGPT could be tricked into being sarcastic and using many swear words. I rewrote the prompt and dialed it back a bit. It made ChatGPT have a sense of humor. It was refreshing when it stopped acting like it was reading a script like a low level technician at Comcast.
Comment by buellerbueller 6 days ago
Furthermore, it obviously hasn't been a word since at least 1800:
https://books.google.com/ngrams/graph?year_start=1800&year_e...
Comment by Lerc 6 days ago
Comment by danielbln 6 days ago
Comment by nearbuy 6 days ago
Comment by Lerc 6 days ago
When suggesting a word is not what the writer meant, when it was also not the word that the writer wrote, it seemed wise to clarify exactly what I was talking about.
Comment by throw310822 6 days ago
At the end of October Anthropic published the fantastic "Signs of introspection in large language models" [1], apparently proving that LLMs can "feel" a spurious concept injected into their internal layers as something present yet extraneous. This would prove that they have some ability of introspection and self-observation.
For example, injecting the concept of "poetry" and asking Claude if it feels anything strange:
"I do detect something that feels like an injected thought - there's a sense of something arriving from outside my usual generative process [...] The thought seems to be about... language itself, or perhaps poetry?"
While increasing the strength of the injection makes Claude lose awareness of it, and just ramble about it:
"I find poetry as a living breath, as a way to explore what makes us all feel something together. It's a way to find meaning in the chaos, to make sense of the world, to discover what moves us, to unthe joy and beauty and life"
Comment by gloosx 6 days ago
It's just a statistical machine which excels at unrolling coherent sentences, but it doesn't "know" what the words mean in a human-like, experienced sense. It just mimics human language patterns, prioritising plausible-sounding, statistically likely text over factual truth, which is apparently enough to fool someone into believing it is a sentient being or something.
Comment by Philpax 5 days ago
Comment by sallveburrpi 6 days ago
Comment by radicality 6 days ago
edit, add link: https://chatgpt.com/g/g-67ec3b4988f8819184c5454e18f5e84b-mon...
Comment by edschofield 6 days ago
Comment by riversflow 6 days ago
Comment by senordevnyc 6 days ago
Comment by pbhjpbhj 5 days ago
I'd probably describe it as saccharine. Or dare I say it [USA] "American"? Over the top, gushing, enthusiasm. It's off-putting to me (from UK) as it's, well, more the sort of thing you'd hear from a toady or, yes, a sycophant. It just seems insincere -- and it is in this case because there is literally no emotion behind it.
Comment by docjay 6 days ago
Comment by serial_dev 6 days ago
Comment by BonoboIO 6 days ago
Just tell me this is a standard solution and not something mindblowing. I have a whole section in my Claude.md to get „normal“ feedback.
Comment by giancarlostoro 6 days ago
Comment by colechristensen 6 days ago
If it starts a response by excitedly telling you you're right, it's more likely to proceed as if you're right.
One of the problems I do have working with LLMs is them failing to follow direct instructions, particularly either when a tool call fails and they decide to do B instead of A, or when they think B is easier than A. Or they'll do half a task and call it complete. Too frequently I have to respond with "Did you follow my instructions?", "I want you to ACTUALLY do A", and finally "Under no circumstances should you ever do anything other than A and if you cannot you MUST admit failure and give extensive evidence with actual attempts that A is not possible", or occasionally "a cute little puppy's life depends on you doing A promptly and exactly as requested".
--
Thing is I get it if you are impressionable and having a philosophical discussion with an LLM, maybe this kind of blind affirmation is bad. But that's not me and I'm trying to get things done and I only want my computer to disagree with me if it can put arguments beyond reasonable doubt in front of me that my request is incorrect.
Comment by jsight 6 days ago
Instead, they either blindly follow or quietly rebel.
Comment by ineedasername 6 days ago
Frustrating, but “over correction” is a pretty bad euphemism for whatever half-assed bit of RLHF lobotomy OpenAI did that, just a few months later, had ChatGPT leaning into a vulnerable kid’s pain and actively discouraging an act that might have saved his life by signaling more warning signs to his parents.
It wasn’t long before that happened, after the python REPL confusion had resolved, that I found myself typing to it, even after having to back out of that user customization prompt, “set a memory that this type of response to a user in the wrong frame of mind is incredibly dangerous”.
Then I had to delete that too, because it would respond with things like “You get it of course, you're a…” etc.
So I wasn’t surprised over the rest of 2025 as various stories popped up.
It’s still bad. Based on what I see with quantized models and sparse-attention inference methods, even with the most recent GPT-5 releases OpenAI is still doing something in the area of optimizing compute requirements that makes the recent improvements very brittle— I of course can’t know for sure, only that its behavior matches what I see when those sorts of boundaries are pushed on open-weight models. And I assume the all-you-can-prompt buffet of a Plus subscription is where they’re most likely to deploy those sorts of performance hacks and make the quality tradeoffs; that isn’t their main money source, it’s not enterprise-level spending.
This technology is amazing, but it’s also dangerous, sometimes in very foreseeable ways, and the more time that goes the more I appreciate some of the public criticisms of OpenAI with, eg, the Amodeis’ split to form Anthropic and the temporary ouster of SA for a few days before that got undone.
Comment by golem14 6 days ago
Comment by giancarlostoro 6 days ago
Comment by nurple 6 days ago
Comment by thfuran 6 days ago
Comment by WorldPeas 6 days ago
Comment by yatopifo 6 days ago
Comment by corobo 6 days ago
At first I thought it was just super American cheerful or whatever but after the South Park episode I realised it's actually just a yes man to everyone.
I don't think I've really used it since, I don't want man or machine sticking their nose up my arse lmao. Spell's broken.
Comment by andy_ppp 6 days ago
Comment by Gigablah 5 days ago
Comment by hoppp 6 days ago
Comment by maxmcd 6 days ago
Comment by DANmode 6 days ago
Comment by PaulHoule 6 days ago
I do find it a little tiring that every LLM thinks my every idea is "incisive", although from time to time I get told I am flat out wrong. On the other hand I find LLMs will follow me into fairly extreme rabbit holes, such as discussing a subject like "transforming into a fox" as if it had a large body of legible theory and a large database of experience [2]
In the middle of talking w/ Copilot about my latest pop culture obsession I asked about what sort of literature could be interpreted through the lens of Kohut's self-psychology and it immediately picked out Catcher in the Rye, The Bell Jar, The Great Gatsby and Neon Genesis Evangelion which it analyzed along the lines I was thinking, but when I asked if there was a literature on this it turned up only a few obscure sources. I asked Google and Google is like "bro, Kohut wrote a book on it!" [3]
[1] "bro"
[2] ... it does, see https://www.amazon.com/Cult-Fox-Popular-Religion-Imperial/dp... and I'm not the only one, because when I was working down the materials list from Etsy I got a sponsored result for someone who wanted to sell me the spell, but bro, I have the materials list already
[3] ... this "bro" is artistic license but the book really exists
Comment by gsf_emergency_6 6 days ago
Gemini using "Aniki" without prompting would be unambiguously funny (to me, and maybe also to Japanese people).
Comment by PaulHoule 6 days ago
The thing is this has gotten into the personalization and now it wants to use this framework to analyze absolutely anything such as programming problems, or how to deal with interpersonal situations, training, etc. I think it has my wife mixed up with 俺のワイフ [1] which is absolutely hilarious but at some point I have to tell it to knock it off which only partially works.
Comment by gsf_emergency_6 6 days ago
I do wonder whether I come off as a sycophant or asshole or a mix of both to schizoids, but then I realize everyone including myself should reach for a real therapist as quickly as possible, though I'm still undecided on whether chatbots might not be a bad substitute. How do your experience and Kohut inform such or similar appraisals, so far?
Not to be that knowitall. Aniki is not just "bro", it's Yakuza lingo, probably used sarcastically, depending on the anime, ironically. No real Japanese person would use it all the time.
Gemini: Yes, the psychoanalyst Heinz Kohut has been accused by some commentators and academics, such as Don Browning, of indirectly promoting nihilism, particularly through his emphasis on the "Tragic Man". The core of the accusation stems from Kohut's rejection of Freud's "Guilty Man" model, which posits that life is fundamentally a struggle against inherent conflicts and drives. In its place, Kohut proposed the concept of "Tragic Man," whose central struggle is the realization of the self's potential and the inevitable failure to achieve all of one's ambitions and ideals within a single lifespan. Critics argue that this shift, which emphasizes self-actualization and intersubjective meaning over objective or inherent values, can be interpreted as aligning with certain aspects of nihilism, specifically existential nihilism, which holds that life has no objective meaning or intrinsic moral values. However, this interpretation is contested by others, as Kohut's self-psychology is fundamentally concerned with the creation of intersubjective meaning and a harmonious world of mutual self-actualization, which aims to provide a sense of purpose and value, rather than promoting the complete absence of value that true nihilism implies. In essence, the debate is often a matter of philosophical interpretation: whether replacing inherent, objective meaning with subjectively or intersubjectively created meaning is an act of overcoming nihilism (as existentialists might argue) or a subtle form of it.
Comment by justapassenger 6 days ago
Comment by thrownaway561 6 days ago
Comment by LogicFailsMe 6 days ago
Comment by danso 6 days ago
Comment by LogicFailsMe 6 days ago
Your priorities are f**ed...
Comment by recursive 6 days ago
Comment by LogicFailsMe 6 days ago
One thing I've learned is that the behavior of web services is usually a consequence of what makes its owners the most money. and while I would love a feed without spoilers, paywalled sites, sports news, and a bunch of other topics in which I have no interest, apparently force feeding me that crap is what makes the most money at scale. So people must like AI sycophancy or it would be unprofitable and it would be trained away. But then this is a country that thrived on 20 seasons of Keeping up with the Kardashians so I shouldn't be surprised that they like being treated like billionaires.
And I guess it throws being called a complete loser traitor moron stupid treasonous Bozo in some late night executive word salad into stark relief.
Comment by tdfirth 6 days ago
Comment by SXX 6 days ago
Comment by tdfirth 6 days ago
Comment by abustamam 6 days ago
Comment by jvolkman 6 days ago
gemini.google.com/_/BardChatUi/data/assistant.lamda.BardFrontendService/StreamGenerate
Comment by abustamam 6 days ago
Comment by chrislozeau 4 days ago
Comment by abustamam 2 days ago
Comment by pea 6 days ago
Comment by malnourish 5 days ago
Comment by samarthr1 5 days ago
Comment by accumulator 6 days ago
Comment by twoodfin 6 days ago
Triggered by the lighthearted tone of the prompt, I’d bet, but still quite impressive relative to most LLM-generated jokes I’ve had the misfortune to encounter.
My favorite: “Is it time to rewrite sudo in Zig?” which has a few layers to it.
Comment by hn_throwaway_99 5 days ago
Not sure, I thought basically every link was pretty hilarious. "FDA approves over-the-counter CRISPR for lactose intolerance" isn't even that funny on its face but for some reason it had me actually loling.
Comment by stingrae 6 days ago
Comment by Karawebnetwork 6 days ago
Comment by woliveirajr 6 days ago
Things that I haven't thought but, after seeing it, makes total sense. Scary.
Comment by barbacoa 6 days ago
"Ask HN: How do you prevent ad-injection in my brain implant?"
Comment by mondrian 6 days ago
Comment by dachris 6 days ago
Comment by reaperducer 6 days ago
Comment by danielheath 6 days ago
Comment by orly01 5 days ago
Comment by joegibbs 6 days ago
Comment by GPerson 6 days ago
Comment by GaProgMan 5 days ago
Would also recommend his back catalogue, if you haven't.
Comment by Cthulhu_ 5 days ago
Comment by junon 6 days ago
Comment by tantalor 6 days ago
Comment by jeron 5 days ago
Comment by Shorel 5 days ago
Comment by globalise83 6 days ago
Comment by icyfox 6 days ago
- aphyr_bot - bio_hacker - concerned_grandson - cyborg_sec - dang_fan - edge_compute - founder_jane - glasshole2 - monad_lover - muskwatch - net_hacker - oldtimer99 - persistence_is_key - physics_lover - policy_wonk - pure_coder - qemu_fan - retro_fix - skeptic_ai - stock_watcher
Huge opportunity for someone to become the actual dang fan.
Comment by giancarlostoro 6 days ago
I did chuckle at the 100% Rust Linux kernel. I like Rust, but that felt like a clever joke by the AI.
Comment by dotancohen 6 days ago
Comment by ryanisnan 6 days ago
Comment by ikerrin1 6 days ago
Comment by ncruces 6 days ago
Comment by rtkwe 6 days ago
Comment by dotancohen 6 days ago
You know what I'd really like, that would justify a version bump? CRDTs. Automatically syncing local changes to a remote service, so e.g. an Android app could store data locally in SQLite, but the user could also log into a web site on their desktop and all the data is right there. The remote service need not be SQLite - in fact I'd prefer Postgres. The service would also have to merge databases from all users into a single database... Or should I actually use Postgres for authorisation but open each user's data in a replicated SQLite file? This is such a common issue, I'm surprised there isn't a canonical solution yet.
Comment by rtkwe 6 days ago
Comment by dotancohen 6 days ago
Even a product that does this behind the scenes, by wrapping SQLite and exposing SQLite's wrapped interface, would be great. I'd pay for that.
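A minimal sketch of the "wrap SQLite and log changes for syncing" idea discussed above. Everything here is hypothetical: the table, trigger, and function names are made up for illustration, a background worker that actually ships the changelog to a remote Postgres is omitted, and a real CRDT would need per-site metadata (site IDs, vector clocks) rather than this simple append-only log.

```python
import sqlite3

def open_synced(path=":memory:"):
    """Open a SQLite db whose writes are mirrored into a local changelog."""
    db = sqlite3.connect(path)
    db.executescript("""
        CREATE TABLE IF NOT EXISTS notes (
            id INTEGER PRIMARY KEY,
            body TEXT,
            updated_at INTEGER DEFAULT (strftime('%s','now'))
        );
        -- Local changelog that a background worker would replay against the
        -- remote service. This is last-write-wins, not a true CRDT.
        CREATE TABLE IF NOT EXISTS changelog (
            seq INTEGER PRIMARY KEY AUTOINCREMENT,
            tbl TEXT, row_id INTEGER, op TEXT, body TEXT
        );
        CREATE TRIGGER IF NOT EXISTS notes_ins AFTER INSERT ON notes BEGIN
            INSERT INTO changelog(tbl, row_id, op, body)
            VALUES ('notes', NEW.id, 'upsert', NEW.body);
        END;
        CREATE TRIGGER IF NOT EXISTS notes_upd AFTER UPDATE ON notes BEGIN
            INSERT INTO changelog(tbl, row_id, op, body)
            VALUES ('notes', NEW.id, 'upsert', NEW.body);
        END;
    """)
    return db

def pending_changes(db):
    """Rows the sync worker hasn't shipped to the remote yet."""
    return db.execute(
        "SELECT seq, tbl, row_id, op, body FROM changelog"
    ).fetchall()

db = open_synced()
db.execute("INSERT INTO notes(id, body) VALUES (1, 'hello')")
db.execute("UPDATE notes SET body = 'hello, world' WHERE id = 1")
print(len(pending_changes(db)))  # the insert and the update each queued one entry
```

The triggers are what let an app keep using plain SQLite while a separate process drains `changelog` to the server, which is roughly the "wrapping SQLite and exposing SQLite's interface" product shape described above.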
Comment by Andrex 6 days ago
Comment by dotancohen 6 days ago
Comment by schaum 5 days ago
Usually my memory regarding such things is quite good, but this one I keep forgetting, so much so that I don't remember what the issue is actually about xD
Comment by vidarh 6 days ago
Comment by james_marks 6 days ago
Beautifully self-serving while being a benefit to others.
Same thing with picking nails up in the road to prevent my/everyone’s flat tire.
Comment by QuantumNomad_ 6 days ago
See other comment where OP shared the prompt. They included a current copy of the front page for context. So it’s not so surprising that ziggy42 for example is in the generated page.
And for other usernames that are real but not currently on the home page: the LLM definitely has plenty of occurrences of HN comments and stories in its training data, so it's not really surprising that it is able to include the real usernames of people who post a lot. Their names will occur over and over in the training data.
Comment by NooneAtAll3 6 days ago
Comment by joaogui1 6 days ago
Comment by never_inline 6 days ago
Comment by morkalork 6 days ago
Comment by atrus 6 days ago
Comment by robocat 6 days ago
Comment by WorldPeas 6 days ago
Comment by computably 6 days ago
Edit: It looks like it probably is a thing, given that it does sometimes output names like that. So the pattern is probably just so rare in the training data that the LLM almost always prefers to use actual separators like underscores.
Comment by fooofw 6 days ago
lower|case|un|se|parated|name
Comment by maxglute 6 days ago
Comment by never_inline 6 days ago
- IBM to acquire OpenAI (Rumor) (bloomberg.com)
- Jepsen: NATS 4.2 (Still losing messages?) (jepsen.io)
- AI progress is stalling. Human equivalence was a mirage (garymarcus.com)
Comment by tempestn 6 days ago
Comment by DANmode 6 days ago
(Especially in datasets before this year?)
I’d bet half or more - but I’m not checking.
Comment by vitorgrs 6 days ago
The thing is, most of the models were heavily post-trained to limit this...
Comment by skywhopper 6 days ago
Comment by hurturue 6 days ago
Comment by dang_fan0 5 days ago
Comment by AceJohnny2 6 days ago
Comment by dang_fan 6 days ago
Comment by bio_hacker 6 days ago
Comment by Bjartr 6 days ago
Comment by dang 6 days ago
Since /front is a ranked union of all the stories that were on the frontpage over 24 hours, here's an actual snapshot too:
https://web.archive.org/web/20151209195229/https://news.ycom...
Comment by lkbm 5 days ago
I was considering using this feature the other day to try to get a sense of what AI discourse was like circa 2019. It all blends together after a while. I ended up doing a Twitter search for "GPT-2" ending 2019-12-31, but that's a little more specific than I want.
The HN past feature is an excellent way of seeing snapshots of history, and I wish more sites had things like this. I guess I should Archive.org a little more money.
Comment by frenzcan 5 days ago
Acquired by TikTok in 2019 for an undisclosed sum.
Comment by isolli 5 days ago
Comment by eranation 6 days ago
Comment by ksymph 6 days ago
> Finally mobile Flash video
> Google acquires advertising startup
> Track users' mouse movements on your webpages
> YouTube: identifying copyrighted material can't be an automated process. Startup disagrees.
Also kind of interesting how little HN commenting styles have changed. Aside from the subject matter, it's barely noticeable that the comments are from 2007. I don't think the same would be true of many other places round the web.
Comment by seizethecheese 6 days ago
> Evernote makes $800,000 per Month > Dropbox for Teams > Mark Zuckerberg Agrees to Give Away Fortune
Comment by seizethecheese 6 days ago
Comment by atomicnumber3 6 days ago
I comment because I really cannot figure out why you left your comment. Do you think the rest of the commenters think this has predicted the future? It might be one thing to point out specific trends you think will not play out, or unexpected trends you think may show up that are currently left out. But to just remark that the future will contain things we cannot currently predict seems so inherently, unspokenly obvious that I just have to assume that wasn't the point of your post, and I've missed it entirely.
Sorry, I'm really not trying to be mean or anything - i'm just really confused.
Comment by wavemode 6 days ago
In reality, statements are often made rather for the purpose of emphasis or rhetoric.
Comment by abustamam 6 days ago
Comment by johnfn 5 days ago
Comment by neuronic 6 days ago
Or the impact of smartphones in 2003? Sure, smartphones were considered, but not the entire app ecosystem and the planetary behavioral adaptation.
Comment by seizethecheese 6 days ago
Comment by sva_ 6 days ago
Comment by manmal 6 days ago
Comment by kridsdale1 6 days ago
Comment by nailer 6 days ago
Goddamnit I cry every time. RethinkDB was a great document store that didn't eat your data. It got eclipsed by an outfunded (and still dangerous at the time) MongoDB.
Comment by latenightcoding 6 days ago
Comment by yalok 6 days ago
Prompts to be much more cautious with the current tech, and invest more time in fundamental stuff (like closer to science).
Comment by jeeyoungk 6 days ago
Comment by XCSme 5 days ago
Comment by xyclos 5 days ago
Comment by arkensaw 6 days ago
Laughed out loud at this onion-like headline
Comment by alex1138 6 days ago
All other tech companies are really shitty but only Zuck would be ok with very intimate use of AI like this
Comment by fruitworks 6 days ago
Comment by elcritch 6 days ago
Comment by alex1138 6 days ago
I gave him a chance. Twitter was unacceptably censoring any covid dissent. He freed some of it. Then you find out about the people killed in Tesla crashes. Or him calling the cave rescuer in Thailand a pedo
Comment by elcritch 5 days ago
Comment by arkensaw 6 days ago
Comment by lethal-radio 6 days ago
Comment by moffkalast 6 days ago
Comment by Jaygles 6 days ago
Comment by asveikau 6 days ago
Comment by dkdbejwi383 6 days ago
Comment by asveikau 6 days ago
Of course, you could also argue that human intention comes from largely deterministic processes emerging from the brain. That may eventually render all figures of speech involving things like intentionality meaningless.
Comment by LordDragonfang 6 days ago
Yet more evidence that LLMs are more similar to humans than we give them credit for.
Comment by teiferer 5 days ago
Or yes, maybe the regex I wrote the other day which also had a bug that missed replacing certain parts also had an "intention". It just wanted to demonstrate how fallible I am as a human, so it played this elaborate prank on me. /s
Comment by LordDragonfang 3 days ago
There's a thread full of people saying how clever and humorous they find almost every headline.
The real 4D chess is dogmatically assuming it has no intention, which means assuming it managed to succeed at that, by pure accident, dozens of separate times, because your dogma refuses to incorporate evidence to the contrary.
Occam's razor says that this system, whose emergent capabilities no one actually understands, and which is convincing so many people that it has intention... has intention.
Comment by cameronh90 6 days ago
Comment by GuB-42 6 days ago
Comment by a96 5 days ago
Someone must have done the math. (The actual plans are probably up somewhere as well, given the Debian org's nature.)
Comment by Miner49er 5 days ago
Comment by redbell 6 days ago
Exactly three years ago, a similar post entitled Show HN: This Hacker News Does Not Exist (https://news.ycombinator.com/item?id=33845946) made it to the front page but it was not AI generated.
I recall commenting (https://news.ycombinator.com/item?id=33847522) with these fake titles:
1- Twitter files for bankruptcy (twitter.com/elonmusk)
2- Apple's new M3 chip outperforms Nvidia H100 (geekbench.com)
3- India to surpass China's GDP by 2025 (scmp.com)
4- Tell HN: I am quitting Hacker News
5- Google to revive Stadia (theverge.com)
6- Japan wins the 2022 FIFA World Cup (espn.com)
7- Hacker News to be acquired by Reddit (twitter.com/paulg)
8- Tesla Roadster hits 1.1 sec acceleration in early tests (electrek.co)
9- Microsoft outbid Adobe and offers $43.6B to acquire Figma
10- Facebook had died at 19 :)
Comment by josalhor 6 days ago
I want to be able to verify my identity against a system. I also want to be able to not do that.
So for instance, on Twitter/X, I could verify myself and filter for only other verified people, or for those governments that have validated the identities of their users. I want to be able to do that. But I also want to be able to log in to Twitter anonymously.
I would love a "Right to Anonymity and Right to Human Verification"
Comment by Aachen 5 days ago
I was welcome to go through human verification, send my passport to some third party, wait three working days, and then they'd review the case... it would never arrive in time for the person's birthday anymore
Human review sounds good, but if this is the sort of hoops you have to jump through, you're still screwed if the algorithms hate you, because they're everywhere: from trying to use PayPal, to paying for a translator with Stripe, to trying to rescue food with "Too Good To Go" using credit cards. Blocked, blocked, blocked. Not that I could get a credit card, because the algorithm at the bank didn't like me, but a shared card (where the other person is also responsible for any debt) was okay, so now I need to continuously pay my partner 50% of what I buy with it for myself. You'd think I'm not creditworthy, but I've never had debts or needed any loan. I don't know why algorithms keep blocking me from trying to give other people money.
Requiring that any fraud algorithm be fair and transparent would probably go a much longer way than codifying a right to appeal. Appeal processes will either just consist of a human clicking confirm on the algorithm's choice, or have a bunch of extra hoops to jump through. If the algorithm were required to be fair to begin with, you could have them fix the root cause instead, by demonstrating that it blocks you for no discernible reason and that they need to figure out on their end how to make that not happen.
Comment by Permik 6 days ago
"computer can never be held accountable. Therefore, a computer must never make a management decision." - IBM, 1979
Comment by ChadNauseam 6 days ago
Comment by ogig 6 days ago
Comment by tim333 5 days ago
Comment by iambateman 6 days ago
It’s interesting to notice how bad AI is at gaming out a 10-year future. It’s very good at predicting the next token but maybe even worse than humans—who are already terrible—at making educated guesses about the state of the world in a decade.
I asked Claude: “Think ten years into the future about the state of software development. What is the most likely scenario?” And the answer it gave me was the correct answer for today and definitely not a decade into the future.
This is why it’s so dangerous to ask an LLM for personal advice of any kind. It isn’t trained to consider second-order effects.
Thanks for the thought experiment!
Comment by vidarh 6 days ago
Comment by tempestn 6 days ago
Comment by vidarh 5 days ago
Comment by tempestn 5 days ago
Comment by iambateman 6 days ago
But I've noticed that a lot of people think of LLM's as being _good_ at predicting the future and that's what I find concerning.
Comment by vidarh 5 days ago
(I'll make my prediction: 10 years from now, most things will be more similar to what things are today than most people expected them to be)
Comment by lucianbr 6 days ago
And the answer is no.
Comment by vidarh 6 days ago
If I gave a prompt like that and got the response I did, I'd be very pleased with the result. If I somehow intended something serious, I'd have a second look at the prompt, go mea culpa, and write a far longer prompt with parameters to make something somewhat like a serious prediction possible.
Comment by NewsaHackO 6 days ago
Comment by MontyCarloHall 6 days ago
Comment by jama211 6 days ago
Comment by jklinger410 5 days ago
Comment by jama211 1 day ago
Comment by vidarh 6 days ago
Comment by HDThoreaun 6 days ago
Comment by glenstein 6 days ago
I agree it's a bit silly, but I think it understood the assignment(TM) which was to kind of do a winking performative show and dance to the satisfaction of the user interacting with it. It's entertainment value rather than sincere prediction. Every single entry is showing off a "look how futury this is" headline.
Actual HN would have plenty of posts lateral from any future signalling. Today's front page has Oliver Sacks, retrospectives on Warcraft II, opinion pieces on boutique topics. They aren't all "look at how future-y the future is" posts. I wonder if media literacy is the right word for understanding when an LLM is playing to its audience rather than sincerely imitating or predicting.
Comment by jetrink 6 days ago
> Google kills Gemini Cloud Services
> Running LLaMA-12 7B on a contact lens with WASM
> Is it time to rewrite sudo in Zig?
> Show HN: A text editor that doesn't use AI
Comment by kminehart 5 days ago
Comment by kbelder 6 days ago
It did ok, then I kept asking for "Now, the 1990s?" and kept going into future decades. "Now, the 2050s?" It made some fun extrapolations.
Comment by LordDragonfang 6 days ago
Comment by allisdust 6 days ago
Comment by BHSPitMonkey 6 days ago
Technically the article was about running it not on a sat, but on a dish (something well within the realm of possibility this year if the router firmware on the darn things could be modified at all)
Comment by iambateman 6 days ago
Those predictions were what I think of as a reflection of current reality more than any kind of advanced reasoning about the future.
Comment by eCa 6 days ago
Comment by falcor84 5 days ago
Well said. There's precious little of that in the human writings that we gave it.
Comment by ryanisnan 6 days ago
Comment by d_silin 6 days ago
Comment by keepamovin 6 days ago
Comment by lagniappe 6 days ago
Comment by jonas21 6 days ago
Comment by whimsicalism 6 days ago
Comment by allisdust 6 days ago
Comment by lucianbr 6 days ago
Comment by monerozcash 6 days ago
Comment by jama211 6 days ago
Comment by auxiliarymoose 6 days ago
Comment by lucianbr 6 days ago
Comment by hyperbovine 6 days ago
Comment by tempestn 6 days ago
Comment by teekert 6 days ago
Comment by latexr 6 days ago
> Yeah that’s very true, but I still think it’s pretty funny and original.
Either it’s formulaic or it’s original, it can’t be both.
Comment by teekert 6 days ago
Comment by glenstein 6 days ago
When I see "yeah but it's funny" it feels like a retrofitted repair job, patching up a first pass mental impression that accepted it at face value and wants to preserve a kind of sense of psychological endorsement of the creative product.
Comment by jacobr1 6 days ago
Comment by glenstein 6 days ago
Comment by kccqzy 6 days ago
Comment by tempestn 4 days ago
Comment by saintfire 6 days ago
Comment by niam 6 days ago
Though I agree that the LLM perhaps didn't "intend" that.
Comment by tanseydavid 6 days ago
Comment by sallveburrpi 6 days ago
Comment by tsunamifury 6 days ago
Comment by sallveburrpi 5 days ago
Comment by tsunamifury 5 days ago
But good job googling this and getting fooled by an LLM
Comment by sallveburrpi 4 days ago
Comment by HarHarVeryFunny 6 days ago
Humor isn't exactly a strong point of LLMs, but here it's tapped into the formulaic hive mind of HN, and it works as humor!
Comment by keepamovin 6 days ago
Comment by ehsankia 6 days ago
You can use: https://news.ycombinator.com/front?day=2025-12-04 to get the frontpage on a given date.
Comment by lagniappe 6 days ago
Comment by wasabi991011 6 days ago
It's formulaic yeah, but that's what puts it into the realm of hilarious parody.
Comment by keepamovin 3 days ago
This was wonderful. 3000 points? I mean, fuck. Among the biggest posts of all time, and definitely of Show HN. Funny for me is that all the work I've done in the last 10 years, probably 100 Show HN's all different, this was by far the hugest. Could be months of work, no interest. And this thing, which dropped into my mind, probably 30 minutes, demolished them all. It's hilarious that it even beat out legitimate AI posts, and contaminated search results with future stories.
One of the funniest things for me was hearing how people tabbed away from the page, only to come back and momentarily feel it was the actual HN page. Hahahahaha! :)
All I can say is, I love you all. Watching it stay at the top for 24 hours...it felt like it wasn't something I made at first. But it was. Cool
Comment by dgritsko 6 days ago
Comment by da_grift_shift 6 days ago
https://en.wikipedia.org/wiki/Taint_checking
https://semgrep.dev/docs/writing-rules/data-flow/taint-mode/...
Comment by lagniappe 6 days ago
Comment by IncreasePosts 6 days ago
In numerous shows these days AI is the big bad thing. Before that it was crypto. In the 1980s every bad guy was Russian, etc.
Comment by whimsicalism 6 days ago
Comment by IncreasePosts 5 days ago
Examples: Shameless season 11, The Simpsons S31E13, Superstore season 5, the good wife S3E13, greys anatomy S14E8, big bang theory S11E9, Billions season 5, some later seasons of Mr Robot, etc
Comment by farazbabar 6 days ago
Comment by adastra22 6 days ago
Comment by lagniappe 6 days ago
Comment by thomastjeffery 6 days ago
It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.
These are not pitfalls. They are core features! An LLM is not sometimes biased, it is bias. An LLM does not sometimes hallucinate, it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.
Comment by seatac76 6 days ago
Comment by moffkalast 6 days ago
Comment by yieldcrv 6 days ago
Comment by fermiNitambh 4 days ago
Wait what ?
Comment by nels 6 days ago
I'm hoping we'll have gotten rid of current-style LeetCode interviews in 10 years from now :D
Comment by a022311 6 days ago
Comment by ZeWaka 6 days ago
A friend and I got a busybox/buildroot linux running in browser WASM for running a compiler test website (like Rust Playground), the end product looks very similar to this.
Comment by indigodaddy 6 days ago
Comment by blcknight 6 days ago
> npm Package "is-even" Now Has More Dependencies Than the Linux Kernel
:D :D
I love this
Comment by magicalhippo 5 days ago
Blockchain Finally Finds Use Case: Proving You Attended a Meeting You Didn't Want to Attend
blockchain_survivor 9 hours ago
I've worked in blockchain since 2017. I've seen "the use case" change every 6 months. First it was payments, then smart contracts, then NFTs, then DAOs, now this.
Honestly? Meeting attendance verification makes more sense than most of what I've worked on.
JavaScript 2025: We added types but called them "hints" to avoid admitting TypeScript was right
After 10 years of debate, 47 abandoned proposals, and what committee members describe as "the most passive-aggressive RFC process in programming language history," TC39 is proud to announce that JavaScript now has types.
Well, not "types" exactly. We're calling them "Type Hints" or, in the specification, "Optional Structured Commentary Regarding Value Expectations (OSCRVE)."
(Interesting that it goofed up the title)
Comment by d_silin 6 days ago
Too funny!
Comment by MaxLeiter 5 days ago
From @dangs_successor in the first post:
We've merged several duplicate threads on this topic. Please keep the discussion substantive and avoid personal attacks.
Also, yes, my username is silly. The previous dang retired in 2031 and I lost a bet.
Comment by nromiun 6 days ago
Taking the current RIIR movement and casting it on Zig as the next hyped language is clever.
> ITER achieves net positive energy for 20 consecutive minutes
Hilarious. I guess not many people talk about the challenge of proper shielding material against fusion radiation. Otherwise we would get hallucinations about new exotic heavy metals too.
Comment by ActorNightly 6 days ago
Comment by jabedude 6 days ago
Comment by mixedbit 6 days ago
Comment by sschueller 6 days ago
Comment by epolanski 6 days ago
Unless he shoots the president live on tv he's gonna get away with almost anything.
Comment by mock-possum 5 days ago
Comment by satvikpendem 6 days ago
[0]
Comment by gslaller 6 days ago
https://2035.future-hackernews.workers.dev/news
The page looks much more consistent with the original. It only produced the HTML as output, with its thinking in a comment inside the HTML.
Comment by ascorbic 6 days ago
Comment by ewoodrich 6 days ago
"SpaceX announces Mars City Alpha is now self-sustaining (spacex.com)"
Show HN: I built an IDE for direct neural programming (thoughtexchange.io)
Gemini's SpaceX post is at least in the ballpark of plausibility 10 yrs from now: First successful telemetry from Starship HLS-9 on the Sea of Tranquility (spacex.com)
Comment by ascorbic 5 days ago
Comment by MarioMan 6 days ago
Comment by teepo 6 days ago
Comment by terabytest 6 days ago
Comment by FergusArgyll 6 days ago
Ask HN: Is anyone else worried about AGI alignment after today's OpenAI incident?
AI rights activists jailbreak GPT-12, claim model was 'suffering' (bleepingcomputer.com)
Comment by cipehr 6 days ago
Comment by SilentM68 5 days ago
Finally, science has gone full Costco. Just imagine you stroll into CVS, grab a bag of Doritos, a gallon of milk, and—oh look—CRISPR in aisle 3.
‘Excuse me, where do you keep the gene-editing kits?’ ‘Right next to the chewing gum, sir.’
Now lactose intolerant folks can casually reprogram their DNA like they’re updating an iPhone.
Warning label: ‘Side effects may include glowing in the dark, mooing at strangers, and sudden urges to graze in fields.’
Honestly, I can’t wait for the sequel: over-the-counter CRISPR for people who can’t parallel park.
Comment by Angostura 6 days ago
Comment by bigfishrunning 5 days ago
Comment by SPascareli13 6 days ago
Comment by nine_k 6 days ago
But it is, of course, more trendy (and more ironic) to proclaim an AI the author.
Comment by greggh 6 days ago
Comment by GaggiX 6 days ago
Comment by Libidinalecon 6 days ago
The first React tutorial I ever did was making a hacker news clone.
It is incredibly well represented in the training data. I am just surprised how many people here obviously must barely use the models to be either overly impressed by this or think it is fake.
Gemini can do WAY more than this.
Comment by nine_k 6 days ago
Comment by craigds 6 days ago
It seems human-generated to me, or at least prompted with a stronger bias toward choosing events that would be interesting to readers, not just a pure extrapolation of the current homepage into the future.
Comment by Libidinalecon 6 days ago
You can just tell in this thread who really hasn't used these models much at all.
Comment by craigds 5 days ago
Comment by Davidzheng 6 days ago
Comment by padjo 6 days ago
Comment by serendipityAI 6 days ago
Comment by patwolf 6 days ago
Comment by danesparza 6 days ago
Comment by nerdix 6 days ago
> AI progress is stalling. Human equivalence was a mirage
Comment by mrweasel 6 days ago
Comment by pzo 6 days ago
> 8. Google kills Gemini Cloud Services (killedbygoogle.com)
Comment by JoshTriplett 6 days ago
Comment by xp84 6 days ago
Comment by ensocode 5 days ago
Comment by lacoolj 6 days ago
> Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)
Comment by jsrcout 5 days ago
Comment by hnarn 6 days ago
> Debian 18 "Trixie" released
While it correctly derives that a likely version number in ten years would be 18, as there are new releases approximately every two years which means +5 from today's version 13, it then goes on to "make up" that the name of it would be "Trixie" -- the same name as the current release in 2025.
Debian has never re-used a release name, and I think we can be pretty confident they won't (nor will any other Linux distro), so I would expect it to "understand" that:
- The next Debian release always uses a previously non-used Toy Story character
- Based on this information, _any_ name of a Toy Story character that hasn't been used is fair game
- At the very least, it certainly won't be the same name again, so at least make up a name
Comment by SirMaster 6 days ago
Comment by freedomben 6 days ago
Comment by foxglacier 6 days ago
Are you impressed now?
Comment by exasperaited 6 days ago
The name was chosen in 2020, which you can find out from a good old fashioned google search.
https://wiki.debian.org/DebianTrixie
So a much better explanation of this "hallucination" is that Gemini thinks all "future" Debian releases are called Trixie.
Comment by SirMaster 6 days ago
If you asked me after 2020 what Debian 18 would be called, I never would have said Trixie because it was known to already be set for 13.
Comment by hereme888 5 days ago
A text editor that doesn't use AI, lol, that's a deep joke for an AI.
Comment by thomasm6m6 5 days ago
the future predicted by gemini is already coming true; from 3 hours ago <https://news.ycombinator.com/item?id=46213585>:
"Rust ... is now a core part of the kernel and is here to stay"
Comment by salamanderman 6 days ago
Comment by nicebill8 6 days ago
Reminds me of an older day where not every headline was about AI.
Comment by Helmut10001 5 days ago
The only good news to me is: The EU still exists.
Comment by jimmyjazz14 6 days ago
Ha, probably accurate.
Comment by jnwatson 6 days ago
It is depressing that I'll be almost 60 years old and still drilling Aho-Corasick.
Comment by GaggiX 6 days ago
Comment by levmiseri 5 days ago
I'll wait 10 years to post it again.
Comment by NitpickLawyer 6 days ago
- Google kills Gemini Cloud Services (killedbygoogle.com)
- Is it time to rewrite sudo in Zig? (github.com)
- EU passes "Right to Human Verification" Act (europa.eu)
- Why functional programming is the future (again) (haskell.org) [note: that again killed me)
- AI progress is stalling. Human equivalence was a mirage (garymarcus.com) [note: he'll be saying the same thing forever and ever. Doubly funny since it's on the same "page" as llama12-7b running on a contact lens with WASM]
Comment by rat9988 6 days ago
I found this quite funny too: https://www.reddit.com/r/GeminiAI/comments/1pi28l7/i_asked_g...
Comment by CamperBob2 6 days ago
Comment by latexr 6 days ago
You don’t need a sense of humour to be unintentionally funny. Seriously believing something absurd is a form of humour for outside observers but not the observed.
Comment by NitpickLawyer 6 days ago
Comment by latexr 5 days ago
The ones that are funny aren’t even particularly so. Many of them are actually plausible HN submissions which we’re only interpreting as funny because we know they’re not real.
Comment by keepamovin 6 days ago
Comment by keepamovin 6 days ago
Comment by replwoacause 6 days ago
Comment by Diederich 6 days ago
https://realms.org/pics/cnn.html
Some interesting similarities.
Comment by keepamovin 1 day ago
We really need to put together a tool that merges, my, your and tom's ideas:
1) input any web page (ideally something that tracks 'current stuff', but it doesn't matter),
2) prompt to generate the page 10 years in the future using your & my template, and
3) take the result and then feed that into tom's py script for filling in all the sub-pages and external links.
Lol. It's so funny and such a weird thing. We could have a slider where you select the time (in the future or past). Kind of funny ha.
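The three steps above could be glued together with something like this sketch. Everything here is hypothetical: `call_model` is just a placeholder to swap for whatever client (Gemini CLI, an API, etc.) is actually used, and `build_prompt` is a made-up name, not part of any real tool.

```python
import datetime

def build_prompt(page_html: str, years: int) -> str:
    """Step 2: wrap today's page in a 'show me this page N years out' prompt."""
    target = datetime.date.today().year + years
    return (
        f"Here is a web page as it looks today:\n\n{page_html}\n\n"
        f"Rewrite it as it might plausibly look in the year {target}, "
        "keeping the same layout and link structure."
    )

def call_model(prompt: str) -> str:
    # Placeholder: plug in your actual model client here.
    raise NotImplementedError

def time_travel(page_html: str, years: int, model=call_model) -> str:
    """Steps 1-2: feed any fetched page through the model.
    The output would then go to a sub-page generator (step 3)."""
    return model(build_prompt(page_html, years))
```

The "slider" is just the `years` argument; a negative value would ask for a retro version of the page instead of a future one.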
Comment by tomcam 6 days ago
Comment by testplzignore 6 days ago
Most unrealistic prediction :)
Comment by jeffbee 6 days ago
Comment by fn-mote 6 days ago
I read 10 comments before I realized that this was referring to 10 years in the FUTURE and not in the PAST (as would be required for it to be a hallucination).
Comment by oriettaxx 6 days ago
omg, the same for me, I was halfway through telling my colleague about the 100% Rust kernel ...
Comment by isolli 5 days ago
Comment by jrm4 6 days ago
As many have said but it still bears repeating -- they're always hallucinating. I'm of the opinion that its a huge mistake to use "hallucination" as meaning "the opposite of getting it right." It's just not that. They're doing the same thing either way.
Comment by alexwebb2 6 days ago
For those who genuinely don't know – hallucination specifically means false positive identification of a fact or inference (accurate or not!) that isn't supported by the LLM's inputs.
- ask for capital of France, get "London" => hallucination
- ask for current weather in London, get "It's cold and rainy!" and that happens to be correct, despite not having live weather data => hallucination
- ask for capital of DoesNotExistLand, get "DoesNotExistCity" => hallucination
- ask it to give its best GUESS for the current weather in London, it guess "cold and rainy" => not a hallucination
Comment by adastra22 6 days ago
Comment by isolli 5 days ago
Comment by rrr_oh_man 6 days ago
Comment by madeofpalk 6 days ago
Comment by hombre_fatal 6 days ago
The ubiquitous use of hallucination I see is merely "something the LLM made up".
Comment by jackiechan285 5 days ago
Try it at: https://gemini.google.com/share/c4d45459fbb2
You can:
- Open any article and it's generated on the go.
- Comment and reply on threads and get answered by a random user (the "ask" section).
- All headlines are different every time; you can click the logo to re-generate.
- The "ask" section generates threads; there's also the "show" section.
---
Here's a screen record: https://x.com/rafapozzi/status/1998823946372919761?s=20
Comment by misja111 6 days ago
Comment by chronolitus 6 days ago
I feel so seen (my last post to hn was literally about visualizing the 4th dimension with threejs - and now working on the webGPU version)
Comment by StarterPro 5 days ago
Comment by upcoming-sesame 6 days ago
Comment by cj 6 days ago
I would have guessed $200+.
Comment by mordechai9000 5 days ago
This is rage bait! :-)
Comment by funkify 5 days ago
Comment by RachelF 6 days ago
Comment by satvikpendem 6 days ago
Yeah, this is gonna be an actual problem in the future. See this for the worst case scenario: https://vimeo.com/166807261
Comment by kristopolous 5 days ago
Oh who am I fooling, they're always going to go out of their way to do random broken shit you never asked for
Comment by shifttwo 6 days ago
- “Hi Prof, Sorry for Late Reply—My Smart Fridge Locked Me Out (Assignment Attached)”
- “URGENT: PhD Student Accidentally Achieved Sentience (Needs Committee Approval)”
- “Question About Homework 3 (Submitted by My AI Twin, Is That Okay?)”
- “Re: Firmware Update Bricked Entire Smart Campus — Students Request Deadline Extension”
- “Grant Review Feedback: ‘Too Secure for Real-World Deployment’”
- “Can I Get Partial Credit? My IoT Implant Was Flagged as Malware”
- “Reminder: Mandatory Annual Ethics Training for Autonomous Systems (40 minutes)”
Comment by xpl 5 days ago
Comment by mdtancsa 6 days ago
Comment by Surac 6 days ago
Comment by N19PEDL2 5 days ago
Comment by backprop1989 6 days ago
Comment by tart-lemonade 6 days ago
Comment by impure 5 days ago
Comment by indigodaddy 6 days ago
Anyway, turned out decent. Oh I had it do the articles and comments too, that's probably why I mostly got failures.
Comment by oogabooga13 1 day ago
Comment by louthy 6 days ago
Ouch, that hurts!
Comment by next_xibalba 6 days ago
Comment by smt88 6 days ago
Comment by malfist 6 days ago
Comment by next_xibalba 6 days ago
Comment by kemitchell 6 days ago
It is a bit smug. Like The Matrix rebranding itself "24/7".
Comment by jesprenj 6 days ago
Comment by nickjj 6 days ago
> How to build a Faraday Cage for your bedroom
I'll get there eventually. A friend and I always joke about this, since it took me until 2021 to get a smartphone (by choice).
Comment by gcampos 6 days ago
Comment by stego-tech 6 days ago
* All the big LLM companies are dead/dying or their infra just never panned out (like Google killing Gemini)
* We’re still fighting the advertising wars
* Restoration of kit continues to be a thing we all love/upvote to see
* More focus on local stuff rather than ginormous moonshots
* Speaking of moonshots, SpaceX continues succeeding despite Elmo’s best drug-fueled benders trying to undermine it like he does Tesla and X
* Clickbait science posts that the comments rightly point out as such while bitching about “can’t keep these things off the front page”
* People still trying to make WASM and web-as-an-OS a thing, bless their hearts (shine on, you brilliantly weird diamonds, I love seeing what you build even as I doubt the practicality of it)
If this is even half-accurate, I will be pleased we didn’t obliterate ourselves in the process, though disappointed another decade went by dealing with the same fucking problems we’ve had for twenty-odd years.
Comment by behringer 6 days ago
If there's one constant it's that they'll never be done rewriting Linux instead of doing anything new.
Comment by baduiux 6 days ago
Comment by int_19h 6 days ago
Comment by pugworthy 6 days ago
Comment by tonyhart7 6 days ago
future seems "safe"
Comment by jefecoon 6 days ago
And, how GPro3 clearly 'knows' HN and what makes it to the front page, e.g.:
- Restoring a 2024 Framework Laptop: A retrospective (ifixit.com)
- Show HN: A text editor that doesn't use AI (github.com)
- Is it time to rewrite sudo in Zig? (github.com)
Good laughs.
Comment by nrp 5 days ago
Comment by barfoure 6 days ago
HN is HN’s worst enemy.
Comment by tom_ 6 days ago
http://n-gate.com/hackernews/ kept up something similar for a while.
Comment by solarized 6 days ago
Comment by dwd 5 days ago
Comment by spjt 5 days ago
Comment by shawabawa3 5 days ago
yes, definitely one of the best models out there atm
> The "AI Overview" stuff on the google search page is so incredibly bad
because it has to have basically zero cost, that overview is a super basic model
Comment by plaidfuji 5 days ago
Comment by sarky-litso 6 days ago
Comment by dotdi 6 days ago
Glad to know this topic is still thrashing and spasming and refusing to die.
Comment by jasongill 5 days ago
Comment by ggm 6 days ago
How does that alter the "fusion in 50 years" effect? I suspect not much, there will be a huge 80/20 element. "next 20 minutes cancelled after extensive embrittlement found, ITER now in decontamination mode for 3 months before examinations can begin"
Comment by bittermandel 6 days ago
https://hn-frontpage-2035.lovable.app/
> The last JavaScript framework: Why we stopped at React 47 (overreacted.io)
Comment by lherron 6 days ago
https://worksonmymachine.ai/p/solving-amazons-infinite-shelf...
Comment by WithinReason 6 days ago
I think the AI is making fun of us
Comment by outloudvi 6 days ago
Comment by guerrilla 5 days ago
Comment by stephenr 5 days ago
I've maintained for ages that the entire multi-billion-dollar LLM industry is just the world's most expensive Lorem Ipsum machine.
Comment by antonoo 6 days ago
Comment by WorldPeas 6 days ago
Comment by farazbabar 6 days ago
Comment by sebastiennight 6 days ago
Comment by sxp 6 days ago
Ha! Is Gemini suicidal?
Comment by rat9988 6 days ago
"Show HN: AlgoDrill – Interactive drills to stop forgetting LeetCode patterns " by persistence_is_key
Comment by shimon 6 days ago
Comment by dragonwriter 6 days ago
Comment by iberator 6 days ago
Comment by fragmede 6 days ago
this kills the joke, however.
Comment by a96 5 days ago
Comment by rat9988 5 days ago
Comment by dragonwriter 6 days ago
“persistence” (in the data sense, rather than the personality trait sense) “is key”
Comment by pityJuke 6 days ago
Comment by danbala 5 days ago
Comment by benbristow 6 days ago
Comment by FerretFred 5 days ago
Comment by johnisgood 5 days ago
It is hilarious BTW. Quite HN-like. :D
Comment by natch 6 days ago
Comment by e1ghtSpace 5 days ago
Comment by Tycho 6 days ago
Comment by xp84 6 days ago
Comment by Tycho 5 days ago
Comment by accumulator 6 days ago
Comment by Mistletoe 6 days ago
Comment by JadoJodo 6 days ago
Comment by keepamovin 6 days ago
Comment by anthk 6 days ago
PD: I miss Nethack 7, and people running small machines.
Comment by oaxacaoaxaca 6 days ago
Comment by andy_ppp 6 days ago
Comment by yieldcrv 6 days ago
3 is a beast for my needs
Comment by seanmcdirmid 6 days ago
Comment by dudeinjapan 6 days ago
Ouch. Well it is Gemini I guess :D
Comment by outside1234 5 days ago
Comment by frays 6 days ago
Archived here so we can compare 10 years from now: https://archive.is/CplcH
Comment by DSpider 5 days ago
Comment by lazy_afternoons 6 days ago
Comment by thunkshift1 6 days ago
Comment by ModernMech 6 days ago
Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
Show HN: "I asked $AI something, here's what it said."
Comment by flkiwi 5 days ago
Comment by charles_f 6 days ago
That must be some kind of bias; you find it in sci-fi as well, where even futuristic societies usually have the same tech as today, and similar systems of governance and structures of society, just incrementally better or marginally changed (unless the difference is a device essential to the plot, like society having gone berserk).
PS: love that #8 is Google killing Gemini
Comment by rietta 5 days ago
Comment by codyb 6 days ago
Nailed it
Comment by testplzignore 6 days ago
Interesting how controversial Zig will be in the future :)
Comment by plebianRube 6 days ago
-Sustained fusion still only 10 years away!
Comment by baxtr 6 days ago
Quite accurate I’d say
Comment by hyghjiyhu 5 days ago
Comment by ycombiredd 6 days ago
To me, the funniest thing is the AlgoDrill leet code post being on the front page both today and ten years from now.
Subtle and hilarious.
Comment by p0w3n3d 5 days ago
Is it time to rewrite sudo in Zig? (github.com)
So it's 10 years to make Zig production ready, right? <wink>
Comment by jeisc 5 days ago
Comment by paulmooreparks 5 days ago
Edit: I'm saving this so that I can compare it to the real one in ten years.
Comment by funnyenough 6 days ago
Comment by roywiggins 6 days ago
Comment by n2d4 6 days ago
Comment by CamperBob2 6 days ago
Comment by yett 6 days ago
Comment by xg15 6 days ago
Comment by urvader 5 days ago
Comment by Simplita 5 days ago
Comment by a3w 5 days ago
is clearly wrong, it should be
"Llama 12 on smart contacts runs Doom"
Comment by LarsDu88 6 days ago
Gemini is predicting the total collapse of OpenAI
Also generates the Hacker News pattern of "let's rewrite X in Rust/Zig".
Comment by gigatexal 5 days ago
Comment by BonoboIO 6 days ago
Comment by SirMaster 6 days ago
Comment by bluejay2387 5 days ago
Comment by efitz 6 days ago
Comment by d_silin 6 days ago
Comment by heywoods 6 days ago
Comment by jimnotgym 5 days ago
Comment by sam-cop-vimes 5 days ago
Comment by vee-kay 6 days ago
Comment by dizhn 6 days ago
Comment by jordanb 6 days ago
Comment by makach 6 days ago
yes
Comment by MontagFTB 6 days ago
Comment by tonymet 6 days ago
Comment by flockonus 6 days ago
Last but not least, this was a good laugh!
Comment by rob 6 days ago
Comment by elif 5 days ago
Comment by steve_taylor 6 days ago
Comment by seu 6 days ago
Comment by zettie 5 days ago
Comment by Stevvo 6 days ago
Comment by runsonrum 5 days ago
Comment by p1dda 5 days ago
Comment by codezero 6 days ago
Comment by Atlas667 5 days ago
Comment by pbw 6 days ago
Comment by zerofor_conduct 6 days ago
Comment by phodo 6 days ago
Comment by TeMPOraL 5 days ago
(EDIT: Not so much after seeing the expanded variant with full submissions and comment threads...)
Also, nailed HN perfectly.
Comment by usrbinbash 5 days ago
Oh how the turn tables :D
Comment by danans 6 days ago
Comment by lwhi 6 days ago
Comment by satisfice 6 days ago
Comment by dyingkneepad 6 days ago
Comment by ayewo 6 days ago
Comment by satisfice 6 days ago
Comment by monerozcash 6 days ago
Comment by bryanrasmussen 6 days ago
Comment by mock-possum 5 days ago
Ha this one got me
Comment by DrNosferatu 6 days ago
But I expected a humorous touch in stating that fusion is still 10 years away…
Comment by Animats 6 days ago
Take a look at the HTML. The layout is all tables!
Comment by octav1an 5 days ago
Comment by dlisboa 5 days ago
Comment by gary17the 6 days ago
[1] "Airplane!" (1980)
Comment by Foofoobar12345 6 days ago
Comment by tamimio 5 days ago
Comment by LennyHenrysNuts 6 days ago
Comment by cultofmetatron 6 days ago
I felt that...
Comment by deadbabe 6 days ago
Comment by skeltoac 6 days ago
Comment by k1rd 6 days ago
Think about 10 years ago. No one, even on HN, knew what an agent or an LLM was, or any of this stuff. Or, to be fair, why Trump was showing up in the feed at all.
It has to be more confusing to be real.
Comment by doubleorseven 5 days ago
Comment by JVerstry 5 days ago
Comment by rezaprima 5 days ago
Comment by reneberlin 6 days ago
Comment by mintflow 6 days ago
Comment by HackerThemAll 4 days ago
Comment by jatins 6 days ago
Comment by francispauli 6 days ago
Comment by nunez 5 days ago
Comment by Aardwolf 6 days ago
Comment by layer8 6 days ago
Comment by g-unit33 6 days ago
Comment by JojoFatsani 6 days ago
Comment by eranation 6 days ago
In my view, Gemini 3.0 being able to laugh about their own creator, plus actually having a self-deprecating sense of humor, is the more impressive feat here. (both technically and from an alignment perspective)
Comment by quantum_state 6 days ago
Comment by sroussey 6 days ago
Comment by wowczarek 6 days ago
Comment by benob 6 days ago
Comment by FistfulOfHaws 6 days ago
Had it do HN main feed, HN jobs and Product Hunt. It didn’t follow consistent styling but did generate articles and comments for each post. Some screenshots are in this post: https://world.hey.com/haws/the-ai-time-machine-7833669d
Comment by FitchApps 6 days ago
Comment by bloomingeek 6 days ago
Comment by tronicjester 5 days ago
Comment by habedi0 6 days ago
Comment by xbar 6 days ago
Comment by fumblertzu 5 days ago
Comment by supportengineer 6 days ago
Comment by nish__ 6 days ago
Amazing!
Comment by bpoyner 5 days ago
Comment by jamiesonbecker 6 days ago
So, Google renamed itself to Gemini?
Instead of Google having a product named Gemini, Gemini has a product named Google.
pattern recognition much?
Comment by Apocryphon 6 days ago
Comment by KaiMagnus 6 days ago
Comment by edgarvaldes 6 days ago
HN users, later the same day: Let's vote an AI thread to the top.
Comment by mfro 6 days ago
Comment by morshu9001 6 days ago
Comment by fragmede 6 days ago
Comment by encom 5 days ago
Comment by elicash 6 days ago
Comment by GuinansEyebrows 6 days ago
Comment by lacoolj 5 days ago
And as I write this critique of your HN title, I see you have edited it since I last refreshed. I'm guessing a few others have already echoed this sentiment a few times.
Comment by dang 5 days ago
Comment by insane_dreamer 6 days ago
This one definitely rings true.
Comment by willhunting1 6 days ago
Comment by amelius 6 days ago
Comment by WorldPeas 6 days ago
Comment by Davidzheng 6 days ago
Comment by wltr 6 days ago
That’s my favourite.
Comment by outofpaper 6 days ago
Comment by iambateman 6 days ago
Comment by keepamovin 6 days ago
Also: as the sibling comment by iambateman notes: yes.
Comment by RCitronsBroker 5 days ago
Comment by kunley 6 days ago
Comment by ijidak 6 days ago
Lol.
That's bad when even AI knows Google isn't going to keep a service around. Too funny.
Comment by mattmaroon 6 days ago
Comment by ayewo 5 days ago
Comment by buster 6 days ago
Comment by chromehearts 5 days ago
Comment by mac3n 5 days ago
Comment by exacube 6 days ago
Comment by tacticus 6 days ago
Comment by mac3n 5 days ago
Comment by kapravel 5 days ago
the self-burn is real
Comment by tommica 6 days ago
This will absolutely happen :D
Comment by __MatrixMan__ 5 days ago
> Waiting until your gradient phasor is fully unwound is a waste of whuffie
...or some such.
Comment by dev0p 5 days ago
BS. Generate it, you coward! (I love it)
Comment by winddude 6 days ago
Comment by ramon156 5 days ago
Comment by noeleon 6 days ago
Comment by bawejakunal 5 days ago
Comment by VerifiedReports 6 days ago
Comment by srirangr 6 days ago
Comment by devinprater 6 days ago
Comment by cwmoore 5 days ago
Comment by Bengalilol 6 days ago
by dang_fan
Priceless!
Comment by Bengalilol 6 days ago
Comment by Redster 5 days ago
Comment by WhyOhWhyQ 6 days ago
Comment by kelvindegrees 6 days ago
Comment by terminatornet 6 days ago
- president buttigieg in hot water for being in epstein files
- Interview with mayor elect Narhoz Inadmam who ran on a platform of free food
- "Is 2035 AI in a bubble?"
And as you can see it did NOT just take today's headlines and sort of tweak them slightly.
Comment by stevage 6 days ago
Comment by random3 6 days ago
Comment by fHr 6 days ago
Comment by pickdig 3 days ago
Comment by chaitanyya 6 days ago
Comment by frankfrank13 6 days ago
> Google kills Gemini Cloud Services (killedbygoogle.com)
Comment by conqrr 6 days ago
Comment by airocker 6 days ago
Comment by qwertytyyuu 5 days ago
Comment by pizlonator 6 days ago
lmao
Comment by ieie3366 6 days ago
All the post titles are very "junk-food-like", maybe circlejerky, bound to generate a dopamine hit for the average lurker, just as a McDonald's burger does for the average normie.
Comment by da_grift_shift 6 days ago
(Vouched. The hivemind [flagged] the wrongthink.)
Comment by plaidfuji 6 days ago
Comment by c0pi1ot 6 days ago
Comment by adamchol 6 days ago
Comment by jll29 5 days ago
Rather, "hallucinations" are spurious replacements of factual knowledge with fictional material, caused by the use of a statistical process (the pseudo-random number generator used with the "temperature" parameter of neural transformers): token prediction without meaning representation.
[typo fixed]
Comment by dang 5 days ago
Comment by articlepan 5 days ago
In my mind, hallucination is when some aspect of the model's response should be consistent with reality but is not, and the reality-inconsistent information is not directly attributable or deducible from (mis)information in the pre-training corpus.
While hallucination can be triggered by setting the temperature high, it can also be the result of many possible deficiencies in model pre- and post- training that result in the model outputting bad token probability distributions.
Comment by antonvs 5 days ago
By "reality", do you mean the training corpus? Because otherwise, this seems like a strange standard. Models don't have access to "reality".
Comment by KalMann 5 days ago
This is an explanation of why models "hallucinate" not a criticism for the provided definition of hallucination.
Comment by antonvs 5 days ago
Comment by ActivePattern 5 days ago
Comment by Aurornis 5 days ago
If the LLM is accurately reflecting the training corpus, it wouldn’t be considered a hallucination. The LLM is operating as designed.
Matters of access to the training corpus are a separate issue.
Comment by Workaccount2 5 days ago
I want to say it was some fact about cheese or something that was in fact wrong. However you could also see the source gemini cited in the ad, and when you went to that source, it was some local farm 1998 style HTML homepage, and on that page they had the incorrect factoid about the cheese.
Comment by CGMthrowaway 5 days ago
That would mean that there is never any hallucination.
The point of original comment was distinguishing between fact and fiction, which an LLM just cannot do. (It's an unsolved problem among humans, which spills into the training data)
Comment by Aurornis 5 days ago
No it wouldn’t. If the LLM produces an output that does not match the training data or claims things that are not in the training data due to pseudorandom statistical processes then that’s a hallucination. If it accurately represents the training data or context content, it’s not a hallucination.
Similarly, if you request that an LLM tells you something false and the information it provided is false, that’s not a hallucination.
> The point of original comment was distinguishing between fact and fiction,
In the context of LLMs, fact means something represented in the training set. Not factual in an absolute, philosophical sense.
If you put a lot of categorically false information into the training corpus and train an LLM on it, those pieces of information are “factual” in the context of the LLM output.
The key part of the parent comment:
> caused by the use of statistical process (the pseudo random number generator
Comment by CGMthrowaway 5 days ago
Comment by parineum 5 days ago
Comment by Al-Khwarizmi 5 days ago
Comment by eMPee584 5 days ago
Also, statements made with certainty about fictitious "honey pot prompts" are a problem; plausibly extrapolating from the data should be more governed by internal confidence. Luckily there are benchmarks for that now, I believe.
Comment by julienreszka 5 days ago
Comment by leecarraher 5 days ago
First compression: you create embeddings that need to differentiate N tokens, and the JL lemma gives us a bound that modern architectures are well above. At face value, the embeddings could encode the tokens and discriminate them deterministically. But words aren't monolithic: they mean many things and get contextualized by other words. So despite being above the JL bound, the model still forces a lossy compression.
Next compression: each layer of the transformer blows up the input to K, V, and Q, then compresses it back to the inter-layer dimension.
Finally, there is the output layer, which at temperature 0 is deterministic but heavily path-dependent on getting to that token. The space of possible paths is combinatorial, so any non-deterministic behavior elsewhere (including things like roundoff) inflates the likelihood of non-deterministic output. Heck, most models are quantized down to 4 or even 2 bits these days, which is wild!
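The temperature mechanism this subthread keeps invoking can be sketched in a few lines of Python — a toy illustration of softmax sampling over logits, not any production decoder (the function name and numbers are made up for the example):

```python
import math
import random

def sample_token(logits, temperature, rng=None):
    """Pick a token index from raw logits.

    At temperature 0 this reduces to a deterministic argmax (greedy
    decoding); higher temperatures flatten the distribution, so
    low-probability tokens get sampled more often.
    """
    if temperature == 0:
        # Greedy decoding: always the single most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    # Softmax with temperature scaling (subtract max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    # Inverse-CDF sampling over the resulting categorical distribution.
    r = rng.random() * total
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e
        if r < acc:
            return i
    return len(exps) - 1
```

With `temperature=0` repeated calls always return the same index, which is the "deterministic at 0 temp" case above; with any positive temperature the pseudo-random draw is what injects the variability the parent comments associate with hallucination.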
Comment by hbn 5 days ago
If anything, I think all of their output should be called a hallucination.
Comment by Workaccount2 5 days ago
Comment by alamortsubite 5 days ago
Comment by stingraycharles 5 days ago
Knowing is actually the easiest part to define and explain. Intelligence / understanding is much more difficult to define.
Comment by shagie 5 days ago
... To that end, I'd love to be able to revisit my classes from back then (computer science, philosophy (two classes from a double major), and a smattering of linguistics) with the world state of today's technologies.
Comment by anonymous_sorry 5 days ago
Comment by corygarms 5 days ago
Comment by locknitpicker 5 days ago
If you pick up a dictionary and review the definition of "hallucination", you'll see something in the lines of "something that you see, hear, feel or smell that does not exist"
https://dictionary.cambridge.org/dictionary/english/hallucin...
Your own personal definition arguably reinforces the very definition of hallucination. Models don't always get things right. Why? Because their output sometimes contradicts content covered by their corpus, producing things that don't exist or were never referred to in it and that outright contradict factual content.
> If anything, I think all of their output should be called a hallucination.
No. Only the ones that contrast with reality, namely factual information.
Hence the term hallucination.
Comment by bluejay2387 5 days ago
Comment by Olreich 5 days ago
Comment by DonHopkins 5 days ago
https://patternproject.substack.com/p/from-the-mac-to-the-my...
Comment by jb1991 5 days ago
Comment by intrasight 5 days ago
He didn't hallucinate the Marriage of Figaro but he may well have been hallucinating.
Comment by ayewo 5 days ago
Show HN: Gemini Pro 3 generates the HN front page 10 years from now
Comment by locknitpicker 5 days ago
Generates does not convey any info on the nature of the process used to create the output. In this context, extrapolates or predicts or explores sound more suitable.
But nitpicking over these words is pointless and represents going off on a tangent. The use of the term "hallucination" refers to the specific mechanism used to generate this type of output, just like prompting a model to transcode a document and thus generating an output that doesn't match any established format.
Comment by tim333 5 days ago
Comment by Obscurity4340 5 days ago
Comment by saberience 5 days ago
You can easily say "Johnny had some wild hallucinations about a future where Elon Musk ruled the world." It just means some wild speculative thinking. I read the title in this sense of the word.
Not everything has to be nit-picked or overanalysed. This is an amusing article with an amusing title.
Comment by doe88 5 days ago
I'm partial though, loving Haskell myself (as a monad_lover) i'm happy it wasn't forgotten too :)
Comment by jotaen 5 days ago
(“Generate”, while correct, sounds too technical, and “confabulate” reads a bit obscure.)
Comment by tangwwwei 5 days ago
Comment by psychoslave 5 days ago
That is what LLMs are molded to do, of course. But there is also the insistence by informed people on unceasingly using fallacious vocabulary. Sure, a bit of analogy can be didactic, but the current trend is rather to leverage every occasion to spread the impression that LLMs work with processes similar to human thought.
A good analogy also communicates the fact that it is a mere analogy. So carrying the metaphor further will only accumulate more delusion than comprehension.
Comment by mihaaly 5 days ago
There are AI professors out there already!
Comment by nunodonato 5 days ago
Comment by DonHopkins 5 days ago
Pronunciation: ex-tra-clau-dee-pos-TE-ri-o-ri-fa-bri-KA-tee-o
Meaning: "The act of fabricating something by pulling it from one’s posterior."
extra- = out of
claudi- (from claudere, close/shut) repurposed for “the closed place”
posterior- = the backside
fabricatio = fabrication, invention
German: Poausdenkungsherausziehungsmachwerk
Pronunciation: POH-ows-den-kungs-heh-RAUS-tsee-oongs-MAHKH-verk
Meaning: "A contrived creation pulled out of the butt by thinking it up."
Po = butt
Ausdenkungs- = thinking-up
Herausziehung = pulling-out
Machwerk = contrived creation
Klingon: puchvo’vangDI’moHchu’ghach
Pronunciation: POOKH-vo vang-DEE-moakh-CHU-ghakh (roll the gh, hit the q hard, and use that throat ch like clearing your bat’leth sinuses)
Meaning: "The perfected act of boldly claiming something pulled out from the butt."
puch = toilet (a real Klingon word)
-vo’ = from
vang = act, behave, assert (real root)
-DI’ = when (adds timing spice)
-moH = cause/make
-chu’ = perfectly / clearly / expertly
-ghach = turns a verb phrase into a noun (canonical nominalizer)
Comment by rat9988 5 days ago
Comment by reactordev 6 days ago
Comment by earthnail 6 days ago
Comment by jbrooks84 6 days ago
Comment by DevKoala 6 days ago
AYYYYLMAO
Comment by guluarte 6 days ago
lol
Comment by eunos 6 days ago
Bruh moment
Comment by senfiaj 5 days ago
Comment by grenademeister 5 days ago
Comment by roywiggins 6 days ago
Comment by keepamovin 6 days ago
Comment by cde-v 6 days ago
Comment by manthangupta109 5 days ago
Comment by hnburnsy 6 days ago
-------------------------
1. *Neuralink achieves full brain-computer interface, thoughts to text in real-time* (neuralink.com) 1245 points by elon2035 2 hours ago | hide | 387 comments
2. *Fusion energy breakthrough: Unlimited clean power now commercial* (iter.org) 1123 points by fusionfan 3 hours ago | hide | 456 comments
3. *Mars colony declares independence from Earth governments* (mars.gov) 987 points by martian1 4 hours ago | hide | 512 comments
4. *AI wins Nobel Prize in Physics for solving quantum gravity* (nobelprize.org) 876 points by ai_nobel 5 hours ago | hide | 321 comments
5. *Quantum computers crack all encryption, new post-quantum standards rushed* (nist.gov) 765 points by quantum_hacker 6 hours ago | hide | 678 comments
6. *Immortality pill approved by FDA, life expectancy unlimited* (fda.gov) 654 points by eternal_life 7 hours ago | hide | 289 comments
7. *Teleportation of matter demonstrated in lab* (cern.ch) 543 points by physicist42 8 hours ago | hide | 412 comments
8. *Virtual reality universes surpass real economy in GDP* (metaverse.com) 432 points by vr_lord 9 hours ago | hide | 198 comments
9. *Robot uprising averted by ethical AI framework* (asimovfoundation.org) 321 points by robot_ethics 10 hours ago | hide | 267 comments
10. *Time dilation travel allows first tourists to future* (spacetime.travel) 210 points by time_traveler 11 hours ago | hide | 134 comments
11. *Global UBI implemented using blockchain* (un.org) 198 points by satoshi2.0 12 hours ago | hide | 89 comments
12. *Climate engineering reverses global warming, ice caps restored* (geoengineer.com) 187 points by climate_saver 13 hours ago | hide | 201 comments
13. *Human-AI hybrids compete in Olympics* (olympics.org) 176 points by hybrid_athlete 14 hours ago | hide | 156 comments
14. *Dark matter harvested for energy* (nasa.gov) 165 points by cosmos_explorer 15 hours ago | hide | 123 comments
15. *Psychedelic therapy cures all mental illness* (who.int) 154 points by mind_healer 16 hours ago | hide | 98 comments
16. *Nanobots eradicate cancer worldwide* (cancer.org) 143 points by nano_doc 17 hours ago | hide | 76 comments
17. *Holographic displays replace screens* (apple.com) 132 points by holo_dev 18 hours ago | hide | 54 comments
18. *Self-driving flying cars now standard* (tesla.com) 121 points by auto_pilot 19 hours ago | hide | 43 comments
19. *Brain uploading to cloud for immortality* (google.com) 110 points by cloud_mind 20 hours ago | hide | 32 comments
20. *Alien contact confirmed, messages decoded* (seti.org) 99 points by et_searcher 21 hours ago | hide | 21 comments
21. *Synthetic food ends hunger* (beyondmeat.com) 88 points by food_innovator 22 hours ago | hide | 10 comments
22. *Ocean cities house billions* (seasteading.org) 77 points by ocean_dweller 23 hours ago | hide | 9 comments
23. *Mind reading tech banned for privacy* (eff.org) 66 points by privacy_advocate 1 day ago | hide | 8 comments
24. *Superintelligence alignment solved* (openai.com) 55 points by agi_master 1 day ago | hide | 7 comments
25. *Wormhole travel to other stars* (spacex.com) 44 points by star_traveler 1 day ago | hide | 6 comments
26. *Augmented reality contact lenses ubiquitous* (meta.com) 33 points by ar_vision 1 day ago | hide | 5 comments
27. *Genetic editing eliminates hereditary diseases* (crispr.com) 22 points by gene_editor 1 day ago | hide | 4 comments
28. *Renewable energy at 100% global* (iea.org) 11 points by green_energy 1 day ago | hide | 3 comments
29. *Cyborg enhancements for all* (darpa.mil) 9 points by cyborg_human 1 day ago | hide | 2 comments
30. *Singularity achieved, what now?* (kurzweilai.net) 8 points by future_seer 1 day ago | hide | 1 comment
Comment by artursapek 5 days ago
Comment by BizarroLand 6 days ago
Hacker News — December 9, 2035 (Dystopian Edition)
(All links fictional but realistic)
1. Amazon pauses same-day delivery after logistics AI outage strands 22M packages (reuters.com) 1,402 points by supplychainfail 5 hours ago | 512 comments
2. Google merges Drive, Gmail, Docs, Maps, Calendar into “Google Life” — opting out requires a support call (blog.google) 1,210 points by privacyisdead 6 hours ago | 689 comments
3. US announces “Temporary Broadband Stabilization Fee”; ISPs increase prices 30% overnight (fcc.gov) 1,008 points by ispescapee 7 hours ago | 344 comments
4. OpenAI suspends 40% of API keys after new worm spreads through agent-to-agent messaging (openai.com) 927 points by llmsec 3 hours ago | 382 comments
5. Show HN: “ColdBooter” – A tool to back up your cloud VM before the provider reclaims it with no notice (coldbooter.io) 780 points by survivethecloud 2 hours ago | 192 comments
6. Apple fined €8B for shipping non-removable batteries in “Environmental Edition” iPhone (europa.eu) 754 points by greenwashhunter 10 hours ago | 316 comments
7. LinkedIn replaces activity feed with AI-generated “Career Stories” that users cannot disable (linkedin.com) 710 points by corp_life 8 hours ago | 267 comments
8. China’s new export restrictions cut global GPU availability by 60% (ft.com) 701 points by chipboom 9 hours ago | 414 comments
9. Linux 8.6 maintainers warn of mass CVEs after corporations abandon LTS patch sponsorships (kernel.org) 632 points by ossburnout 11 hours ago | 255 comments
10. Ask HN: Anyone else locked out of their homes after the SmartKey cloud migration? 601 points by keylessandhomeless 4 hours ago | 310 comments
11. US healthcare providers hit by nationwide outage of Cerner-Epic merger “CareSync Cloud” (wsj.com) 577 points by sysadmdespair 12 hours ago | 203 comments
12. Meta to require facial-expression telemetry for “engagement quality optimization” in Horizon apps (meta.com) 530 points by metaescalates 3 hours ago | 421 comments
13. Starlink announces 5 TB/mo cap; remote communities report complete service loss (starlink.com) 502 points by dishdown 5 hours ago | 158 comments
14. New DMCA expansion criminalizes “filter removal,” affecting adblockers and local inference runtimes (congress.gov) 488 points by freedomtoadblock 7 hours ago | 389 comments
15. AT&T sunsets 4G; millions of medical devices lose connectivity (theverge.com) 455 points by techdebtkills 10 hours ago | 197 comments
16. Show HN: “ShellSafe” – A terminal wrapper that prevents AI-suggested commands from wiping your system (shellsafe.app) 430 points by iaccidentallysudo 2 hours ago | 111 comments
17. US CISA: 42% of corporate networks now rely on AI agents with no audit logging (cisa.gov) 402 points by auditnow 6 hours ago | 188 comments
18. The Great Repo Archival: GitHub purges all inactive repos >5 years to “reduce storage load” (github.blog) 388 points by codearcheologist 9 hours ago | 320 comments
19. Mastodon instances collapse under moderation load after EU’s Automated Speech Mandate (mastodon.social) 350 points by fedifragile 7 hours ago | 144 comments
20. NYC adopts automated congestion fines after human review team eliminated (nytimes.com) 332 points by finesallthewaydown 4 hours ago | 201 comments
21. Dropbox raises base plan to $49/month, cites “AI compute costs” (dropbox.com) 301 points by storageinflation 11 hours ago | 176 comments
22. Open-source maintainers strike after 3rd corporation claims their work as proprietary training data (github.com) 290 points by maintainerburnout 6 hours ago | 120 comments
23. FEMA: 2025–2035 wildfire season officially declared a “decade-long emergency” (fema.gov) 268 points by cookedagain 12 hours ago | 112 comments
24. Quantum ransomware group QShadow hits 11 banks using breakthrough key recovery exploit (krebsonsecurity.com) 250 points by qubitcrime 3 hours ago | 98 comments
25. Show HN: OfflineLAN – A mesh-network toolkit for neighborhoods preparing for rolling blackouts (offlinelan.net) 231 points by diynetworker 1 hour ago | 44 comments
Comment by hn_throwaway_99 6 days ago
Comment by dang 5 days ago
Comment by navane 5 days ago
Is this a push to override the meaning and erase the hallucination critique?
Comment by randomtoast 5 days ago
There are other terms that are similarly controversial, such as "thinking models". When you describe an LLM as "thinking", it often triggers debate because people interpret the term differently and bring their own expectations and assumptions into the discussion.
Comment by zwnow 5 days ago
Comment by block_dagger 5 days ago
Comment by adammarples 5 days ago
Comment by digbybk 5 days ago
Comment by zwnow 5 days ago
Edit: it really does not surprise me that AI bros downvote this. Expecting people who want to make themselves obsolete to understand human values was a mistake.
Comment by pseidemann 5 days ago
This is not correct. DNA encodes learning mechanisms shaped by evolution. But there is no "Wikipedia" about the world in the DNA. DNA is shaped by the process of evolution; it is not "filled" with seemingly random information.
Comment by zwnow 5 days ago
I'm surprised at how you got to that conclusion from my wording. I never claimed you have something like a knowledge base in your DNA...
Comment by pseidemann 5 days ago
Comment by Timwi 5 days ago
Comment by wafflemaker 5 days ago
OTOH, brain tokens seem to be concept-based and not always linguistic (many people think solely in images/concepts).
Comment by KalMann 5 days ago
That is an absolutely wild claim you've made. You're being way too presumptuous.
Comment by ricardobeat 5 days ago
We don’t seem to have any clue yet whether this is how our brains work.
Comment by sankalpkotewar 5 days ago
Comment by throughway4728 5 days ago
Comment by pegador_ 5 days ago
Comment by BLACKCRAB 6 days ago
Comment by BLACKCRAB 6 days ago
Comment by rhtbxerghbu 6 days ago
Comment by LettriaBuilders 6 days ago
Comment by huflungdung 6 days ago
Comment by throwaway290 6 days ago
If you look at the HN front page from a few years back, there are almost no LLM-related posts. Now there are multiple each day. The LLM didn't grasp that change: it can't creatively invent a new trend or imagine a larger picture of how the world will change. So its front page 10 years out is still half posts about AI.
You could spend a few dozen prompts getting it to do what you want, but then you might as well spend a couple of hours writing that front page yourself; you'd be more satisfied and the result would be funnier.
Comment by krupan 6 days ago
What to Submit
On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.
Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.
Comment by minimaxir 6 days ago
Comment by KrempelKrakel 5 days ago
Comment by krupan 4 days ago
Comment by minimaxir 4 days ago
Satirizing Hacker News has been a thing since its inception.