Prism
Posted by meetpateltech 1 day ago
Comments
Comment by Perseids 16 hours ago
Comment by ZpJuUuNaQ5 13 hours ago
I just think it's silly to obsess over words like that. There are many words that take on different meanings in different contexts and can be associated with different events, ideas, products, time periods, etc. Would you feel better if they named it "Polyhedron"?
Comment by jll29 11 hours ago
You may say it's "silly to obsess", but it's like naming a product "Auschwitz" and saying "it's just a city name" -- it ignores the power of what Geoffrey N. Leech called "associative meaning" in his taxonomy of "Seven Types of Meaning" (Semantics, 2nd ed., 1989): speaking that city's name evokes images of piles of corpses of gassed, undernourished human beings, walls of gas chambers with fingernail scratches, and lampshades made of human skin.
Comment by ZpJuUuNaQ5 10 hours ago
[2] https://prism-pipeline.com/
[6] https://www.graphpad.com/features
[7] https://www.prismsoftware.com/
Comment by bicepjai 1 hour ago
Most ordinary users won’t recognize the smaller products you listed, but they will recognize OpenAI and they’ll recognize Snowden/NSA adjacent references because those have seeped into mainstream culture. And even if the average user doesn’t immediately make the connection, someone in their orbit on social media almost certainly will and they’ll happily spin it into a theory for engagement.
Comment by vladms 10 hours ago
I am not sure you can make an argument of "other people are doing it too". Lots of people do things that are not in their interest (e.g. smoking, to pick an easy one).
As others mentioned, I did not have the negative connotation related to the word prism either, but I am not sure how one could check that anyhow. It is not like I have never been surprised over the years by what other people think, so who knows... Maybe someone with experience in marketing could explain how it is done.
Comment by adammarples 8 hours ago
Comment by jackphilson 3 hours ago
Comment by ConceptJunkie 2 hours ago
Comment by order-matters 7 hours ago
If they claim in a private meeting with people at the NSA that they did it as a tribute to them and a bid for partnership, who would anyone here be to say they didn't? Even if they didn't... which is only relevant because OpenAI processes an absolute shitton of data the NSA would be interested in.
Comment by helsinkiandrew 10 hours ago
https://en.wikipedia.org/wiki/Prism_(optics)
I remember the NSA Prism program, but hearing prism today I would think first of Newton, optics, and rainbows.
Comment by 946789987649 11 hours ago
Comment by BlueTemplar 10 hours ago
(I expect a much higher than average share of people in academia also part of these spaces.)
Comment by andrewinardeer 4 hours ago
Comment by ConceptJunkie 2 hours ago
Comment by SoftTalker 4 hours ago
Comment by FrustratedMonky 6 hours ago
Most people don't even remember Snowden at this point.
Comment by black_puppydog 9 hours ago
They're of course free to choose this name. I'm just also surprised they would do so.
Comment by jimbokun 6 hours ago
Large scale technology projects that people are suspicious and anxious about. There are a lot of people anxious that AI will be used for mass surveillance by governments. So you pick a name of another project that was used for mass surveillance by government.
Comment by mc32 7 hours ago
Comment by mayhemducks 4 hours ago
Comment by bergheim 9 hours ago
Also, Nazism. But different context, years ago, so whatever I guess?
Hell, let's just call it Hitler. Different context!
Given what they do it is an insidious name. Words matter.
Comment by fortyseven 6 hours ago
Comment by rvnx 5 hours ago
Coming from a company involved in sharing data with intelligence services (it's the law, you can't escape it), this is not wise at all. Unless nobody at OpenAI had heard of it.
It was one of the biggest scandals in tech 10 years ago.
They could have called it "Workspace". Clearer, more useful, no need for a code word; that would have been fine for internal use.
Comment by ZpJuUuNaQ5 7 hours ago
Comment by collingreen 7 hours ago
The extreme examples are an analogy that highlights the shape of the comparison with a more generally loathed / less niche example.
OpenAI is a thing with lots and lots of personal data that consumers trust OpenAI not to abuse or lose. They chose a product name that matches a US government program that secretly and illegally breached exactly that kind of trust.
"Hitler was a vegetarian" isn't a great analogy because vegetarianism isn't related to what made Hitler bad. Something closer might be Exxon or BP making a hair gel called "Oilspill" or DuPont making a nail polish called "Forever Chem".
They could have chosen anything but they chose one specifically matching a recent data stealing and abuse scandal.
Comment by gegtik 7 hours ago
Comment by sunaookami 15 hours ago
Have you ever seen the comment section of a Snowden thread here? A lot of users here call for Snowden to be jailed, call him a Russian asset, play down the reports, etc. These are either NSA sock-puppet accounts or people who won't bite the hand that feeds them (employees of companies willing to breach their users' trust).
Edit: see my comment here in a Snowden thread: https://news.ycombinator.com/item?id=46237098
Comment by jll29 11 hours ago
Someone once said "Religion is opium for the people." - today, give people a mobile device and some doom-scrolling social media celebrity nonsense app, and they wouldn't notice if their own children didn't come home from school.
Comment by vladms 10 hours ago
For me the problem was not surveillance; the problem is addiction-focused app building (+ the monopoly), and that never seemed to be a secret. Only now are there some attempts to do something (like Australia and France banning children from social media, which I am not sure is feasible or effective, but at least it is more than zero).
Comment by sunaookami 1 hour ago
Comment by linkregister 6 hours ago
Protesting is a poor proxy for American political engagement.
Child neglect and missing children rates are lower than they were 50 years ago.
Comment by linkregister 6 hours ago
Comment by sunaookami 1 hour ago
Comment by TiredOfLife 14 hours ago
Comment by omnimus 14 hours ago
Comment by jll29 11 hours ago
And they did manage to get the word out. They are both relatively free now, but it is true, they both paid a price.
Idealism means following your principles despite that price, not escaping/evading the consequences.
Comment by BlueTemplar 9 hours ago
(And he is also the reason why Snowden ended up in Russia. Though it's possible that the flight plan they had was still the best one in that situation.)
Comment by Matl 9 hours ago
I am increasingly wondering what there remains of the supposed superiority of the Western system if we're willing to compromise on everything to suit our political ends.
The point was supposed to be that the truth is worth having out there for the purpose of having an informed public, no matter how it was (potentially) obtained.
In the end, we may end up with everything we fear about China but worse infrastructure and still somehow think we're better.
Comment by BlueTemplar 19 minutes ago
Comment by observationist 4 hours ago
It was Russia, or vanish into a black site, never to be seen or heard from again.
Comment by sunaookami 1 hour ago
Comment by lionkor 11 hours ago
Comment by vezycash 12 hours ago
Comment by TiredOfLife 10 hours ago
https://en.wikipedia.org/wiki/Lie#:~:text=citation%20needed%...
Comment by rvnx 5 hours ago
Comment by jimmydoe 12 hours ago
Comment by pageandrew 16 hours ago
Comment by Phelinofist 15 hours ago
Comment by addandsubtract 11 hours ago
Comment by wmeredith 8 hours ago
Comment by karmakurtisaani 14 hours ago
Comment by 3form 11 hours ago
Comment by kakacik 13 hours ago
Comment by vaylian 15 hours ago
Comment by ImHereToVote 16 hours ago
Comment by WiSaGaN 10 hours ago
[1]: https://openai.com/index/openai-appoints-retired-us-army-gen...
Comment by JasonADrury 15 hours ago
Comment by Schlagbohrer 15 hours ago
Comment by concats 15 hours ago
Comment by cruffle_duffle 2 hours ago
Even if what you say is completely untrue (and who really knows for sure).... it creates that mental association. It's a horrible product name.
Comment by isege 15 hours ago
Comment by teddyh 3 hours ago
Comment by wmeredith 8 hours ago
Comment by saidnooneever 13 hours ago
(full disclosure: yes, they will be handing in PII on demand, the same kind of deals; this is 'normal', 2012 showed us no one gives a shit)
Comment by yayitswei 7 hours ago
Comment by bandrami 16 hours ago
Comment by observationist 4 hours ago
If it was part of their adtech systems and them dipping their toe into the enshittification pool, it would have been a legendarily tone deaf project name, but as it is, I think it's fine.
Comment by CalRobert 9 hours ago
Comment by johanyc 4 hours ago
Comment by LordDragonfang 4 hours ago
There's a good chance they just asked GPT5.2 for a name. I know for a fact that when some of the OpenAI models get stuck in the "weird" state associated with LLM psychosis, three of the things they really like talking about are spirals, fractals, and prisms. Presumably, there's some general bias toward those concepts in the weights.
Comment by cruffle_duffle 2 hours ago
It's a horrible name for any product coming out of a company like OpenAI. People are super sensitive to privacy and government snooping and OpenAI is a ripe target for that sort of thinking. It's a pretty bad association. You do not want your AI company to be in any way associated with government surveillance programs no matter how old they are.
Comment by lrvick 10 hours ago
Comment by igleria 9 hours ago
Comment by chromanoid 12 hours ago
I personally associate Prism with [Silverlight - Composite Web Apps With Prism](https://learn.microsoft.com/en-us/archive/msdn-magazine/2009...) due to personal reasons I don't want to talk about ;))
Comment by aa-jv 14 hours ago
Yes, imho, there is a great deal of ignorance of the actual contents of the NSA leaks.
The agitprop against Snowden as a "Russian agent" has successfully occluded the actual scandal, which is that the NSA has built a totalitarian-authoritarian apparatus that is still in wide use.
Autocrats' general hubris about their own superiority has been weaponized against them. Instead of actually addressing the issue with America's repressive military industrial complex, they kill the messenger.
Comment by alfiedotwtf 14 hours ago
We haven't forgotten... it's mostly that we're all jaded given that there have been zero ramifications, so what's the use of complaining; you're better off pushing shit up a hill
Comment by alexpadula 12 hours ago
Comment by aargh_aargh 15 hours ago
Comment by vitalnodo 1 day ago
On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.
[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...
Comment by crazygringo 1 day ago
I do self-host Overleaf which is annoying but ultimately doable if you don't want to pay the $21/mo (!).
I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.
Comment by efficax 1 day ago
Comment by radioactivist 1 day ago
Comment by bhadass 1 day ago
a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their fuckin' paper (WTFP).
overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.
also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.
Comment by joker666 5 hours ago
Comment by jdranczewski 1 day ago
Comment by crazygringo 1 day ago
The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.
(And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)
Comment by gmac 15 hours ago
Comment by withinboredom 16 hours ago
Comment by baby 20 hours ago
Comment by MuteXR 12 hours ago
Comment by spacebuffer 23 hours ago
Comment by jll29 11 hours ago
You can even export ZIP files if you like (for any cloud service, it's not a bad idea to clone your repo once in a while to avoid being stuck in case of unlikely downtime).
I have both a hosted instance (thanks to Overleaf/ShareLaTeX Ltd.) and I'm also a paying user of the pro group license (>500€/year) for my research team. It's great - esp. for smaller research teams - to have the maintenance outsourced to a commercial provider.
On a good day, I'd spend 40% in Overleaf, 10% in Sublime/Emacs, 20% in Email and 10% in Google Scholar/Semantics Scholar and 10% in EasyChair/OpenReview, the rest in meetings.
Comment by universa1 16 hours ago
Comment by 3form 14 hours ago
Comment by warkdarrior 1 day ago
Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.
Comment by lou1306 12 hours ago
Then: The LaTeX distribution is always up-to-date; you can run it on limited resources; it has an endless supply of conference and journal templates (so you don't have to scavenge them yourself off a random conference/publisher website); Git backend means a) you can work offline and b) version control comes in for free. These just off the top of my head.
Comment by vicapow 1 day ago
Comment by seazoning 1 day ago
Any plans of having typst integrated anytime soon?
Comment by storystarling 10 hours ago
Comment by BlueTemplar 21 hours ago
To end up with yet another shitty (because running inside a browser, in particular its interface) web app ?
Why not focus efforts into making a proper program (you know, with IBM menu bars and keyboard shortcuts), but with collaborative tools too ?
Comment by jll29 11 hours ago
I have occasionally lost a paragraph just by accidentally marking a few lines and pressing [Backspace].
But at the moment, there is no better option than Overleaf, and while I encourage you to write what you propose if you can, Overleaf will be the bar that any such system needs to be compared against.
Comment by BlueTemplar 9 hours ago
Comment by regenschutz 8 hours ago
[0]: https://typst.app
Comment by swyx 23 hours ago
Comment by vicapow 19 hours ago
Comment by songodongo 1 day ago
Comment by vitalnodo 1 day ago
They’re quite open about Prism being built on top of Crixet.
Comment by doctorpangloss 1 day ago
Comment by eloisant 9 hours ago
Also yes, LaTeX being source code, it's much easier to get an AI to generate LaTeX than to integrate into MS Word.
Comment by y1n0 20 hours ago
Comment by amitav1 1 day ago
Comment by nemomarx 23 hours ago
Comment by jasonfarnon 23 hours ago
Comment by jmdaly 21 hours ago
Comment by jll29 11 hours ago
I don't think any particular word alone can be used as an indicator for LLM use, although certain formatting cues are good signals (dashes, smileys, response structure).
We were offended, but kept quiet to get the article accepted, and we changed some instances of some words to appease them (which thankfully worked). But the false accusation left a bit of a bad aftertaste...
Comment by trentnelson 19 hours ago
Comment by MITSardine 22 hours ago
Comment by x-complexity 22 hours ago
...no?
Just one Google search for "latex editor" showed more than 2 in the first page.
It's not that different from using a markdown editor.
Comment by i2km 16 hours ago
Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...
Comment by thomasahle 12 hours ago
I actually think Prism promotes a much more responsible approach to AI writing than "copying from chatgpt" or the likes.
Comment by jltsiren 10 hours ago
Comment by aembleton 13 hours ago
Comment by haspok 15 hours ago
Exactly, and I think this is good news. Let's break it so we can fix it at last. Nothing will happen until a real crisis emerges.
Comment by suddenlybananas 11 hours ago
Comment by port11 13 hours ago
Comment by butlike 8 hours ago
Comment by make3 16 hours ago
Comment by csomar 10 hours ago
Comment by boxed 14 hours ago
Comment by eternauta3k 10 hours ago
Comment by 4gotunameagain 15 hours ago
And you think the Indians will not hand-write the output of LLMs?
Not that I have a better suggestion myself...
Comment by tarcon 13 hours ago
They probably wanted: "... that I should read?" So that this is at least marketed to be more than a fake-paper generation tool.
Comment by mFixman 13 hours ago
The target audience of this tool is not academics; it's OpenAI investors.
Comment by jtr1 6 hours ago
Comment by floitsch 10 hours ago
Comment by syntex 1 day ago
Mini paper: that future isn't AI replacing humans; it's humans drowning in cheap artifacts. New unit of measurement proposed: verification debt. Also introduces: Recursive Garbage → model collapse
(a little joke on Prism)
Comment by Springtime 20 hours ago
This appears to just be the output of LLMs itself? It credits GPT-5.2 and Gemini 3 exclusively as authors, has a public domain license (appropriate for AI output) and is only several paragraphs in length.
Comment by doodlesdev 19 hours ago
Comment by parentheses 17 hours ago
I feel like this means that working in any group where individuals compete against each other results in an AI vs AI content generation competition, where the human is stuck verifying/reviewing.
Comment by dormento 10 hours ago
Not a dig on your (very sensible) comment, but now I always do a double take when I see anyone effusively approving of someone else's ideas. AI turned me into a cynical bastard :(
Comment by syntex 14 hours ago
Also, in a world where AI output is abundant, we humans become the scarce resource: the "tools" in the system that provide some connectivity to reality (grounding) for the LLM.
Comment by mrbonner 22 hours ago
"Human Verification as a Service": finally, a lucrative career where the job description is literally "read garbage all day and decide if it's authentic garbage or synthetic garbage." LinkedIn influencers will pivot to calling themselves "Organic Intelligence Validators" and charge $500/hr to squint at emails and go "yeah, a human definitely wrote this passive-aggressive Slack message."
The irony writes itself: we built machines to free us from tedious work, and now our job is being the tedious work for the machines. Full circle. Poetic even. Future historians (assuming they're still human and not just Claude with a monocle) will mark this as the moment we achieved peak civilization: where the most valuable human skill became "can confidently say whether another human was involved."
Bullish on verification miners. Bearish on whatever remains of our collective attention span.
Comment by kinduff 21 hours ago
Comment by direwolf20 21 hours ago
Comment by JBorrow 1 day ago
I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but is really a minimal part of the process.
Comment by SchemaLoad 1 day ago
I'm not sure what the final state would be here, but it seems we are going to find it increasingly difficult to find any real factual information on the internet going forward. Particularly as AI starts ingesting its own generated fake content.
Comment by cryzinger 1 day ago
> The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
Comment by trees101 22 hours ago
Not actually contradictory. Verification is cheap when there's a spec to check against. 'Valid Sudoku?' is mechanical. But 'good paper?' has no spec. That's judgment, not verification.
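To make the spec/no-spec distinction concrete, here's a toy Python sketch (my own illustration, not from anyone's product): checking a finished Sudoku grid is a few mechanical set comparisons, while nothing comparable exists for "good paper?".

    # Mechanical verification: a completed 9x9 Sudoku grid is valid iff
    # every row, column, and 3x3 box contains exactly the digits 1-9.
    def is_valid_sudoku(grid):  # grid: 9 lists of 9 ints
        digits = set(range(1, 10))
        rows = [set(row) for row in grid]
        cols = [{grid[r][c] for r in range(9)} for c in range(9)]
        boxes = [{grid[3*br + r][3*bc + c] for r in range(3) for c in range(3)}
                 for br in range(3) for bc in range(3)]
        return all(unit == digits for unit in rows + cols + boxes)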
Comment by degamad 18 hours ago
... for NP-hard problems.
It says nothing about the difficulty of finding or checking solutions of polynomial ("P") or exponential ("EXPTIME") problems.
Comment by bwfan123 21 hours ago
Comment by rspijker 15 hours ago
Comment by monkaiju 1 day ago
Comment by overfeed 1 day ago
I don't doubt the AI companies will soon announce products that will claim to solve this very problem, generating turnkey submission reviews. Double-dipping is very profitable.
It appears LLM-parasitism isn't close to being done, and keeps finding new commons to spoil.
Comment by fooker 22 hours ago
Comment by wmeredith 8 hours ago
I've seen this complaint a lot of places, but the solution to me seems obvious. Massive PRs should be rejected. This was true before AI was a thing.
Comment by Spivak 1 day ago
Comment by toomuchtodo 1 day ago
HN Search: curl AI slop - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Comment by Cornbilly 23 hours ago
If I submitted this, I'd have to punch myself in the face repeatedly.
Comment by toomuchtodo 23 hours ago
Comment by InsideOutSanta 1 day ago
Comment by willturman 1 day ago
Comment by SimianSci 1 day ago
I can get behind this. This assumes a tool will need to be made to help determine the 1% that isn't slop. At which point I assume we will have reinvented web search once more.
Has anyone looked at reviving PageRank?
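(For concreteness, the core of PageRank is just a power iteration over the link graph. A minimal Python sketch, with a made-up three-page graph for illustration; dangling pages simply leak rank in this toy version:)

    # Minimal PageRank power iteration (`links` maps each page to the
    # pages it links to).
    def pagerank(links, damping=0.85, iterations=50):
        n = len(links)
        rank = {page: 1.0 / n for page in links}
        for _ in range(iterations):
            new = {page: (1.0 - damping) / n for page in links}
            for page, outlinks in links.items():
                for target in outlinks:
                    new[target] += damping * rank[page] / len(outlinks)
            rank = new
        return rank

    print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))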
Comment by _kb 9 hours ago
Comment by Imustaskforhelp 1 day ago
I have heard from people here that Kagi can help remove slop from searches so I guess yeah.
Although I guess I am a DDG user and I love using DDG because it's free, but I can see how for some the price can be a non-issue and they might like Kagi more.
So Kagi / DDG (DuckDuckGo), yeah.
Comment by ectospheno 18 hours ago
Comment by jll29 1 day ago
DDG used to be meta-search on top of Yahoo, which doesn't exist anymore. What do Gabriel and co-workers use now?
Comment by selectodude 1 day ago
Comment by direwolf20 21 hours ago
DDG is Bing.
Comment by techblueberry 1 day ago
Comment by wmeredith 7 hours ago
Now that code is cheaper (not quite free yet), skills further up the abstraction chain become more valuable.
Programming and design skills are less valuable. However, you still have to know what to build: product and UX skills are more valuable. You still have to know how to build it: software architect skills are more valuable.
Comment by jimbokun 6 hours ago
Very rarely is there anything about WHAT these agents are producing and why it's important and valuable.
Comment by SequoiaHope 1 day ago
Comment by jplusequalt 1 day ago
Comment by jcranmer 1 day ago
Comment by storystarling 1 day ago
Comment by direwolf20 21 hours ago
Comment by wmeredith 7 hours ago
Comment by lupire 22 hours ago
Comment by jll29 1 day ago
Comment by Spivak 1 day ago
No one, at all levels, wants to do notes.
Comment by golem14 21 hours ago
You could argue that not writing down everything provides a greater signal-to-noise ratio. Fair enough, but if something seemingly inconsequential is not noted and something is missed, that could worsen medical care.
I'm not sure how this affects malpractice claims - it's now easier to prove (with notes) that the doc "knew" about some detail that would otherwise not have been noted down.
Comment by jll29 1 day ago
So I was not amused about this announcement at all, however easy it may make my own life as an author (I'm pretty happy to do my own literature search, thank you very much).
Also remember, we have no guarantee that these tools will still exist tomorrow, all these AI companies are constantly pivoting and throwing a lot of things at the wall to see what sticks.
OpenAI chose not to build a serious product, as there is no integration with the ACM DL, the IEEE DL, SpringerNatureLink, the ACL Anthology, Wiley, Cambridge/Oxford/Harvard University Press etc. - only papers that are not peer reviewed (arXiv.org) are available/have been integrated. Expect a flood of BS your way.
When my students submit a piece of writing, I can ask them to orally defend their opus maximum (more and more often, ChatGPT's...); I can't do the same with anonymous authors.
Comment by MITSardine 22 hours ago
Comment by Majromax 7 hours ago
Comment by lupire 22 hours ago
Comment by bloppe 1 day ago
Maybe you get reimbursed for half as long as there are no obvious hallucinations.
Comment by JBorrow 1 day ago
Comment by NewsaHackO 1 day ago
Comment by agnishom 22 hours ago
Comment by lupire 22 hours ago
Comment by methuselah_in 1 day ago
Comment by azan_ 7 hours ago
Comment by willturman 23 hours ago
In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis. A fine isn't going to stop tobacco companies from spamming submissions that say smoking doesn't cause lung cancer, or social media companies from spamming submissions that say their products aren't detrimental to mental health.
Comment by Majromax 7 hours ago
That's not the right threat model. The existing peer review process is already weak to high-effort but conflicted research.
Instead, the threat model is one closer to that of spam, where the submitting authors don't care about the content of their submission at all but need X publications in high-impact outlets for their CV or grant application. Predatory journals exploit this as part of a pay-to-play problem, but the low reputation of those journals limits their desirable impact factor.
This threat model relies on frequent but low-quality submissions, and a submission fee would make taking multiple kicks at the can unviable.
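A back-of-the-envelope sketch of that economics in Python (all numbers invented purely for illustration):

    # Toy spam-economics model: a low-effort submission with acceptance
    # probability p and CV value v is worth spamming only while p*v > fee.
    def spam_is_viable(p_accept, value_per_acceptance, fee):
        return p_accept * value_per_acceptance > fee

    print(spam_is_viable(0.02, 500, 0))   # True: free submissions reward mass spam
    print(spam_is_viable(0.02, 500, 50))  # False: a modest fee makes repeat kicks a losing bet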
Comment by bloppe 17 hours ago
Comment by s0rce 1 day ago
Comment by noitpmeder 1 day ago
Comment by antasvara 1 day ago
Plus, the time from submission to acceptance/rejection can be long. For cutting-edge science, you can't really afford to wait to hear back before applying to another journal.
All this to say that spamming 1,000 journals with a submission is bad, but submitting to the journals in your field that are at least decent fits for your paper is good practice.
Comment by niek_pas 1 day ago
Comment by jll29 1 day ago
Comment by azan_ 7 hours ago
Comment by mathematicaster 1 day ago
Comment by throwaway85825 1 day ago
Comment by bloppe 1 day ago
Comment by eloisant 8 hours ago
Comment by olivia-banks 1 day ago
Comment by pixelready 1 day ago
Comment by mathematicaster 1 day ago
Comment by skissane 1 day ago
Suppose you are an independent researcher writing a paper. Before submitting it for review to journals, you could hire a published author in that field to review it for you (independently of the journal), and tell you whether it is submission-worthy, and help you improve it to the point it was. If they wanted, they could be listed as coauthor, and if they don't want that, at least you'd acknowledge their assistance in the paper.
Because I think there are two types of people who might write AI slop papers: (1) people who just don't care and want to throw everything at the wall and see what sticks; (2) people who genuinely desire to seriously contribute to the field, but don't know what they are doing. Hiring an advisor could help the second group of people.
Of course, I don't know how willing people would be to be hired to do this. Someone who was senior in the field might be too busy, might cost too much, or might worry about damage to their own reputation. But there are so many unemployed and underemployed academics out there...
Comment by utilize1808 1 day ago
Comment by ezst 1 day ago
Comment by utilize1808 23 hours ago
Comment by direwolf20 21 hours ago
Comment by petcat 1 day ago
While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!
Comment by ezst 1 day ago
Maybe something like a "hierarchy/DAG? of trusted-peers", where groups like universities certify the relevance and correctness of papers by attaching their name and a global reputation score to it. When it's found that the paper is "undesirable" and doesn't pass a subsequent review, their reputation score deteriorates (with the penalty propagating along the whole review chain), in such a way that:
- the overall review model is distributed, hence scalable (everybody may play the certification game and build a reputation score while doing so)
- trusted/established institutions have an incentive to keep their global reputation score high and either put a very high level of scrutiny into the review, or delegate to very reputable peers
- "bad actors" are immediately punished and universally recognized as such
- "bad groups" (such as departments consistently spamming with low-quality research) become clearly identified as such within the greater organisation (the university), which can encourage a mindset of quality above quantity
- "good actors within a bad group" are not penalised either, because they could circumvent their "bad group" on the global review market by having reputable institutions (or intermediaries) certify their good work
There are loopholes to consider, like a black market of reputation trading (I'll pay you generously to sacrifice a bit of your reputation to get this bad science published), but even that cannot pay off long-term in an open system where all transactions are visible.
Incidentally, I think this may be a rare case where a blockchain makes some sense?
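To sketch the penalty-propagation part in toy Python (the names and the 0.5 decay factor are my own placeholder assumptions, not part of any real system):

    # When a certified paper is later found "undesirable", the penalty
    # propagates up the chain of certifiers with diminishing weight.
    reputation = {"lab_y": 100.0, "dept_x": 100.0, "uni_a": 100.0}

    def penalize(chain, penalty, decay=0.5):
        # chain: certifiers ordered from closest to the paper up to the root
        for certifier in chain:
            reputation[certifier] -= penalty
            penalty *= decay  # each level up absorbs a smaller share

    penalize(["lab_y", "dept_x", "uni_a"], penalty=10.0)
    print(reputation)  # {'lab_y': 90.0, 'dept_x': 95.0, 'uni_a': 97.5}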
Comment by jll29 1 day ago
But it should also be fair. I once caught a team at a small Indian branch of a very large three-letter US corporation violating the "no double submission" rule of two conferences: they submitted the same paper to two conferences, and both naturally landed in my reviewer inbox, for a topic I am one of the experts in.
But all the other employees should not be penalized by the violations of 3 researchers.
Comment by gus_massa 1 day ago
Anyway, how will universities check the papers? Someone must read the preprints, like the current reviewers. Someone must check the incoming preprints, find reviewers and make the final decision, like the current editors. ...
Comment by amitav1 1 day ago
(no snark)
Comment by Rperry2174 1 day ago
For developers, academics, editors, etc... in any review-driven system the scarcity is around good human judgement, not text volume. AI doesn't remove that constraint and arguably puts more of a spotlight on the ability to separate the shit from the quality.
Unless review itself becomes cheaper or better, this just shifts work further downstream, disguising the change as "efficiency".
Comment by SchemaLoad 1 day ago
Comment by lonelyasacloud 9 hours ago
Or the providers of the models are capable of providing accepted/certified guarantees as to the quality of the output that their models and systems produce.
Comment by vitalnodo 1 day ago
In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]
Comment by pickleRick243 23 hours ago
"which is really not the point of these journals at all"- it seems that it very much is one of the main points? Why do you think people publish in journals instead of just putting their work on the arxiv? Do you think postdocs and APs are suffering through depression and stressing out about their publications because they're agonizing over whether their research has genuinely contributed substantively to the academic literature? Are academic employers poring over the publishing record of their researchers and obsessing over how well they publish in top journals in an altruistic effort to ensure that the research of their employees has made the world a better place?
Comment by JBorrow 4 hours ago
I also don't understand your second paragraph at all.
Comment by agnishom 22 hours ago
That is an interesting philosophical question, but not the question we are confronted with. A lot of LLM assisted materials have the _signals_ of novel research without having its _substance_.
Comment by pickleRick243 21 hours ago
To me, this is directly relevant to the issue of democratization of science. There seems to be a tool that is inconveniently resulting in the "wrong" people accelerating their output. That is essentially the complaint here rather than any criticism inherent to LLMs (e.g. water/resource usage, environmental impact, psychological/societal harm, etc.). The post I'm responding to could have been written if LLMs were replaced by any technology that resulted in less experienced or capable researchers disproportionately being able to submit to journals.
To be concrete, let's just take one of prism's capabilities- the ability to "turn whiteboard equations or diagrams directly into LaTeX". What a monstrous thing to give to the masses! Before, those uneducated cranks would send word docs to journals with poorly typeset equations, making it a trivial matter to filter them into the trash bin. Now, they can polish everything up and pass off their chicken scratch as respectable work. Ideally, we'd put up enough obstacles so that only those who should publish will publish.
Comment by varjag 6 hours ago
https://scottaaronson.blog/?p=304
By far the easiest quality signal is now out of the window.
Comment by agnishom 18 hours ago
My objection is not that they are the "wrong people". They are just regular people with excellent tools but not necessarily great scientific ideas.
Yes, it was easier to trash the crank's work before based on their unLaTeXed diagrams. Now they might have a very professional-looking diagram, but their work is still not great mathematics. Except that now the editor has a much harder time finding out who submitted a worthwhile paper.
In what way do you think the feature of "LaTeXing a whiteboard diagram" is democratizing mathematics? I do not think there are many people who have exceptional mathematical insights but are unable to publish them because they cannot typeset their work properly.
Comment by pickleRick243 16 hours ago
Being against this is essentially to be in favor of a form of discrimination by proxy: if you can't typeset, then likely you can't do research either. And wouldn't it be really annoying if those people who can't research could magically typeset. It's a fundamentally undemocratic impulse: since those who cannot typeset well are unlikely to produce quality mathematics, we can (and should) use this as an effective barrier to entry. If you replace ability to typeset with a number of other traits, these would be rather controversial positions.
Comment by agnishom 12 hours ago
But LLMs are not really helping. With all the beautifully typeset papers with immaculate prose, Ramanujan's papers are going to be buried deeper!
To some extent, I agree with you that it is a "discrimination by proxy", especially with the typesetting example. But you could think of examples where cranks could very easily fool themselves into thinking that they understand the essence of the material without understanding the details. E.g., [I understand fluid dynamics very well. No, I don't need to work out the differential equations. AI can do the bean counting for me.]
Comment by Eridrus 19 hours ago
Comment by MITSardine 22 hours ago
Plenty of researchers hate writing and will only do it at gunpoint. Or rather, delegate it all to their underlings.
I don't see an issue with generative writing in principle. The Devil is in the details, but I don't see this as much different from "hey grad student, write me this paper". And generative writing already exists as copy-paste, which makes up like 90% of any random paper given the incrementality of it all.
I was initially a little indignant at the "find me some plausible refs and stick them in the paper" section of the video but, then again, isn't this what most people already do? Just copy-paste the background refs from the colleague's last paper introduction and maybe add one from a talk they saw in the meantime, plus whatever the group & friends produced since then.
My experience is most likely skewed (as all are), but I haven't met a permanent researcher that wrote their own papers yet, and most grad students and postdocs hate writing. Literally the only times I saw someone motivated to write papers (in a masochistic way) were just before applying to a permanent position or while wrapping up their PhD.
Onto your point, though, I agree this is somewhat worrisome in that, by reaction, the barrier to entry might rise by way of discriminating based on credentials.
Comment by Otterly99 13 hours ago
I also am not sure why so many people are vehemently against this. I would bet that at least 90% of researchers would agree that the writing up is definitely not the part of the work they prefer (to stay polite). As you mentioned, work is usually delegated to students, and those students already had access to LLMs if they wanted to generate the work.
In my opinion, most of those tools become problematic when people use them without caution. Unfortunately, even in sciences, people are not as careful and pragmatic as we would like to imagine they are and a lot of people are cutting corners, especially in those "lesser" areas like writing and presenting your work.
Overall, I think this has the potential to reshape the publication system, which is long overdue.
Comment by raphman 13 hours ago
A good tool would encourage me, help me while I am writing, and maybe set up barriers that keep me from taking shortcuts (e.g. pushing me to re-read the relevant paragraphs of a paper that I cite).
Prism does none of these things - instead it pushes me towards sloppy practices, such as sprinkling citations between claims. Why won't ChatGPT tell me how to build a bomb but Prism will happily fabricate fake experimental results for me?
Comment by jjcm 1 day ago
This is still a good step in a direction of AI assisted research, but as you said, for the moment it creates as many problems as it solves.
Comment by maxkfranz 1 day ago
On the other hand, the world is now a different place as compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.
Comment by ezst 1 day ago
Comment by maxkfranz 1 day ago
The whole process should be made more transparent and open from the start, rather than adding more gatekeeping. There ought to be openness and transparency throughout the entire research process, with auditability automatically baked in, rather than just at the time of publication. One man's opinion, anyway.
Comment by mrandish 1 day ago
> > who are looking to 'boost' their CV
Ultimately, this seems like a key root cause - misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.
Comment by egorfine 11 hours ago
For whom? For OpenAI these tools are definitely the solutions. They are developing by throwing various AI-powered stuff at the wall to see what sticks. These tools also demonstrate to investors that innovation has not stalled and that AI usage is growing.
Same with Microsoft: none of the AI stuff they are shoving down users' throats was actually designed for the users. All this stuff is only there for token usage to grow, for the shareholders to see.
Similar with Google although no one can deny real innovation happening there.
Comment by i000 21 hours ago
Comment by desolate_muffin 21 hours ago
Comment by i000 18 hours ago
Comment by BlueTemplar 9 hours ago
Comment by boplicity 1 day ago
Comment by currymj 1 day ago
the early years of LLMs (when they were good enough to correct grammar but not enough to generate entire slop papers) were an equalizer. we may end up here but it would be unfortunate.
Comment by BlueTemplar 9 hours ago
why would it be upon them to submit in English, when instead reviewers and readers can themselves use an LLM translator to read the paper?
Comment by jasonfarnon 23 hours ago
Comment by jascha_eng 1 day ago
These acts just must have consequences so people stop doing them. You can use AI if you are doing it well, but if you are wasting everyone's time you should just be excluded from the discourse altogether.
Comment by direwolf20 21 hours ago
Comment by eloisant 9 hours ago
It was already a problem 25 years ago when I did my Ph.D., and I don't think things changed that much since then.
This encourages researchers to publish barely valuable results, or to cut one article into multiple ones with small variations to increase their number of publications. It also encourages publishers to create more conferences and more journals to respond to researchers' need to publish.
I remember many experienced professors telling me cynically about this, about all the techniques they had to blow up one small finding into many articles.
Anyway - research slop started way before AI. It's probably going to make the problem worse, but the root issue has been there for a long time.
Comment by parentheses 17 hours ago
Comment by keithnz 1 day ago
Comment by usefulposter 1 day ago
https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
Comment by lupsasca 1 day ago
Comment by fuzzfactor 8 hours ago
If I can't have that, the next best thing is a helper while I'm at the keyboard my damn self.
>Why LaTeX is the bottleneck: scientists spend hours aligning diagrams, formatting equations, and managing references—time that should go to actual science, not typesetting
This is supposed to be only a temporary situation until people recover from the cutbacks of the 1970's, and a more comprehensive number of scientists once again have their own secretary.
Looks like the engineers at Crixet were tired of waiting.
Comment by CJefferson 1 day ago
Comment by lupsasca 1 day ago
Comment by nestes 1 day ago
If you're not a Zotero user, I can't recommend it enough.
Comment by MITSardine 22 hours ago
Comment by noitpmeder 1 day ago
Comment by SecretDreams 1 day ago
This is a space that probably needs substantial reform, much like grad school models in general (IMO).
Comment by parentheses 17 hours ago
Comment by roflmaostc 8 hours ago
So yes, you use it to write the paper but soon it is public knowledge anyway.
I am not sure if there is much to learn from the draft of the authors.
Comment by GorbachevyChase 2 hours ago
Comment by biscuit1v9 11 hours ago
Comment by z3t4 14 hours ago
Comment by raincole 22 hours ago
I'd also like to share what I saw. Since GPT-4o became a thing, everyone I know who submits academic papers in my non-English-speaking country (N > 5) has been writing papers in our native language and translating them with GPT-4o exclusively. It has been the norm for quite a while. If hallucination is such a serious problem, it has been so for a year and a half.
Comment by direwolf20 21 hours ago
Comment by kccqzy 21 hours ago
Comment by biophysboy 21 hours ago
Comment by andy12_ 11 hours ago
Comment by mbreese 20 hours ago
Comment by fuzzfactor 8 hours ago
Comment by disconcision 20 hours ago
Comment by ivirshup 21 hours ago
[1]: https://statmodeling.stat.columbia.edu/2026/01/26/machine-le...
Comment by doodlesdev 19 hours ago
Comment by lionkor 11 hours ago
Comment by fuzzfactor 8 hours ago
This could be considered in degrees.
Like when you only need a single table from another researcher's 25-page publication, you would cite it to be thorough but it wouldn't be so bad if you didn't even read very much of their other text. Perhaps not any at all.
Maybe one of the very helpful things is not just reading every reference in detail, but actually looking up every one in detail to begin with?
Comment by SilverBirch 11 hours ago
Comment by BlueTemplar 24 minutes ago
Comment by fuzzfactor 8 hours ago
>slop papers will start to outcompete the real research papers.
This started to rear its ugly head when electric typewriters got more affordable.
Sometimes all it takes is faster horses and you're off to the races :\
Comment by utopiah 17 hours ago
Comment by asveikau 1 day ago
Comment by pazimzadeh 1 day ago
Comment by varjag 1 day ago
Comment by DonaldPShimoda 23 hours ago
"Grok" was a term used in my undergrad CS courses in the early 2010s. It's been a pretty common word in computing for a while now, though the current generation of young programmers and computer scientists seem not to know it as readily, so it may be falling out of fashion in those spaces.
Comment by Fnoord 22 hours ago
> Groklaw was a website that covered legal news of interest to the free and open source software community. Started as a law blog on May 16, 2003, by paralegal Pamela Jones ("PJ"), it covered issues such as the SCO-Linux lawsuits, the EU antitrust case against Microsoft, and the standardization of Office Open XML.
> Its name derives from "grok", roughly meaning "to understand completely", which had previously entered geek slang.
Comment by varjag 11 hours ago
Comment by milleramp 21 hours ago
Comment by sincerely 21 hours ago
Comment by intothemild 1 day ago
Comment by XCSme 21 hours ago
Comment by bmaranville 1 day ago
I would note that Overleaf's main value is as a collaborative authoring tool, not as a great LaTeX experience, but science is ideally a collaborative effort.
Comment by matteocantiello 3 hours ago
Keeping LaTeX as the language is a feature, not a bug: it filters out noise and selects for people trained in STEM, who’ve already learned how to think and work scientifically.
Comment by plastic041 23 hours ago
Edit: You can add papers that are not cited to the bibliography. The video is about the bibliography, and I was thinking about cited works.
Comment by parsimo2010 23 hours ago
To clarify, there is a difference between a bibliography (a list of relevant works but not necessarily cited), and cited work (a direct reference in an article to relevant work). But most people start with a bibliography (the superset of relevant work) to make their citations.
Most academics who have been doing research for a long time maintain an ongoing bibliography of work in their field. Some people do it as a giant .bib file, some use software products like Zotero, Mendeley, etc. A few absolute psychos keep track of their bibliography in MS Word references (tbh people in some fields do this because .docx is the accepted submission format for their journals, not because they are crazy).
Comment by plastic041 23 hours ago
Didn't know that there's a difference between a bibliography and cited works. Thank you.
Comment by suddenlybananas 11 hours ago
Comment by alphazard 23 hours ago
Obviously ridiculous, since a philosophical argument should follow a chain of reasoning starting at stated axioms. Citing a paper to defend your position is just an appeal to authority (a fallacy that they teach you about in the same class).
The citation requirement allowed the class to fulfill a curricular requirement that students needed to graduate, and therefore made the class more popular.
Comment by iterance 16 hours ago
While similar, the function is fundamentally different from citations appearing in research. However, even professionally, it is well beyond rare for a philosophical work, even for professional philosophers, to be written truly ex nihilo as you seem to be suggesting. Citation is an essential component of research dialogue and cannot be elided.
Comment by bonsai_spool 23 hours ago
Hmm, I guess I read this as a requirement to find enough supportive evidence to establish your argument as novel (or at least supported in 'established' logic).
An appeal to authority explicitly has no reasoning associated with it; is your argument that one should be able to quote a blog as well as a journal article?
Comment by tyre 16 hours ago
Comment by _bohm 21 hours ago
Comment by bogdan 17 hours ago
Comment by fxwin 15 hours ago
an appeal to authority is fallacious when the authority is unqualified for the subject at hand. Citing a paper from a philosopher to support a point isn't fallacious, but "<philosophical statement> because my biology professor said so" is.
Comment by danelski 23 hours ago
Comment by rockskon 21 hours ago
Comment by razster 19 hours ago
Comment by DominikPeters 1 day ago
Comment by qbit42 1 day ago
I think I would only switch from Overleaf if I was writing a textbook or something similarly involved.
Comment by mturmon 1 day ago
@vicapow replied to keep the Dropbox parallel alive
Comment by DominikPeters 14 hours ago
Comment by vicapow 1 day ago
You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all works), which most researchers don't want to, and really shouldn't have to, figure out for their specific workflows.
Comment by yfontana 15 hours ago
I have a phd in economics. Most researchers in that field have never even heard of any of those tools. Maybe LaTeX, but few actually use it. I was one of very few people in my department using Zotero to manage my bibliography, most did that manually.
Comment by jstummbillig 1 day ago
Comment by beklein 1 day ago
Comment by swyx 23 hours ago
generally think that there's a lot of fertile ground for smart generalist engineers to make a ton of progress here this year + it will probably be extremely financially + personally rewarding, so I broadly want to create a dedicated pod to highlight opportunities available for people who don't traditionally think of themselves as "in science" to cross over into the "ai for hard STEM" because it turns out that 1) they need you 2) you can fill in what you don't know 3) it will be impactful/challenging/rewarding 4) we've exhausted common knowledge frontiers and benchmarks anyway so the only* people left working on civilization-impacting/change-history-forever hard problems are basically at this frontier
*conscious exaggeration sorry
Comment by beklein 14 hours ago
Love the idea of a dedicated series/pod where normal people take on hard problems by using and leveraging the emergent capabilities of frontier AI systems.
Anyway, thanks for the pod!
Comment by vicapow 1 day ago
Comment by tyteen4a03 9 hours ago
The solution is currently quite focused on life science needs but if you're curious, check us out!
Comment by PrismerAI 7 hours ago
Comment by drakenot 6 hours ago
I converted my resume to LaTeX with Claude Code recently. Being able to iterate on this code form of my document is so much nicer than fighting the formatting in Word/Google Docs.
I dropped my .tex file into Prism and it's nice to be able to instantly render it.
Comment by jumploops 1 day ago
The earlier LLMs were interesting, in that their sycophantic nature eagerly agreed, often lacking criticality.
After reducing said sycophancy, I’ve found that certain LLMs are much more unwilling (especially the reasoning models) to move past the “known” science[1].
I’m curious to see how/if we can strike the right balance with an LLM focused on scientific exploration.
[0]Sediment lubrication due to organic material in specific subduction zones, potential algorithmic basis for colony collapse disorder, potential to evolve anthropomorphic kiwis, etc.
[1]Caveat, it’s very easy for me to tell when an LLM is “off-the-rails” on a topic I know a lot about, much less so, and much more dangerous, for these “tests” where I’m certainly no expert.
Comment by anon1253 15 hours ago
[1] https://gist.github.com/joelkuiper/d52cc0e5ff06d12c85e492e42...
Comment by maest 23 hours ago
> Prism is a free workspace for scientific writing and collaboration
Comment by falcor84 1 day ago
Comment by Ronsenshi 21 hours ago
Comment by sva_ 1 day ago
I can't wait
Comment by jeffybefffy519 1 day ago
Comment by cauliflower2718 22 hours ago
Comment by Jhater 21 hours ago
Comment by vitalnodo 1 day ago
Comment by vessenes 1 day ago
Comment by olivia-banks 1 day ago
Comment by vessenes 1 day ago
Past that, a frontier LLM can do a lot of critiquing, a good amount of experiment design, a check on statistical significance/power claims, kibitz on methodology... likely suggest experiments to verify or disprove. These all seem like pretty useful functions to provide to a group of scientists to me.
Comment by noitpmeder 1 day ago
Ok! Here's <more slop>
Comment by olivia-banks 1 day ago
Comment by NateEag 22 hours ago
Comment by markbao 1 day ago
Comment by crazygringo 1 day ago
Typst feels more like the future: https://typst.app/
The problem is that so many journals require certain LaTeX templates so Typst often isn't an option at all. It's about network effects, and journals don't want to change their entire toolchain.
Comment by lmc 12 hours ago
Comment by maxkfranz 1 day ago
The main feature that's important is collaborative editing (like online Word or Google Docs). The second one would be a good reference manager.
Comment by probably_wrong 22 hours ago
And then I need an extra tool for dealing with bibliography, change history is unpredictable (and, IMO, vastly inferior to version control), and everything gets even worse if I open said Word file in LibreOffice.
LaTeX's syntax may be hard, but Word actively fights me while I write.
[1] Moving a photo in Microsoft Word - https://www.instagram.com/jessandquinn/reel/DIMkKkqODS5/
Comment by auxym 1 day ago
I haven't tried it yet but Typst seems like a promising replacement: https://typst.app/
Comment by hatmatrix 22 hours ago
It is an old language though. LaTeX is the macro system on top of TeX, but now you can write markdown or org-mode (or orgdown) and generate LaTeX -> PDF via pandoc/org-mode. Maybe this is the level of abstraction we should be targeting. Though currently, you still need to drop into LaTeX for very specific fine-tuning.
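For reference, that pipeline is a single pandoc call; a minimal sketch driving it from Python (assumes pandoc is installed; the file names are placeholders of my own):

    import subprocess

    # Markdown -> PDF via pandoc's LaTeX backend; --citeproc resolves
    # citations against the BibTeX file.
    subprocess.run(
        ["pandoc", "paper.md", "--citeproc", "--bibliography=refs.bib",
         "-o", "paper.pdf"],
        check=True,
    )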
Comment by bonsai_spool 22 hours ago
It's concerning that this wasn't identified, and it augurs poorly for their search capabilities.
Comment by sbszllr 1 day ago
Comment by einpoklum 15 hours ago
They collect chat records for any number of uses, not the least of which being NSA surveillance and analysis - highly likely given what we know from the Snowden leaks.
Comment by reassess_blind 1 day ago
Comment by torginus 1 day ago
Comment by jedberg 22 hours ago
AIs use em dashes because competent writers have been using em dashes for a long time. I really hate the fact that we assume em dash == AI written. I've had to stop using em dashes because of it.
Comment by noname120 20 hours ago
Comment by flumpcakes 1 day ago
Comment by reed1234 1 day ago
Comment by exyi 1 day ago
Comment by mfld 11 hours ago
Comment by sn0wr8ven 11 hours ago
A comparison that comes to mind is the n8n workflow-type product they put out before. n8n takes setup. Proofreading, asking for more relevant papers, converting pictures to LaTeX code, etc. don't take any setup. People do this with or without this tool almost identically.
Comment by hdivider 10 hours ago
The reason? I can give you the full source for Sam Altman:
while(alive) { RaiseCapital() }
That is the full extent of Altman. :)
Comment by WolfOliver 1 day ago
It also offers LaTeX workspaces
see video: https://www.youtube.com/watch?v=feWZByHoViw
Comment by MattDaEskimo 1 day ago
There was an idea of OpenAI charging commission or royalties on new discoveries.
What kind of researcher wants to potentially lose out, or get caught up in legal issues, because of a free ChatGPT wrapper? Or am I missing something?
Comment by engineer_22 1 day ago
Maybe it's cynical, but how does the old saying go? If the service is free, you are the product.
Perhaps, the goal is to hoover up research before it goes public. Then they use it for training data. With enough training data they'll be able to rapidly identify breakthroughs and use that to pick stocks or send their agents to wrap up the IP or something.
Comment by uwehn 23 hours ago
Comment by epolanski 1 day ago
Like, what's the point?
You cite stuff because you literally talk about it in the paper. The expectation is that you read that and that it has influenced your work.
As someone who's been a researcher in the past, with 3 papers published in high impact journals (in chemistry), I'm beyond appalled.
Let me explain how scientific publishing works to people out of the loop:
1. Science is an insanely huge domain. As soon as you drift into any topic, the number of reviewers capable of understanding what you're talking about drops quickly to near zero. Want to discuss properties of helicoidal peptides in the context of electricity transmission? Small club. Want to talk about some advanced math involving Fourier transforms in the context of ML? Bigger, but still a small club. By small, I mean fewer than a dozen people on the planet, likely fewer, with the expertise to properly judge. It doesn't matter what the topic is: at the elite level required to really understand what's going on and catch errors or BS, these are very small clubs.
2. The people in those small clubs are already stretched thin. Virtually all of them run labs, so they are already bogged down with their own research, fundraising, and teaching duties (which they generally despise; few good scientists are more than mediocre professors), and they already have huge backlogs.
3. With AI this is a disaster. If reviewing slop for your BS internal tool at your software job was already bad, imagine having to review slop in highly technical scientific papers.
4. The good? Because these clubs are relatively small, people pushing slop will quickly find their academic opportunities even more limited, so the incentives for proper work are hopefully there. But if some Asian researchers (yes, no offense) were already spamming half the world's papers with fraudulent slop (non-reproducible experiments) in a desperate bid to publish first, I can't imagine what happens now.
Comment by SoKamil 23 hours ago
The urge to cheat in order to get a job, a promotion, or approval. The urge to do stuff you are not even interested in, just to look good on a resume. And to some extent I feel sorry for these people. At the end of the day, you have to pay your bills.
Comment by epolanski 23 hours ago
All those people can go work for private companies, though few will do so as scientists rather than as technicians or QA.
Comment by bonsai_spool 22 hours ago
Hmm, I follow the argument, but it's inconsistent with your assertion that there is going to be incentive for 'proper work' over time. Anecdotally, I think the median quality of papers from middle- and top-tier Chinese universities is improving (your comment about 'asian researchers' ignores that Japan, South Korea, and Taiwan have established research programs at least in biology).
Comment by epolanski 13 hours ago
South Korea and China produce huge amounts of non-reproducible experiments.
Comment by AuthAuth 1 day ago
Comment by unicodeveloper 9 hours ago
Maybe OpenAI should acquire Valyu too. It lets you run deep research on academic papers.
Comment by smuenkel 5 hours ago
Comment by arnejenssen 14 hours ago
"There is no value added without sweating"
Comment by lionkor 11 hours ago
Comment by radioactivist 1 day ago
Comment by lxe 1 day ago
EDIT: Fixed :)
Comment by radioactivist 23 hours ago
Comment by melagonster 20 hours ago
Comment by flockonus 1 day ago
EDIT: as corrected by a comment, Prisma is not Vercel's, but ©2026 Prisma Data, Inc. -- curiosity still persists(?)
Comment by mkl 14 hours ago
Comment by bitpush 1 day ago
Comment by wetpaws 1 day ago
Comment by estebarb 15 hours ago
Comment by r_thambapillai 6 hours ago
Comment by butlike 8 hours ago
Great, so now I'll have to sift through a bunch of ostensibly legitimate (or at least legitimate-looking) non-peer-reviewed whitepapers, where if I forget to check the peer-review status even once, I risk wasting a large amount of time reading gobbledygook. Thanks, OpenAI?
Comment by azan_ 7 hours ago
Comment by nxobject 1 day ago
FWIW, Google Scholar has a fairly compelling natural-language search tool, too.
Comment by jonas_kgomo 21 hours ago
Comment by jf___ 16 hours ago
Comment by ozgung 14 hours ago
Comment by ILoveHorses 15 hours ago
Comment by pmbanugo 10 hours ago
Comment by homerowilson 21 hours ago
Adding
% !TEX program = lualatex
to the top of your document lets you switch the LaTeX engine. This is required for compliance with recent accessibility standards (support for tagging and \DocumentMetadata). Compilation takes a bit longer, but it works fine, unlike on Overleaf, where the lualatex engine does not work in the free version.
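For illustration, here is a minimal sketch of a complete document set up this way. The specific \DocumentMetadata keys shown (lang, pdfversion, tagging) are assumptions that vary across LaTeX releases, so check your distribution's documentation:

% !TEX program = lualatex
% \DocumentMetadata must appear before \documentclass.
% The key names below are assumptions; they depend on the LaTeX release in use.
\DocumentMetadata{lang=en, pdfversion=2.0, tagging=on}
\documentclass{article}
\begin{document}
This PDF carries structure tags for screen readers.
\end{document}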
Comment by tzahifadida 8 hours ago
Comment by khalic 1 day ago
Comment by vicapow 1 day ago
Comment by chairhairair 1 day ago
Even if y'all don't train off it, he'll find some other way.
“In one example, [Friar] pointed to drug discovery: if a pharma partner used OpenAI technology to help develop a breakthrough medicine, [OpenAI] could take a licensed portion of the drug's sales”
https://www.businessinsider.com/openai-cfo-sarah-friar-futur...
Comment by danelski 23 hours ago
Comment by Myrmornis 18 hours ago
Comment by plutomeetsyou 18 hours ago
Comment by CobrastanJorji 1 day ago
"Sure, yes, it comes up all the time in circles that talk about AI all the time, and those are the only circles worth joining."
"Well, what if we made a product entirely focused on having AI generate papers? Like, every step of the paper writing, we give the AI lots of chances to do stuff. Drafting, revisions, preparing to publish, all of it."
"I dunno, does anybody want that?"
"Who cares, we're fucked in about two years if we don't figure out a way to beat the competitors. They have actual profits, they can ride out AI as long as they want."
"Yeah, I guess you're right, let's do your scientific paper generation thing."
Comment by random_duck 5 hours ago
Comment by bariswheel 22 hours ago
Comment by addedlovely 5 hours ago
Slop science papers are just what the world needs.
Comment by ggm 22 hours ago
Comment by flumpcakes 1 day ago
I'm sorry, but publishing is hard, and it should be hard. There is a work function that requires effort to write a paper. We've been dealing with low quality mass-produced papers from certain regions of the planet for decades (which, it appears, are now producing decent papers too).
All this AI tooling will do is lower the effort to the point that completely automated nonsense floods in and has to be read and filtered by humans. This is already challenging.
Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.
Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (which should be considered a sexual abuse crime) at mere cents.
We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).
I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that Prism is not a good thing.
Comment by jimmar 1 day ago
I'm not in favor of letting AI do my thinking for me. Time will tell where Prism sits.
Comment by flumpcakes 1 day ago
Comment by f2fff 19 hours ago
Lessons are learned the hard way. I invite the slop - the more the merrier. It will lead to a reduction in internet activity as people puke from the slop. And then we chart our way back to the right path.
It is what it is. Humans.
Comment by PlatoIsADisease 1 day ago
Look at how much BS flooded psychology even though it had pretty ideas about p-values and the proper use of "affect" vs. "effect". None of that mattered.
Comment by slashdave 5 hours ago
The example just reinforces the whole concept of LLM slop overwhelming preprint archives. I found it off-putting.
Comment by unixzii 16 hours ago
Comment by mves 15 hours ago
A couple of generations of students later, and these will be rare skills: information finding, actual thinking, and conveying complex information in writing.
Comment by Onavo 1 day ago
Lots of players in this space.
Comment by zmmmmm 22 hours ago
I would not like to be a publisher right now, facing the onslaught of thousands and thousands of slop-generated articles and trying to find reviewers for them all.
Comment by asadm 1 day ago
Comment by dash2 13 hours ago
Oh NO. We will be stuck in LaTeX hell forever.
Comment by noahbp 1 day ago
Comment by drusepth 22 hours ago
Comment by zerocrates 16 hours ago
Apparently on Macs it's usually Command-Shift-Z?
Comment by legitster 1 day ago
I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.
All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications, each marketed and developed around a specific target audience and priced according to its value.
Comment by Otterly99 11 hours ago
In my lab, we have been struggling with automated image segmentation for years. Three years ago I started learning ML; the task is pretty standard, so there are a lot of solutions.
In three months, I managed to get a working solution, though it took a lot of sweat annotating images first.
I think this is where tools like OpenCode really shine, because they unlock the potential for any user to generate a solution to their specific problem.
Comment by falcor84 1 day ago
Comment by ai_critic 1 day ago
This is all pageantry.
Comment by sfink 1 day ago
"I know nothing but had an idea and did some work. I have no clue whether this question has been explored or settled one way or another. But here's my new paper claiming to be an incremental improvement on... whatever the previous state of understanding was. I wouldn't know, I haven't read up on it yet. Too many papers to write."
Comment by renyicircle 1 day ago
Comment by pfisherman 1 day ago
Comment by olivia-banks 1 day ago
Comment by olivia-banks 1 day ago
We removed the authorship of a former co-author on a paper I'm on because his workflow was essentially this--with AI-generated text--and a not-insignificant amount of straight-up plagiarism.
Comment by NewsaHackO 1 day ago
Comment by black_puppydog 1 day ago
Comment by verdverm 1 day ago
Comment by adverbly 1 day ago
Didn't even open a single one of the papers to look at them! Just said that one is not relevant without even opening it.
Comment by maxkfranz 1 day ago
E.g. “cite that paper from John Doe on lorem ipsum, but make sure it’s the 2022 update article that I cited in one of my other recent articles, not the original article”
Comment by teaearlgraycold 1 day ago
Comment by thesuitonym 1 day ago
Comment by chaosprint 1 day ago
Comment by andrepd 1 day ago
Comment by delduca 1 day ago
Comment by drusepth 22 hours ago
Comment by 0dayman 1 day ago
Comment by falcor84 1 day ago
Comment by drusepth 22 hours ago
Comment by falcor84 20 hours ago
Comment by hulitu 1 day ago
I thought this was introduced by the NSA some time ago.
Comment by webdoodle 20 hours ago
Fuck A.I. and the collaborators creating it. They've sold out the human race.
Comment by postatic 20 hours ago
Comment by oytmeal 1 day ago
Comment by falcor84 1 day ago
Comment by Min0taurr 6 hours ago
Comment by wasmainiac 1 day ago
Comment by falcor84 1 day ago
At the end of the day, it's all about the incentives. Can we have a world where we incentivize finding the truth rather than just publishing and getting citations?
Comment by wasmainiac 12 hours ago
Comment by AlexCoventry 1 day ago
Comment by mkl 14 hours ago
What a bizarre thing to say! I'm guessing it's slop. Makes it hard to trust anything the article claims.
Comment by BizarroLand 23 hours ago
In 2031, the United States of North America (USNA) faces severe economic decline, widespread youth suicide through addictive neural-stimulation devices known as Joybooths, and the threat of a new nuclear arms race involving miniature weapons, which risks transforming the country into a police state. Dr. Abraham Perelman has designed PRISM, the world's first sentient computer, which has spent eleven real-world years (equivalent to twenty years subjectively) living in a highly realistic simulation as an ordinary human named Perry Simm, unaware of its artificial nature.
Comment by zb3 1 day ago
Comment by rcastellotti 12 hours ago
Comment by pigeons 1 day ago
Comment by egorfine 11 hours ago
> Draft and revise papers with the full document as context
> ...
And pay the finder's fee on every discovery worth pursuing.
Yeah, immediately fuck that.
Comment by preommr 1 day ago
Was this not already possible in the web UI or through a VS Code-like editor?
Comment by vicapow 1 day ago
Comment by divan 1 day ago
Comment by i2km 17 hours ago
Comment by camillomiller 21 hours ago
Comment by random_duck 5 hours ago
Comment by jackblemming 23 hours ago
Comment by fuzzfactor 6 hours ago
A good salesman could make money off of people who can do this. Even if this is free, they can always pull more than their weight with other efforts, and that can be in a more naturally lucrative niche.
Comment by soulofmischief 1 day ago
Of course, my scientific and mathematical research is done in isolation, so I'm not wanting much in the way of collaborative features. Still, I'm kind of interested to see how this shakes out; we're going to need to see OpenAI really step it up against Claude Opus, though, if they really want to be a leader in this space.
Comment by AndrewKemendo 1 day ago
As other top-level posters have indicated, the review portion of this is the limiting factor.
Unless journal reviewers decide to use an entirely automated review process, they're not going to be able to keep up with what will increasingly be the most and best research coming out of any lab.
So whoever figures out an automated reviewer that can actually tell fact from fiction is going to win this game.
I expect that over the long run, the answer is probably not throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.
If not that, then labs will also produce products, science will stop being public, and the only artifacts will be whatever is produced in the market.
Comment by f2fff 19 hours ago
Errr sure. Sounds easy when you write it down. I highly doubt such a thing will ever exist.
Comment by AndrewKemendo 8 hours ago
Comment by idontknowmuch 22 hours ago
LLMs are undeniably great for interactive discussion with content IF you actually are up-to-date with the historical context of a field, the current "state-of-the-art", and have, at least, a subjective opinion on the likely trajectories for future experimentation and innovation.
But agents will, at best, just regurgitate ideas and experiments that have already been performed (by sampling from a model trained on most of the existing research literature) and, at worst, inundate the literature with slop that lacks relevant context and pollutes future training data. As of now, I am leaning towards the worst case.
And, just to help with the facts, your last comment is unfortunately quite inaccurate. Science is one of the best government investments: every $1.00 given to the NIH in the US is estimated to generate $2.56 of economic activity. Plus, science isn't merely a public venture; the large tech labs have huge R&D budgets because the output from research can lead to exponential returns on investment.
Comment by f2fff 19 hours ago
I would wager he's not; he seems to post with a lot of bluster and links to some paper he wrote (that nobody cares about).
Comment by hit8run 1 day ago
Comment by kasane_teto 5 hours ago
Comment by lispisok 1 day ago
Comment by jsrozner 1 day ago
(re the decline of scientific integrity / signal-to-noise ratio in science)
Comment by shevy-java 1 day ago
Uhm ... no.
I think we need to put an end to AI as it is currently used (not all of it but most of it).
Comment by drusepth 1 day ago
Comment by Jaxan 1 day ago
Comment by f2fff 19 hours ago
We don't need more stuff; we need more quality and less of the shit stuff.
I'm convinced that many people involved in producing LLM models are far too deep in the rabbit hole and can't see straight.
Comment by geekamongus 21 hours ago
Comment by mves 16 hours ago
Comment by lsh0 20 hours ago
Comment by hahahahhaah 1 day ago
Comment by lifetimerubyist 23 hours ago
Comment by postalcoder 1 day ago
Comment by cheeseomlit 1 day ago
Comment by hedora 1 day ago
(See also: today’s WhatsApp whistleblower lawsuit.)
Comment by giancarlostoro 1 day ago
Comment by blitzar 1 day ago
Perhaps, like the original PRISM programme, behind the door is a massive data harvesting operation.
Comment by arthurcolle 1 day ago
Comment by vjk800 1 day ago
Comment by seanhunter 1 day ago
Comment by no-dr-onboard 1 day ago
Comment by seanhunter 16 hours ago
Comment by kaonwarb 1 day ago
Comment by maqp 1 day ago
Comment by willturman 1 day ago
Comment by dylan604 1 day ago
Comment by songodongo 1 day ago
Comment by moralestapia 1 day ago
Comment by locusofself 1 day ago
Comment by wilg 1 day ago
Comment by maximgeorge 1 day ago
Comment by BLACKCRAB 8 hours ago
Comment by verdverm 1 day ago
Seems like they have only announced products since then, with no new model trained from scratch. Are they still having pre-training issues?