Prism

Posted by meetpateltech 1 day ago


Comments

Comment by Perseids 16 hours ago

I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out [1]. Has the technical and scientific community in the US already forgotten this huge breach of trust? This is especially jarring at a time when the US is burning its political good-will at an unprecedented rate (at least unprecedented during the lifetimes of most of us) and talking about digital sovereignty has become mainstream in Europe. As a company trying to promote a product, I would stay as far away from that memory as possible, at least if you care about international markets.

[1] https://news.ycombinator.com/item?id=46787165

Comment by ZpJuUuNaQ5 13 hours ago

>I'm dumbfounded they chose the name of the infamous NSA mass surveillance program revealed by Snowden in 2013. And even more so that there is just one other comment among 320 pointing this out

I just think it's silly to obsess over words like that. There are many words that take on different meanings in different contexts and can be associated with different events, ideas, products, time periods, etc. Would you feel better if they named it "Polyhedron"?

Comment by jll29 11 hours ago

What the OP was talking about is the negative connotation that goes with the word; it's certainly a poor choice from a marketing point of view.

You may say it's "silly to obsess", but it's like naming a product "Auschwitz" and saying "it's just a city name" -- it ignores the power of what Geoffrey N. Leech called "associative meaning" in his taxonomy of "Seven Types of Meaning" (Semantics, 2nd ed., 1981): speaking that city's name evokes images of piles of corpses of gassed, undernourished human beings, walls of gas chambers with fingernail scratches, and lampshades made of human skin.

Comment by ZpJuUuNaQ5 10 hours ago

Well, I don't know anything about marketing and you might have a point, but the severity of impact of these two words is clearly very different, so it doesn't look like a good comparison to me. It would raise quite a few eyebrows and more if, for example, someone released a Linux distro named "Auschwitz OS"; meanwhile, even in the software world, there are multiple products that incorporate the word prism in various ways[1][2][3][4][5][6][7][8][9]. I don't believe that an average user encountering the word "prism" immediately starts thinking about the NSA surveillance program.

[1] https://www.prisma.io/

[2] https://prism-pipeline.com/

[3] https://prismppm.com/

[4] https://prismlibrary.com/

[5] https://3dprism.eu/en/

[6] https://www.graphpad.com/features

[7] https://www.prismsoftware.com/

[8] https://prismlive.com/en_us/

[9] https://github.com/Project-Prism/Prism-OS

Comment by bicepjai 1 hour ago

When you’re as high profile as OpenAI, you don’t get judged like everyone else. People scrutinize your choices reflexively, and that’s just the tax of being a famous brand: it amplifies both the upsides and the blowback.

Most ordinary users won't recognize the smaller products you listed, but they will recognize OpenAI, and they'll recognize Snowden/NSA-adjacent references because those have seeped into mainstream culture. And even if the average user doesn't immediately make the connection, someone in their orbit on social media almost certainly will, and they'll happily spin it into a theory for engagement.

Comment by vladms 10 hours ago

I think the idea was to explain why choosing such a name is a problem; it was not a comparison of intensity / importance.

I am not sure you can make an "other people are doing it too" argument. Lots of people do things that are not in their interest (smoking, to pick the easy one).

As others mentioned, I did not have the negative association with the word prism either, but I am not sure how one could check that anyway. I have been surprised over the years by what other people think, so who knows... Maybe someone with experience in marketing could explain how it is done.

Comment by adammarples 8 hours ago

But without the extremity of the Auschwitz example, it suddenly is not a problem. Prism is an unbelievably generic word, and I had not even heard of the Snowden one until now, nor would I remember it if I had. Prism is one step away from "Triangle" in terms of how generic it is.

Comment by jackphilson 3 hours ago

Triangle kind of reminds me of the Bermuda Triangle. You know how many people died there?

Comment by ConceptJunkie 2 hours ago

People? Do you know how many of them were murderers, fraudsters, and all-around finks? That's a terrible thing to mention.

Comment by order-matters 7 hours ago

One more perspective to add: while I did not know the NSA program was called Prism, it did give me pause to find out in this thread. OpenAI surely knows what it was called; at least they should. So it raises the question of why.

If they claim in a private meeting with people at the NSA that they did it as a tribute to them and a bid for partnership, who would anyone here be to say they didn't? Even if they didn't... which is only relevant because OpenAI processes an absolute shitton of data the NSA would be interested in.

Comment by helsinkiandrew 10 hours ago

And of course there's the prism itself:

https://en.wikipedia.org/wiki/Prism_(optics)

I remember the NSA Prism program, but hearing prism today I would think first of Newton, optics, and rainbows.

Comment by 946789987649 11 hours ago

Do a lot of people know that Prism is the name of the program? I certainly didn't, and I consider myself fairly switched on in general.

Comment by BlueTemplar 10 hours ago

It's likely to be an age thing too. Were you in hacker-related spaces when the Snowden scandal happened?

(I expect a much higher than average share of people in academia also part of these spaces.)

Comment by andrewinardeer 4 hours ago

We had a local child day care provider call themselves ISIS. That was a blast.

Comment by ConceptJunkie 2 hours ago

There was a TV show called "The Mighty Isis" in the 70s. What were they thinking?! (Well, with Joanna Cameron around, I wouldn't be able to think too clearly either.)

Comment by SoftTalker 4 hours ago

We had a local siding company call themselves "The Vinyl Solution". Some people are just tone-deaf.

Comment by FrustratedMonky 6 hours ago

I think the point is that on the sliding scale of words that are no longer acceptable to use, "Prism" does not reach the level of "Auschwitz".

Most people don't even remember Snowden at this point.

Comment by black_puppydog 9 hours ago

I have to say I had the same reaction. Sure, "prism" shows up in many contexts. But here it shows up in the context of a company and product that is already constantly in the news for its lackluster regard for other people's expectations of privacy and copyright, and for generally trying to "collect it all", as it were; and that, as GP mentioned, in an international context that doesn't put these efforts in the best light.

They're of course free to choose this name. I'm just also surprised they would do so.

Comment by jimbokun 6 hours ago

But the contexts are closely related.

Large-scale technology projects that people are suspicious and anxious about. There are a lot of people anxious that AI will be used for mass surveillance by governments. So you pick the name of another project that was used for mass surveillance by a government.

Comment by mc32 7 hours ago

Plus there are lots of “legacy” products with the name prism in them. I also don’t think the public makes the connection. It’s mainly people who care to be aware of government overreach who think it’s a bad word association.

Comment by mayhemducks 4 hours ago

You do realize that obsessing over words like that is a pretty major part of what programming and computer science are, right? Linguistics is highly intertwined with computer science.

Comment by bergheim 9 hours ago

Sure. Like Goebbels. Because they gobble things up.

Also, Nazism. But different context, years ago, so whatever I guess?

Hell, let's just call it Hitler. Different context!

Given what they do it is an insidious name. Words matter.

Comment by fortyseven 6 hours ago

You're comparing words with unique, widespread notoriety to a simple, everyday one. Try again.

Comment by rvnx 5 hours ago

Prism is very well known in tech as the name of a surveillance program.

Coming from a company involved in sharing data with intelligence services (it's the law, you can't escape it), this is not wise at all. Unless nobody at OpenAI had heard of it.

It was one of the biggest scandals in tech 10 years ago.

They could have called it "Workspace". Clearer, more useful, and no need for a code-word; that would have been fine for internal use.

Comment by ZpJuUuNaQ5 7 hours ago

So you have to resort to the most extreme examples in order to make it a problem? Do you also think of Hitler when you encounter the word "vegetarian"?

Comment by collingreen 7 hours ago

Is that what you think Hitler was famous for?

The extreme examples are an analogy that highlights the shape of the comparison with a more generally loathed / less niche example.

OpenAI is a thing with lots and lots of personal data that consumers trust OpenAI not to abuse or lose. They chose a product name that matches a US government program that secretly and illegally breached exactly that kind of trust.

Hitler the vegetarian isn't a great analogy because vegetarianism isn't related to what made Hitler bad. Something closer might be Exxon or BP making a hair gel called "Oilspill", or DuPont making a nail polish called "Forever Chem".

They could have chosen anything but they chose one specifically matching a recent data stealing and abuse scandal.

Comment by gegtik 7 hours ago

Huh... it seems like a head-scratcher why it would be relevant to this argument to select objectionable words instead of benign, inert words.

Comment by sunaookami 15 hours ago

>Has the technical and scientific community in the US already forgotten this huge breach of trust?

Have you ever seen the comment section of a Snowden thread here? A lot of users here call for Snowden to be jailed, call him a Russian asset, play down the reports, etc. These are either NSA sock puppet accounts or people who won't bite the hand that feeds them (employees of companies willing to breach their users' trust).

Edit: see my comment here in a Snowden thread: https://news.ycombinator.com/item?id=46237098

Comment by jll29 11 hours ago

What Snowden did was heroic. What was shameful was the world's underwhelming reaction. Where were all the images in the media of protest marches like those against the Vietnam War?

Someone once said "Religion is the opium of the people." Today, give people a mobile device and some doom-scrolling social media celebrity nonsense app, and they wouldn't notice if their own children didn't come home from school.

Comment by vladms 10 hours ago

Looking back, I think handing centralized control of various forms of media to private parties did much more harm in the long run than government surveillance.

For me the problem was not surveillance; the problem is addiction-focused app building (plus the monopolies), and that never seemed to be a secret. Only now are there some attempts to do something (like Australia and France banning children from social media, which I am not sure is feasible or effective, but at least it is more than nothing).

Comment by sunaookami 1 hour ago

Remember when people and tech companies protested against SOPA and PIPA? Remember the SOPA blackout day? Today even worse laws are passed with cheers from the HN crowd, such as the OSA. Embarrassing.

Comment by linkregister 6 hours ago

Protests in 2025 alone have outnumbered those during the Vietnam War.

Protesting is a poor proxy for American political engagement.

Child neglect and missing children rates are lower than they were 50 years ago.

Comment by linkregister 6 hours ago

Are you asserting that anyone who disagrees with you is either part of a propaganda campaign or a cynical insider? Nobody who opposes you has a truly held belief?

Comment by sunaookami 1 hour ago

So you hate waffles?

Comment by TiredOfLife 14 hours ago

Him being (or, best case, becoming) a Russian asset turned out to be true.

Comment by omnimus 14 hours ago

As if it would matter for any of the revelations. And as if he had any other choice to avoid prison. Look at how it worked out for Assange.

Comment by jll29 11 hours ago

They both undertook something they believed in, and showed extreme courage.

And they did manage to get the word out. They are both relatively free now, but it is true, they both paid a price.

Idealism means following your principles despite that price, not escaping/evading the consequences.

Comment by BlueTemplar 9 hours ago

Assange became a Russian asset *while* in a whistleblowing-related job.

(And he is also the reason why Snowden ended up in Russia. Though it's possible that the flight plan they had was still the best one in that situation.)

Comment by Matl 9 hours ago

So exposing corruption of Western governments is not worthwhile because it 'helps' Russia? Aha, got it.

I am increasingly wondering what remains of the supposed superiority of the Western system if we're willing to compromise on everything to suit our political ends.

The point was supposed to be that the truth is worth having out there for the purpose of having an informed public, no matter how it was (potentially) obtained.

In the end, we may end up with everything we fear about China but worse infrastructure and still somehow think we're better.

Comment by BlueTemplar 19 minutes ago

No, exposing Western corruption is all well and good, but the problem is that at some point Assange seems to have decided "the enemy of my enemy is my friend", which was a very bad idea when applied to Putin's Russia.

Comment by observationist 4 hours ago

Obama and Biden chased him into a corner. They actually bragged about chasing him into Russia, because it was a convenient narrative to smear Snowden with after the fact.

It was Russia, or vanish into a black site, never to be seen or heard from again.

Comment by sunaookami 1 hour ago

In what way did it "turn out to be true"? Because he has Russian citizenship and is living in a country not allied with his home country, which is/was actively trying to kill him (and revoked his US passport)?

Comment by lionkor 11 hours ago

If the messenger has anything to do with Russia, even after the fact, we should dismiss the message and remember to never look up.

Comment by vezycash 12 hours ago

Truth is truth, no matter the source.

Comment by rvnx 5 hours ago

There is also the truth that you say, and the truth that you feel

Comment by jimmydoe 12 hours ago

He could have been a Chinese asset, but the CCP is a coward.

Comment by pageandrew 16 hours ago

These things don't really seem related at all. It's a pretty generic term.

Comment by Phelinofist 15 hours ago

FWIW, my immediate reaction was the same: "That reminds me of NSA PRISM".

Comment by addandsubtract 11 hours ago

It reminded me of the code highlighter[0], and the ORM Prisma[1].

[0] https://prismjs.com/

[1] https://www.prisma.io/

Comment by wmeredith 8 hours ago

It reminded me of the album cover of The Dark Side of the Moon by Pink Floyd.

Comment by karmakurtisaani 14 hours ago

Same here.

Comment by 3form 11 hours ago

Same, to the point where I was wondering if someone deliberately named it so. But I expect that whoever made this decision simply doesn't know or care.

Comment by kakacik 13 hours ago

I came here based on the headline expecting some more CIA & NSA shit; that word has been tarnished for a few decades in the better part of the IT community (the part that actually cares about this craft beyond a paycheck).

Comment by vaylian 15 hours ago

And yet, the name immediately reminded me of the Snowden revelations.

Comment by ImHereToVote 16 hours ago

They are farming scientists for insight.

Comment by WiSaGaN 10 hours ago

OpenAI has a former NSA director on its board. [1] This connection makes the dilution of the term "PRISM" in search results a potential benefit to NSA interests.

[1]: https://openai.com/index/openai-appoints-retired-us-army-gen...

Comment by JasonADrury 15 hours ago

This comment might make more sense if there was some connection or similarity between the OpenAI "Prism" product and the NSA surveillance program. There doesn't appear to be.

Comment by Schlagbohrer 15 hours ago

Except that this lets OpenAI gain research data and scientific ideas by stealing from their users, using their huge mass surveillance platform. So, tremendous overlap.

Comment by concats 15 hours ago

Isn't most research and scientific data already shared openly (usually in publications)?

Comment by cruffle_duffle 2 hours ago

"Except that this lets OpenAI gain research data and scientific ideas by stealing from their users, using their huge mass surveillance platform. So, tremendous overlap."

Even if what you say is completely untrue (and who really knows for sure)... it creates that mental association. It's a horrible product name.

Comment by isege 15 hours ago

This comment allows ycombinator to steal ideas from their users' comments, using their huge mass news platform. Tremendous overlap indeed.

Comment by teddyh 3 hours ago

We used to have "SEO spam", where people would try to create news (and other) articles associated with some word or concept to drown out some scandal associated with that same word or concept. The idea was that people searching Google for the word would see only the newly created articles, and not anything scandalous. This could be something similar, but aimed at future LLMs trained on these articles. If LLMs learn that the word "Prism" means a certain new thing in a surveillance context, they will unlearn the older association, thereby hiding the Snowden revelations.

Comment by wmeredith 8 hours ago

I get what you're saying, but that was 13 years ago. How long before the branding statute of limitations runs out on usage for a simple noun?

Comment by saidnooneever 13 hours ago

Tons of things are called prism.

(Full disclosure: yes, they will be handing over PII on demand under the same kinds of deals; this is "normal" - 2013 showed us that no one gives a shit.)

Comment by yayitswei 7 hours ago

Fwiw I was going to make the same comment about the naming, but you beat me to it.

Comment by bandrami 16 hours ago

I mean, it's also the name of the national engineering education journal and a few other things. There are only 14,000 five-letter words in English, so you're going to have collisions.

Comment by observationist 4 hours ago

I think it's probably just apparent to a small set of people; we're usually the ones yelling at the stupid cloud technologies that are ravaging online privacy and liberty, anyway. I was expecting some sort of OpenAI automated user data handling program, with the recent venture into adtech, but since it's a science project and nothing to do with surveillance and user data, I think it's fine.

If it was part of their adtech systems and them dipping their toe into the enshittification pool, it would have been a legendarily tone deaf project name, but as it is, I think it's fine.

Comment by CalRobert 9 hours ago

Do they care what anyone over 30 thinks?

Comment by johanyc 4 hours ago

I did not make the association at all

Comment by LordDragonfang 4 hours ago

Probably gonna get buried at the bottom of this thread, but:

There's a good chance they just asked GPT5.2 for a name. I know for a fact that when some of the OpenAI models get stuck in the "weird" state associated with LLM psychosis, three of the things they really like talking about are spirals, fractals, and prisms. Presumably, there's some general bias toward those concepts in the weights.

Comment by cruffle_duffle 2 hours ago

As a datapoint: when I read this headline, the very first thing I thought was "wasn't PRISM some NSA shit? Is OpenAI working with the NSA now?"

It's a horrible name for any product coming out of a company like OpenAI. People are super sensitive to privacy and government snooping and OpenAI is a ripe target for that sort of thinking. It's a pretty bad association. You do not want your AI company to be in any way associated with government surveillance programs no matter how old they are.

Comment by lrvick 10 hours ago

Considering OpenAI is deeply rooted in an anti-freedom ethos and surveillance capitalism, I think it is quite a self-aware and fitting name.

Comment by igleria 9 hours ago

money is a powerful amnesiac

Comment by chromanoid 12 hours ago

Sorry, did you read this https://blog.cleancoder.com/uncle-bob/2018/12/14/SJWJS.html?

I personally associate Prism with [Silverlight - Composite Web Apps With Prism](https://learn.microsoft.com/en-us/archive/msdn-magazine/2009...) due to personal reasons I don't want to talk about ;))

Comment by aa-jv 14 hours ago

>Has the technical and scientific community in the US already forgotten this huge breach of trust?

Yes, imho, there is a great deal of ignorance of the actual contents of the NSA leaks.

The agitprop against Snowden as a "Russian agent" has successfully occluded the actual scandal, which is that the NSA has built a totalitarian-authoritarian apparatus that is still in wide use.

Autocrats' general hubris about their own superiority has been weaponized against them. Instead of actually addressing the issue with America's repressive military industrial complex, they kill the messenger.

Comment by alfiedotwtf 14 hours ago

> Has the technical and scientific community in the US already forgotten this huge breach of trust?

We haven't forgotten... it's mostly that we're all jaded given that there have been zero ramifications, so what's the use of complaining - you're better off pushing shit up a hill.

Comment by alexpadula 12 hours ago

That’s funny af

Comment by aargh_aargh 15 hours ago

I still can't get over the Apple thing. Haven't enjoyed a ripe McIntosh since. </s>

Comment by vitalnodo 1 day ago

Previously, this existed as crixet.com [0]. At some point it used WASM for client-side compilation, and later transitioned to server-side rendering [1][2]. It now appears that there will be no option to disable AI [3]. I hope the core features remain available and won’t be artificially restricted. Compared to Overleaf, there were fewer service limitations: it was possible to compile more complex documents, share projects more freely, and even do so without registration.

On the other hand, Overleaf appears to be open source and at least partially self-hostable, so it’s possible some of these ideas or features will be adopted there over time. Alternatively, someone might eventually manage to move a more complete LaTeX toolchain into WASM.

[0] https://crixet.com

[1] https://www.reddit.com/r/Crixet/comments/1ptj9k9/comment/nvh...

[2] https://news.ycombinator.com/item?id=42009254

[3] https://news.ycombinator.com/item?id=46394937

Comment by crazygringo 1 day ago

I'm curious how it compares to Overleaf in terms of features? Putting aside the AI aspect entirely, I'm simply curious if this is a viable Overleaf competitor -- especially since it's free.

I do self-host Overleaf which is annoying but ultimately doable if you don't want to pay the $21/mo (!).

I do have to wonder for how long it will be free or even supported, though. On the one hand, remote LaTeX compiling gets expensive at scale. On the other hand, it's only a fraction of a drop in the bucket compared to OpenAI's total compute needs. But I'm hesitant to use it because I'm not convinced it'll still be around in a couple of years.

Comment by efficax 1 day ago

Overleaf is a little curious to me. What's the point? Just install LaTeX. Claude is very good at manipulating LaTeX documents and I've found it effective at fixing up layouts for me.

Comment by radioactivist 1 day ago

In my circles the killer features of Overleaf are the collaborative ones (easy sharing, multi-user editing with track changes/comments). Academic writing in my community basically went from emailed draft-new-FINAL-v4.tex files (or a shared folder full of those files) to basically people just dumping things on Overleaf fairly quickly.

Comment by bhadass 1 day ago

collaboration is the killer feature tbh. overleaf is basically google docs meets latex.. you can have multiple coauthors editing simultaneously, leave comments, see revision history, etc.

a lot of academics aren't super technical and don't want to deal with git workflows or syncing local environments. they just want to write their fuckin' paper (WTFP).

overleaf lets the whole research team work together without anyone needing to learn version control or debug their local texlive installation.

also nice for quick edits from any machine without setting anything up. the "just install it locally" advice assumes everyone's comfortable with that, but plenty of researchers treat computers as appliances lol.

Comment by joker666 5 hours ago

I am curious whether Git + a local install can solve this collaboration issue with pull requests?

Comment by jdranczewski 1 day ago

To add to the points raised by others, "just install LaTeX" is not imo a very strong argument. I prefer working in a local environment, but many of my colleagues much prefer a web app that "just works" to figuring out what MiKTeX is.

Comment by crazygringo 1 day ago

I can code in monospace (of course), but I just can't write in monospace markup. I need something approaching WYSIWYG. It's just how my brain works -- I need the italics to look like italics, I need the footnote text to not interrupt the middle of the paragraph.

The visual editor in Overleaf isn't true WYSIWYG, but it's close enough. It feels like working in a word processor, not in a code editor. And the interface overall feels simple and modern.

(And that's just for solo usage -- it's really the collaborative stuff that turns into a game-changer.)

Comment by gmac 15 hours ago

Same for me. I wrote my PhD in LyX for that reason.

Comment by withinboredom 16 hours ago

I use Inkdrop for this, then pandoc to go from markdown to LaTeX, then a final typesetting pass. Inkdrop is great for WYSIWYG markdown editing.
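
A minimal sketch of that kind of pipeline (an illustration, not the commenter's actual setup; it assumes pandoc and latexmk are installed and on the PATH, and the file names are placeholders):

    import subprocess

    # Markdown notes -> standalone LaTeX document via pandoc
    # (file names here are placeholders).
    subprocess.run(["pandoc", "notes.md", "--standalone", "-o", "notes.tex"], check=True)

    # Final typesetting pass: latexmk re-runs pdflatex until
    # cross-references settle.
    subprocess.run(["latexmk", "-pdf", "notes.tex"], check=True)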

Comment by baby 20 hours ago

LaTeX is such a nightmare to work with locally.

Comment by MuteXR 12 hours ago

"Just install LaTeX" is really not a valid response when the LaTeX toolchain is a genuine nightmare to work with. I could do it but still use Overleaf. Managing that locally is just not worth it.

Comment by spacebuffer 23 hours ago

I'd use git in this case. I'm sure there are other reasons to use Overleaf, otherwise it wouldn't exist, but this seems like a solved issue with git.

Comment by jll29 11 hours ago

You can actually use git (it's also integrated in Overleaf).

You can even export ZIP files if you like (for any cloud service, it's not a bad idea to clone your repo once in a while to avoid being stuck in the unlikely case of downtime).

I have both a hosted instance (thanks to Overleaf/ShareLaTeX Ltd.) and I'm also a paying user of the pro group license (>500€/year) for my research team. It's great - especially for smaller research teams - to have the maintenance outsourced to a commercial provider.

On a good day, I'd spend 40% of my time in Overleaf, 10% in Sublime/Emacs, 20% in email, 10% in Google Scholar/Semantic Scholar, and 10% in EasyChair/OpenReview, with the rest in meetings.

Comment by universa1 16 hours ago

You can use git with Overleaf, but from practical experience: getting even "mathematically/technically inclined" people to consistently use git takes a lot of time... which one could spend on other, more fun things :-)

Comment by 3form 14 hours ago

The LaTeX ecosystem is a UX nightmare, coming from someone who had to deal with it recently. Overleaf just works.

Comment by warkdarrior 1 day ago

Collaboration is at best rocky when people have different versions of LaTeX packages installed. Also, merging changes from multiple people in git is a pain when dealing with scientific, nuanced text.

Overleaf ensures that everyone looks at the same version of the document and processes the document with the same set of packages and options.

Comment by lou1306 12 hours ago

The first three things are, in this order: collaborative editing, collaborative editing, collaborative editing. Seriously, this cannot be overstated.

Then: The LaTeX distribution is always up-to-date; you can run it on limited resources; it has an endless supply of conference and journal templates (so you don't have to scavenge them yourself off a random conference/publisher website); Git backend means a) you can work offline and b) version control comes in for free. These just off the top of my head.

Comment by vicapow 1 day ago

The deeper I got, the more I realized that really supporting the entire LaTeX toolchain in WASM would mean simulating an entire Linux distribution :( We wanted to support Beamer, LuaLaTeX, mobile (which wasn't working with WASM because of resource limits), etc.

Comment by seazoning 1 day ago

We had been building literally the same thing for the last 8 months, along with a great browsing environment over arXiv -- we might just have to sunset it.

Any plans to integrate Typst anytime soon?

Comment by vicapow 1 day ago

I'm not against Typst. I think its integration would be a lot easier and more straightforward; I just don't know if it's really that popular yet in academia.

Comment by gunalx 16 hours ago

It's not yet, but it's gaining traction.

Comment by storystarling 10 hours ago

The WASM constraints make sense given the resource limits, especially for mobile. If you are moving that compute server-side though I am curious about the unit economics. LaTeX pipelines are surprisingly heavy and I wonder how you manage the margins on that infrastructure at scale.

Comment by BlueTemplar 21 hours ago

But what's the point?

To end up with yet another shitty web app (shitty because it runs inside a browser, with a browser's interface)?

Why not focus efforts on making a proper program (you know, with IBM menu bars and keyboard shortcuts), but with collaborative tools too?

Comment by jll29 11 hours ago

You are right in pointing out that the Web browser isn't the most suitable UI paradigm for highly interactive applications like a scientific typesetting system/text editor.

I have occasionally lost a paragraph just by accidentally marking a few lines and pressing [Backspace].

But at the moment, there is no better option than Overleaf, and while I encourage you to write what you propose if you can, Overleaf will be the bar that any such system needs to be compared against.

Comment by BlueTemplar 9 hours ago

OP is talking about developing an alternative to Overleaf. But they are still trying to do it inside a browser!

Comment by regenschutz 8 hours ago

I was using Crixet before I switched over to Typst[0] for all of my writing. However, back when I did use Crixet, I never used its AI features. It was just a much better alternative to Overleaf for me. Sad to see that AI will be forced on all Crixet users now.

[0]: https://typst.app

Comment by swyx 23 hours ago

we did a podcast with the Crixet founder and Kevin Weil of OAI on the process: https://www.youtube.com/watch?v=W2cBTVr8nxU&pp=2Aa0Bg%3D%3D

Comment by vicapow 19 hours ago

thanks for hosting us on the pod!

Comment by songodongo 1 day ago

So this is the product of an acquisition?

Comment by vitalnodo 1 day ago

> Prism builds on the foundation of Crixet, a cloud-based LaTeX platform that OpenAI acquired and has since evolved into Prism as a unified product. This allowed us to start with a strong base of a mature writing and collaboration environment, and integrate AI in a way that fits naturally into scientific workflows.

They’re quite open about Prism being built on top of Crixet.

Comment by doctorpangloss 1 day ago

It seems bad for OpenAI to make this about LaTeX documents, which will now be visually associated with AI slop. The opposite of what anyone wants, really. Nobody wants you to know they used a chatbot!

Comment by eloisant 9 hours ago

This is just because LaTeX is widely used by researchers.

Also, yes: LaTeX being source code, it's much easier to get an AI to generate LaTeX than to integrate it into MS Word.

Comment by y1n0 20 hours ago

Please refrain from incorporating em dashes into your LaTeX document. In summary, the absence of em dashes in LaTeX.

Comment by amitav1 1 day ago

Am I missing something? LaTeX is associated with slop now?

Comment by nemomarx 23 hours ago

If a common AI tool produces LaTeX documents, the association will be created, yeah. Right now LaTeX would be a strong indicator of manual effort, right?

Comment by jasonfarnon 23 hours ago

I don't think so. I think LaTeX was one of academics' earlier use cases for ChatGPT, back in 2023. That's when I started noticing tables in every submitted paper looking way more sophisticated than they ever did. (The other early use case, of course, being grammar/spelling. Overnight, everyone got fluent and typos disappeared.)

Comment by jmdaly 21 hours ago

It's funny, I was reading a bunch of recent papers not long ago (I haven't been in academia in over a decade) and I was really impressed with the quality of the writing in most of them. I guess in some cases LLMs are the reason for that!

Comment by jll29 11 hours ago

I recently got wrongly accused by a reviewer of using LLMs to help write an article. He complained that our (my and my co-worker's) use of "to foster" read "like it was created by ChatGPT". (If our paper was fluent/eloquent, that's perhaps because having an M.A. in English literature helped.)

I don't think any particular word alone can be used as an indicator of LLM use, although certain formatting cues are good signals (dashes, smileys, response structure).

We were offended, but kept quiet to get the article accepted, and we changed some instances of some words to appease them (which thankfully worked). But the wrongful accusation left a bit of a bad aftertaste...

Comment by trentnelson 19 hours ago

If you’ve got an existing paragraph written that you just know could be rephrased more eloquently, and can describe the type of rephrasing/restructuring you want… LLMs absolutely slap at that.

Comment by MITSardine 22 hours ago

LaTeX is already standard in fields that have math notation, perhaps others as well. I guess the promise is that "formatting is automatic" (asterisk), so its popularity probably extends beyond math-heavy disciplines.

Comment by x-complexity 22 hours ago

> Right now latex would be a high indicator of manual effort, right?

...no?

Just one Google search for "latex editor" showed more than two on the first page.

https://www.overleaf.com/

https://www.texpage.com/

It's not that different from using a markdown editor.

Comment by i2km 16 hours ago

This is going to be the concrete block which finally breaks the back of the academic peer review system, i.e. it's going to be a DDoS attack on a system which didn't even handle the load before LLMs.

Maybe we'll need to go back to some sort of proof-of-work system, i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...

Comment by thomasahle 12 hours ago

I tried Prism, but it's actually a lot more work than just using Claude Code. The latter allows you to "vibe code" your paper with no manual interaction, while Prism actually requires you to review every change.

I actually think Prism promotes a much more responsible approach to AI writing than "copying from ChatGPT" or the like.

Comment by jltsiren 10 hours ago

Or it makes gatekeepers even more important than before. Every submission to a journal will be desk-rejected, unless it is vouched for by someone one of the editors trusts. And people won't even look at a new paper, unless it's vouched for by someone / published in a venue they trust.

Comment by aembleton 13 hours ago

Maybe OpenAI will sell you "Lens", which will assist with sorting through the submissions and narrowing down the papers worth reviewing.

Comment by haspok 15 hours ago

> This is going to be the concrete block which finally breaks the back of the academic peer review system

Exactly, and I think this is good news. Let's break it so we can fix it at last. Nothing will happen until a real crisis emerges.

Comment by suddenlybananas 11 hours ago

There are problems with the medical system; therefore, we should set hospitals on fire to motivate people to make them better.

Comment by port11 13 hours ago

Disrupting a system without good proposals for its replacement sounds like a recipe for disaster.

Comment by make3 16 hours ago

Overleaf basically already has the same thing

Comment by csomar 10 hours ago

That will just create a market for hand-writers. Good thing the economy is doing very well right now, so there aren't that many desperate people who will do it en masse and for peanuts.

Comment by boxed 14 hours ago

Handwriting is super easy to fake with plotters.

Comment by eternauta3k 10 hours ago

Is there something out there to simulate the non-uniformity and errors of real handwriting?

Comment by 4gotunameagain 15 hours ago

> i.e. only accepting physical mailed copies of manuscripts, possibly hand-written...

And you think the Indians will not hand-write the output of LLMs?

Not that I have a better suggestion myself...

Comment by tarcon 13 hours ago

This is an actual prompt in the video: "What are the papers in the literature that are most relevant to this draft and that I should consider citing?"

They probably wanted: "... that I should read?" So that this is at least marketed as more than a fake-paper generation tool.

Comment by mFixman 13 hours ago

You can tell that they consulted 0 scientists to verify the clearly AI-written draft of this video.

The target audience of this tool is not academics; it's OpenAI investors.

Comment by jtr1 6 hours ago

At last, our scientific literature can turn to its true purpose: mapping the entire space of arguable positions (and then some)

Comment by floitsch 10 hours ago

I felt the same, but then thought of experts in their field. For example, my PhD advisor would already know all these papers. For him the prompt would actually be similar to what was shown in the video.

Comment by syntex 1 day ago

The Post-LLM World: Fighting Digital Garbage https://archive.org/details/paper_20260127/mode/2up

Mini paper: the future isn't AI replacing humans; it's humans drowning in cheap artifacts. A new unit of measurement is proposed: verification debt. It also introduces: recursive garbage → model collapse.

(A little joke on Prism.)

Comment by Springtime 20 hours ago

> The Post-LLM World: Fighting Digital Garbage https://archive.org/details/paper_20260127/mode/2up

This appears to just be the output of LLMs itself? It credits GPT-5.2 and Gemini 3 exclusively as authors, has a public domain license (appropriate for AI output), and is only several paragraphs in length.

Comment by doodlesdev 19 hours ago

Which proves its own points! Absolutely genius! The cost asymmetry between producing garbage and checking for it has truly become a problem in recent years, with the advent of LLMs and generative AI in general.

Comment by parentheses 17 hours ago

Totally agree!

I feel like this means that working in any group where individuals compete against each other results in an AI vs AI content generation competition, where the human is stuck verifying/reviewing.

Comment by dormento 10 hours ago

> Totally agree!

Not a dig on your (very sensible) comment, but now I always do a double take when I see anyone effusively approving of someone else's ideas. AI turned me into a cynical bastard :(

Comment by syntex 14 hours ago

Yes, I did it as a joke inspired by the Prism release. But unexpectedly, it makes a good point. And the funny part for me was that the paper lists only LLMs as authors.

Also, in a world where AI output is abundant, we humans become the scarce resource: the "tools" in the system that provide some connection to reality (grounding) for the LLMs.

Comment by mrbonner 22 hours ago

Plot twist: humans become the new Proof of Work consensus mechanism. Instead of GPUs burning electricity to hash blocks, we burn our sanity verifying whether that Medium article was written by a person or a particularly confident LLM.

"Human Verification as a Service": finally, a lucrative career where the job description is literally "read garbage all day and decide if it's authentic garbage or synthetic garbage." LinkedIn influencers will pivot to calling themselves "Organic Intelligence Validators" and charge $500/hr to squint at emails and go "yeah, a human definitely wrote this passive-aggressive Slack message."

The irony writes itself: we built machines to free us from tedious work, and now our job is being the tedious work for the machines. Full circle. Poetic even. Future historians (assuming they're still human and not just Claude with a monocle) will mark this as the moment we achieved peak civilization: where the most valuable human skill became "can confidently say whether another human was involved."

Bullish on verification miners. Bearish on whatever remains of our collective attention span.

Comment by kinduff 21 hours ago

Human CAPTCHA exists to figure out whether your clients are human or not, so you can segment them and apply human pricing. Synthetics, of course, fall into different tiers. The cheaper ones.

Comment by direwolf20 21 hours ago

Bullish on verifiers who accept money to verify fake things

Comment by JBorrow 1 day ago

From my perspective as a journal editor and a reviewer, these kinds of tools cause many more problems than they actually solve. They make the barrier to entry for submitting vibed, semi-plausible journal articles much lower, which I understand some may see as a benefit. The drawback is that scientific editors and reviewers provide those services for free, as a community benefit. One example was a submitter using their undergraduate affiliation (in accounting) to submit a paper on cosmology, entirely vibe-coded and vibe-written. This just wastes our (already stretched) time. A significant fraction of submissions are now vibe-written and come from folks who are looking to 'boost' their CV (even having a 'submitted' publication is seen as a benefit), which is really not the point of these journals at all.

I'm not sure I'm convinced of the benefit of lowering the barrier to entry to scientific publishing. The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research). Connecting this together in a paper is indeed a challenge, and a skill that must be developed, but it is really a minimal part of the process.

Comment by SchemaLoad 1 day ago

GenAI largely seems like a DDoS on free resources. The effort to review this stuff is now massively more than the effort to "create" it, so really, what is the point of even submitting it? The reviewer could have generated it themselves. I'm seeing it in software development, where coworkers are submitting massive PRs they generated but hardly read or tested, shifting the real work to the PR review.

I'm not sure what the final state will be here, but it seems we are going to find it increasingly difficult to find any real factual information on the internet going forward, particularly as AI starts ingesting its own generated fake content.

Comment by cryzinger 1 day ago

More relevant than ever:

> The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

https://en.wikipedia.org/wiki/Brandolini%27s_law

Comment by trees101 22 hours ago

The P≠NP conjecture in CS says checking a solution is easier than finding one. Verifying a Sudoku is fast; solving it from scratch is hard. But Brandolini's Law says the opposite: refuting bullshit costs way more than producing it.

Not actually contradictory. Verification is cheap when there's a spec to check against. 'Valid Sudoku?' is mechanical. But 'good paper?' has no spec. That's judgment, not verification.
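
To make the cheap side of that asymmetry concrete, here is a minimal sketch of a spec checker (an illustrative example, assuming a completed 9x9 grid represented as a list of lists of ints):

    from typing import List

    def is_valid_sudoku(grid: List[List[int]]) -> bool:
        """Check a completed 9x9 grid against the Sudoku spec: every row,
        column, and 3x3 box must contain the digits 1-9 exactly once."""
        target = set(range(1, 10))
        rows = [set(row) for row in grid]
        cols = [set(col) for col in zip(*grid)]
        boxes = [
            {grid[r + dr][c + dc] for dr in range(3) for dc in range(3)}
            for r in range(0, 9, 3)
            for c in range(0, 9, 3)
        ]
        # 27 fixed-size set comparisons: checking against the spec is
        # mechanical, while filling the grid in the first place is a search
        # problem.
        return all(group == target for group in rows + cols + boxes)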

Comment by degamad 18 hours ago

> The P≠NP conjecture in CS says checking a solution is easier than finding one...

... for NP-hard problems.

It says nothing about the difficulty of finding or checking solutions of polynomial ("P") or exponential ("EXPTIME") problems.

Comment by bwfan123 21 hours ago

Producing BS can be equated to generating statements without caring about their truth value. Generating them is easy. Refuting them requires one to find a proof or a contradiction, which is a lot of work and is equivalent to "solving" the statement. As an analogy, refuting BS is like solving satisfiability, whereas generating BS is like generating propositions.

Comment by rspijker 15 hours ago

It's not contradictory because solving and producing bullshit are very different things. Generating fewer than 81 random numbers between 1 and 9 is probably also cheaper than verifying the correctness of a Sudoku.

Comment by monkaiju 1 day ago

Wow, the three comments from OC to here are all bangers; they combine into a really nice argument against these toys.

Comment by overfeed 1 day ago

> The effort to review this stuff is now massively more than the effort to "create" it

I don't doubt the AI companies will soon announce products that will claim to solve this very problem, generating turnkey submission reviews. Double-dipping is very profitable.

It appears LLM-parasitism isn't close to being done, and keeps finding new commons to spoil.

Comment by fooker 22 hours ago

There are a dozen startups that do this.

Comment by wmeredith 8 hours ago

> Seeing it in software development where coworkers are submitting massive PRs they generated but hardly read or tested. Shifting the real work to the PR review.

I've seen this complaint a lot of places, but the solution to me seems obvious. Massive PRs should be rejected. This was true before AI was a thing.

Comment by Spivak 1 day ago

In some ways it might be a good thing that shorthand signals of quality are being destroyed because it forces all of us to meaningfully engage with the work. No more LGTM +1 when every PR looks good.

Comment by Cornbilly 23 hours ago

This one is hilarious. https://hackerone.com/reports/3516186

If I submitted this, I'd have to punch myself in the face repeatedly.

Comment by toomuchtodo 23 hours ago

The great disappointment is that the humans submitting these just don’t care it’s slop and they’re wasting another human’s time. To them, it’s a slot machine you just keep cranking the arm of until coins come out. “Prompt until payout.”

Comment by InsideOutSanta 1 day ago

I'm scared that this type of thing is going to do to science journals what AI-generated bug reports are doing to bug bounties. We're truly living in a post-scarcity society now, except that the thing we have an abundance of is garbage, and it's drowning out everything of value.

Comment by willturman 1 day ago

In a corollary to Sturgeon's Law, I'd propose Altman's Law: "In the Age of AI, 99.999...% of everything is crap"

Comment by SimianSci 1 day ago

Altman's Law: 99% of all content is slop

I can get behind this. This assumes a tool will need to be made to help determine the 1% that isn't slop. At which point I assume we will have reinvented web search once more.

Has anyone looked at reviving PageRank?

Comment by _kb 9 hours ago

For images surely this is the next pivot for hot dog / not hot dog.

Comment by Imustaskforhelp 1 day ago

I mean, Kagi is probably the PageRank revival we are talking about.

I have heard from people here that Kagi can help remove slop from searches, so I guess yeah.

Although I am a DDG user and I love using DDG because it's free, I can see how for some people price is a non-issue and they might like Kagi more.

So Kagi / DDG (DuckDuckGo), yeah.

Comment by ectospheno 18 hours ago

I’ve been a Kagi subscriber for a while now. Recently picked up ChatGPT Business and now am considering dropping Kagi since I am only using it for trivial searches. Every comparison I’ve done with deep searches by hand and with AI ended up with the same results in far less time using AI.

Comment by jll29 1 day ago

Has anyone kept an eye on who uses which back-end?

DDG used to be a meta-search on top of Yahoo, which doesn't exist anymore. What do Gabriel and co-workers use now?

Comment by selectodude 1 day ago

I think they all use Bing now.

Comment by direwolf20 21 hours ago

Kagi is mostly stealing results from Google and disenshittifying them but mixes in other engines like Yandex and Mojeek and Bing.

DDG is Bing.

Comment by techblueberry 1 day ago

There's this thing where all the thought leaders in software engineering ask "What will change about building a business when code is free?", and while there are some cool things, I've also thought it could have some pretty serious negative externalities. I think this question is going to become big everywhere - business, science, etc. - which is: OK, you have all this stuff, but is it valuable? Which of it actually takes away value?

Comment by wmeredith 7 hours ago

The value is in the same place: solving people's problems.

Now that the code is cheaper (not free quite yet) skills further up the abstraction chain become more valuable.

Programming and design skills are less valuable. However, you still have to know what to build: product and UX skills are more valuable. You still have to know how to build it: software architect skills are more valuable.

Comment by jimbokun 6 hours ago

I think about this more and more when I see people online talking about their "agents managing agents" producing... something... 24/7/365.

Very rarely is there anything about WHAT these agents are producing and why it's important and valuable.

Comment by SequoiaHope 1 day ago

To be fair, the question “what will change” does not presume the changes will be positive. I think it’s the right question to ask, because change is coming whether we like it or not. While we do have agency, there are large forces at play which impact how certain things will play out.

Comment by jplusequalt 1 day ago

Digital pollution.

Comment by jcranmer 1 day ago

The first casualty of LLMs was the slush pile--the unsolicited submission pile at publishers. We've since seen bug bounty programs and open source repositories buckle under the load of AI-generated contributions. And all of these have the same underlying issue: the LLM makes it easy to produce things that don't immediately look like garbage, which makes the volume of submissions skyrocket while the time-to-reject also goes up slightly, because each one passes the first (but only the first) absolute-garbage filter.

Comment by storystarling 1 day ago

I run a small print-on-demand platform and this is exactly what we're seeing. The submissions used to be easy to filter with basic heuristics or cheap classifiers, but now the grammar and structure are technically perfect. The problem is that running a stronger model to detect the semantic drift or hallucinations costs more than the potential margin on the book. We're pretty much back to manual review which destroys the unit economics.

Comment by direwolf20 21 hours ago

If it's print-on-demand, why does it matter? Why shouldn't you accept someone's money to print slop for them?

Comment by wmeredith 7 hours ago

Some book houses print on demand for wide audiences. It's not just for the author.

Comment by lupire 22 hours ago

Why would detecting AI be more expensive than creating it?

Comment by jll29 1 day ago

Soon, poor people will talk to an LLM; rich people will get human medical care.

Comment by Spivak 1 day ago

I mean I'm currently getting "expensive" medical care and the doctors are still all using AI scribes. I wouldn't assume there would be a gap in anything other than perception. I imagine doctors that cater to the fuck you rich will just put more effort into hiding it.

No one, at all levels, wants to do notes.

Comment by golem14 21 hours ago

My experience has been that the transcriptions are way more detailed and correct when doctors use these scribes.

You could argue that not writing down everything provides a greater signal-noise ratio. Fair enough, but if something seemingly inconsequential is not noted and something is missed, that could worsen medical care.

I'm not sure how this affects malpractice claims - it's now easier to prove (with notes) that the doc "knew" about some detail that would otherwise not have been noted down.

Comment by jll29 1 day ago

I totally agree. I spend my whole day from getting up to going to bed (not before reading HN!) on reviews for a conference I'm co-organizing later this year.

So I was not amused by this announcement at all, however easy it may make my own life as an author (I'm pretty happy to do my own literature search, thank you very much).

Also remember, we have no guarantee that these tools will still exist tomorrow, all these AI companies are constantly pivoting and throwing a lot of things at the wall to see what sticks.

OpenAI chose not to build a serious product, as there is no integration with the ACM DL, the IEEE DL, SpringerNatureLink, the ACL Anthology, Wiley, Cambridge/Oxford/Harvard University Press etc. - only papers that are not peer reviewed (arXiv.org) are available/have been integrated. Expect a flood of BS your way.

When my students submit a piece of writing, I can ask them to orally defend their opus maximum (more and more often, ChatGPT's...); I can't do the same with anonymous authors.

Comment by MITSardine 22 hours ago

Speaking of conferences, might this not be the way to judge this work? You could imagine only orally defended work being publishable, or at least having the prestige of vetting, in a bit of an old-school science revival.

Comment by Majromax 7 hours ago

Chicken and egg problem: since conferences have limited capacity, you need to pre-filter submissions to see who gets a presentation spot.

Comment by lupire 22 hours ago

Self-solving problem: AI oral exam administration: https://www.gatech.edu/news/2024/09/24/ai-oral-assessment-to...

Comment by bloppe 1 day ago

I wonder if there's a way to tax the frivolous submissions. There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication. If you're confident in your paper, you can think of it as a deposit. If you're spamming journals, you're just going to pay for the wasted time.

Maybe you get reimbursed for half, as long as there are no obvious hallucinations.

Comment by JBorrow 1 day ago

The journal that I'm an editor for is 'diamond open access', which means we charge no submission fees and no publication fees, and publish open access. This model is really important in allowing legitimate submissions from a wide range of contributors (e.g. PhD students in countries with low levels of science funding). Publishing in a traditional journal usually costs around $3000.

Comment by NewsaHackO 1 day ago

Those journals are really good for getting practice in writing and submitting research papers, but sometimes they are already seen as less impactful because of the quality of accepted papers. At least where I am at, I don't think the advent of AI writing is going to affect how they are seen.

Comment by agnishom 22 hours ago

In the field of Programming Languages and Formal Methods, many of the top journals and conference proceedings are open access

Comment by lupire 22 hours ago

Who pays the operating expenses?

Comment by methuselah_in 1 day ago

Welcome to the new world of fake stuff, I guess.

Comment by azan_ 7 hours ago

You must have no idea how scientific publishing works. The typical acceptance rate for an OK/good journal is 10-20% (and it was like that even before LLMs). Also, it's a great idea to make the business of scientific publishing even more predatory - now scientists writing articles for free, reviewing for free, and then having to pay for publication will also have to pay to even submit something, with a 90% chance of rejection. Also think about what kind of incentives that will create.

Comment by willturman 23 hours ago

If the penalty for a crime is a fine, then that law exists only for the lower class

In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis. A fine isn't going to stop tobacco companies from spamming submissions saying smoking doesn't cause lung cancer, or social media companies from spamming submissions saying their products aren't detrimental to mental health.

Comment by Majromax 7 hours ago

> In other words, such a structure would not dissuade bad actors with large financial incentives to push something through a process that grants validity to a hypothesis.

That's not the right threat model. The existing peer review process is already weak against high-effort but conflicted research.

Instead, the threat model is one closer to that of spam, where the submitting authors don't care about the content of their submission at all but need X publications in high-impact outlets for their CV or grant application. Predatory journals exploit this as part of a pay-to-play problem, but the low reputation of those journals limits their desirable impact factor.

This threat model relies on frequent but low-quality submissions, and a submission fee would make taking multiple kicks at the can unviable.

Comment by bloppe 17 hours ago

I'm sure my crude idea has its shortcomings, but this feels superfluous. Deep-pocketed propagandists can do all sorts of things to pump their message whether a slop tax exists or not. There may or may not be existing countermeasures at journals for that. This just isn't really about that. It's about making sure that, in the process of spamming the journal, spammers also fund the review process, which would otherwise simply bleed time and money.

Comment by s0rce 1 day ago

That would be tricky: I often submitted to multiple high-impact journals, going down the list until someone accepted the paper. You try to ballpark where you can go, but it can be worth aiming high. Maybe this isn't a problem and there should be payment for the effort to screen the paper, but then I would expect the reviewers to be paid for their time.

Comment by noitpmeder 1 day ago

I mean your methodology also sounds suspect. You're just going down a list until it sticks. You don't care where it ends up (I'm sure within reason) just as long as it is accepted and published somewhere (again, within reason).

Comment by antasvara 1 day ago

No different from applying to jobs. Much like companies, there are a variety of journals with varying levels of prestige or that fit your paper better/worse. You don't know in advance which journals will respond to your paper, which ones just received submissions similar to yours, etc.

Plus, the time from submission to acceptance/rejection can be long. For cutting-edge science, you can't really afford to wait to hear back before applying to another journal.

All this to say that spamming 1,000 journals with a submission is bad, but submitting to the journals in your field that are at least decent fits for your paper is good practice.

Comment by niek_pas 1 day ago

Scientists are incentivized to publish in as high-ranking a journal as possible. You’re always going to have at least a few journals where your paper is a good fit, so aiming for the most ambitious journal first just makes sense.

Comment by jll29 1 day ago

It's standard practice, nothing suspect about their approach - and you won't go lower and lower and lower still because at some point you'll be tired of re-formatting, or a doctoral candidate's funding will be used up, or the topic has "expired" (= is overtaken by reality/competition).

Comment by azan_ 7 hours ago

Are you at all aware of how scientific publishing works?

Comment by mathematicaster 1 day ago

This is effectively standard across the board.

Comment by throwaway85825 1 day ago

Pay to publish journals already exist.

Comment by bloppe 1 day ago

This is sorta the opposite of pay to publish. It's pay to be rejected.

Comment by eloisant 8 hours ago

I'm pretty sure the reviewers of those are still volunteers; the publisher is just making even more money!

Comment by olivia-banks 1 day ago

I would think it would act more like a security deposit, and you'd get back 100%, no profit for the journal (at least in that respect).

Comment by pixelready 1 day ago

I’d worry about creating a perverse incentive to farm rejected submissions. Similar to those renter application fee scams.

Comment by mathematicaster 1 day ago

Pay to review is common in Econ and Finance.

Comment by skissane 1 day ago

Variation I thought of on pay-to-review:

Suppose you are an independent researcher writing a paper. Before submitting it to journals, you could hire a published author in that field to review it for you (independently of the journal), tell you whether it is submission-worthy, and help you improve it to the point that it is. If they wanted, they could be listed as coauthor, and if they didn't want that, at least you'd acknowledge their assistance in the paper.

Because I think there are two types of people who might write AI slop papers: (1) people who just don't care and want to throw everything at the wall and see what sticks; (2) people who genuinely desire to seriously contribute to the field, but don't know what they are doing. Hiring an advisor could help the second group of people.

Of course, I don't know how willing people would be to be hired to do this. Someone who was senior in the field might be too busy, might cost too much, or might worry about damage to their own reputation. But there are so many unemployed and underemployed academics out there...

Comment by utilize1808 1 day ago

Better yet, make a "polymarket" for papers where people can bet on which papers will make it, and rely on "expertise arbitrage" to punish spam.

Comment by ezst 1 day ago

Doesn't stop the flood, i.e. the unfair asymmetry between the effort to produce vs. effort to review.

Comment by utilize1808 23 hours ago

Not if submissions require some small mandatory bet.

Comment by direwolf20 21 hours ago

Now accepting money from slop companies to verify their slop as notslop

Comment by petcat 1 day ago

> There could be a submission fee that would be fully reimbursed iff the submission is actually accepted for publication.

While well-intentioned, I think this is just gate-keeping. There are mountains of research that result in nothing interesting whatsoever (aside from learning about what doesn't work). And all of that is still valuable knowledge!

Comment by ezst 1 day ago

Sure, but now we can't even assume that such research is submitted in good faith anymore. There just seems to be no perfect solution.

Maybe something like a "hierarchy (DAG?) of trusted peers", where groups like universities certify the relevance and correctness of papers by attaching their name and a global reputation score to them. When it's found that a paper is "undesirable" and doesn't pass a subsequent review, their reputation score deteriorates (with the penalty propagating along the whole review chain), in such a way that:

- the overall review model is distributed, hence scalable (everybody may play the certification game and build a reputation score while doing so)

- trusted/established institutions have an incentive to keep their global reputation score high, and either put a very high level of scrutiny into the review or delegate to very reputable peers

- "bad actors" are immediately punished and universally recognized as such

- "bad groups" (such as departments consistently spamming with low-quality research) become clearly identified as such within the greater organisation (the university), which can encourage a mindset of quality above quantity

- "good actors within a bad group" are not penalised either, because they can circumvent their "bad group" on the global review market by having reputable institutions (or intermediaries) certify their good work

There are loopholes to consider, like a black market of reputation trading (I'll pay you generously to sacrifice a bit of your reputation to get this bad science published), but even that cannot pay off long-term in an open system where all transactions are visible.

Incidentally, I think this may be a rare case where a blockchain makes some sense?
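
To make the propagation concrete, here's a toy sketch (all names hypothetical, not any real system):

    DECAY = 0.5  # each hop up the chain absorbs half of the remaining penalty

    # everybody starts with some reputation
    reputation = {"uni_a": 100.0, "dept_x": 100.0, "lab_y": 100.0}

    def certify(paper, chain):
        # chain = the certifiers vouching for the paper, outermost first
        paper["chain"] = chain

    def retract(paper, penalty):
        # the paper failed a subsequent review: walk the chain from the
        # closest certifier outward, decaying the penalty at each hop
        for certifier in reversed(paper["chain"]):
            reputation[certifier] -= penalty
            penalty *= DECAY

    paper = {"title": "Plausible-looking slop"}
    certify(paper, ["uni_a", "dept_x", "lab_y"])
    retract(paper, penalty=10.0)
    print(reputation)  # lab_y: 90.0, dept_x: 95.0, uni_a: 97.5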

Comment by jll29 1 day ago

You have some good ideas there, it's all about incentives and about public reputation.

But it should also be fair. I once caught a team at a small Indian branch of a very large three-letter US corporation violating the "no double submission" rule of two conferences: they submitted the same paper to two conferences, and both naturally landed in my reviewer inbox, for a topic I am one of the experts in.

But all the other employees should not be penalized by the violations of 3 researchers.

Comment by gus_massa 1 day ago

This idea looks very similar to journals! Each journal has a reputation; if they publish too much crap, the crap is not cited and the impact factor decreases. They also have an informal reputation, because the impact factor has problems of its own.

Anyway, how will universities check the papers? Someone must read the preprints, like the current reviewers. Someone must check the incoming preprints, find reviewers, and make the final decision, like the current editors. ...

Comment by amitav1 1 day ago

How would this work for independent researchers?

(no snark)

Comment by Rperry2174 1 day ago

This keeps repeating in different domains: we lower the cost of producing artifacts and the real bottleneck is evaluating them.

For developers, academics, editors, etc., in any review-driven system the scarcity is around good human judgement, not text volume. AI doesn't remove that constraint, and arguably puts more of a spotlight on the ability to separate the shit from the quality.

Unless review itself becomes cheaper or better, this just shifts work further downstream and disguises the change as "efficiency".

Comment by SchemaLoad 1 day ago

This has been discussed previously as "workslop", where you produce something that looks at surface level like high quality work, but just shifts the burden to the receiver of the workslop to review and fix.

Comment by lonelyasacloud 9 hours ago

> Unless review itself becomes cheaper or better, this just shifts work further downstream and disguises the change as "efficiency"

Or the providers of the models are capable of providing accepted/certified guarantees as to the quality of the output that their models and systems produce.

Comment by vitalnodo 1 day ago

This fits into the broader evolution of the visualization market. As data grows, visualization becomes as important as processing. This applies not only to applications, but also to relating texts through ideas close to transclusion in Ted Nelson’s Xanadu. [0]

In education, understanding is often best demonstrated not by restating text, but by presenting the same data in another representation and establishing the right analogies and isomorphisms, as in Explorable Explanations. [1]

[0] https://news.ycombinator.com/item?id=40295661

[1] https://news.ycombinator.com/item?id=22368323

Comment by pickleRick243 23 hours ago

I'm curious whether you'd be in favor of other forms of academic gatekeeping as well. Isn't the overall lower quality of submissions (an ongoing trend with a history far pre-dating LLMs) an issue? Isn't the real question (that you are alluding to) whether there should be limits to the democratization of science? If my tone seems acerbic, it is only because I sense cognitive dissonance between the anti-AI stance common among many academics and their purported support for inclusivity measures.

"which is really not the point of these journals at all"- it seems that it very much is one of the main points? Why do you think people publish in journals instead of just putting their work on the arxiv? Do you think postdocs and APs are suffering through depression and stressing out about their publications because they're agonizing over whether their research has genuinely contributed substantively to the academic literature? Are academic employers poring over the publishing record of their researchers and obsessing over how well they publish in top journals in an altruistic effort to ensure that the research of their employees has made the world a better place?

Comment by JBorrow 4 hours ago

I don't really understand how my saying that this tool isn't good for science is gatekeeping. The vibe-written papers that I am talking about have little-to-no valuable scientific content, and as such would always be rejected. It's just that it's way easier than before to produce something that _looks_ reasonable from a five-second glance, and that causes additional load on an already strained system.

I also don't understand your second paragraph at all.

Comment by agnishom 22 hours ago

> whether there should be limits to the democratization of science?

That is an interesting philosophical question, but not the question we are confronted with. A lot of LLM-assisted material has the _signals_ of novel research without having its _substance_.

Comment by pickleRick243 21 hours ago

LLMs are tools. In the hands of adept, conscientious researchers, they can only be a boon, assisting in the crafting of the research manuscript. In the hands of less adept, less conscientious users, they accelerate the production of slop. The poster I'm responding to seems to be noting an asymmetry: those who find the most use from these tools could be inept researchers who have no business submitting their work. This is because experienced researchers find writing up their results relatively easy.

To me, this is directly relevant to the issue of democratization of science. There seems to be a tool that is inconveniently resulting in the "wrong" people accelerating their output. That is essentially the complaint here rather than any criticism inherent to LLMs (e.g. water/resource usage, environmental impact, psychological/societal harm, etc.). The post I'm responding to could have been written if LLMs were replaced by any technology that resulted in less experienced or capable researchers disproportionately being able to submit to journals.

To be concrete, let's just take one of prism's capabilities- the ability to "turn whiteboard equations or diagrams directly into LaTeX". What a monstrous thing to give to the masses! Before, those uneducated cranks would send word docs to journals with poorly typeset equations, making it a trivial matter to filter them into the trash bin. Now, they can polish everything up and pass off their chicken scratch as respectable work. Ideally, we'd put up enough obstacles so that only those who should publish will publish.

Comment by varjag 6 hours ago

See point #1 in the famous Ten Signs a Claimed Mathematical Breakthrough is Wrong:

https://scottaaronson.blog/?p=304

By far the easiest quality signal is now out the window.

Comment by agnishom 18 hours ago

LLMs do assist adept researchers in crafting their manuscripts, but I do not think they make the quality much better.

My objection is not that they are the "wrong people". They are just regular people with excellent tools but not necessarily great scientific ideas.

Yes, it was easier to trash a crank's work before based on their unLaTeXed diagrams. Now they might have a very professional-looking diagram, but their work is still not great mathematics. Except that now the editor has a much harder time finding out who submitted a worthwhile paper.

In what way do you think the feature of "LaTeXing a whiteboard diagram" is democratizing mathematics? I do not think there are many people who have exceptional mathematical insights but are unable to publish them because they cannot typeset their work properly.

Comment by pickleRick243 16 hours ago

The democratization is mostly in allowing people from outside the field with mediocre mathematical ideas to finally put them to paper and submit them to mediocre journals. And occasionally it might help a modern day Ramanujan with "exceptional mathematical insights" and a highly unconventional background to not have his work dismissed as that of a crank. Yes, most people with exceptional mathematical insights can typeset quite well. Democratization as I understand the term has quite a higher bar though.

Being against this is essentially to be in favor of a form of discrimination by proxy: if you can't typeset, then likely you can't do research either. And wouldn't it be really annoying if those people who can't research could magically typeset? It's a fundamentally undemocratic impulse: since those who cannot typeset well are unlikely to produce quality mathematics, we can (and should) use this as an effective barrier to entry. If you replaced the ability to typeset with a number of other traits, these would be rather controversial positions.

Comment by agnishom 12 hours ago

It would indeed be nice if there were a mechanism to find people like Ramanujan who have excellent insights but cannot communicate them effectively.

But LLMs are not really helping. With all the beautifully typeset papers with immaculate prose, Ramanujan's papers are going to be buried deeper!

To some extent, I agree with you that it is "discrimination by proxy", especially with the typesetting example. But you could think of examples where cranks could very easily fool themselves into thinking that they understand the essence of the material without understanding the details. E.g., "I understand fluid dynamics very well. No, I don't need to work out the differential equations. AI can do the bean counting for me."

Comment by Eridrus 19 hours ago

The people on the inside often like all the gatekeeping.

Comment by MITSardine 22 hours ago

If I may be the Devil's advocate, I'm not sure I fully agree with "The hard part always has been, and always will be, understanding the research context (what's been published before) and producing novel and interesting work (the underlying research)".

Plenty of researchers hate writing and will only do it at gunpoint. Or rather, delegate it all to their underlings.

I don't see an issue with generative writing in principle. The Devil is in the details, but I don't see this as much different from "hey grad student, write me this paper". And generative writing already exists as copy-paste, which makes up like 90% of any random paper given the incrementality of it all.

I was initially a little indignant at the "find me some plausible refs and stick them in the paper" section of the video but, then again, isn't this what most people already do? Just copy-paste the background refs from the colleague's last paper introduction and maybe add one from a talk they saw in the meantime, plus whatever the group & friends produced since then.

My experience is most likely skewed (as all are), but I haven't met a permanent researcher that wrote their own papers yet, and most grad students and postdocs hate writing. Literally the only times I saw someone motivated to write papers (in a masochistic way) were just before applying to a permanent position or while wrapping up their PhD.

Onto your point, though, I agree this is somewhat worrisome in that, by reaction, the barrier to entry might rise by way of discriminating based on credentials.

Comment by Otterly99 13 hours ago

Thank you for bringing this nuanced view.

I also am not sure why so many people are vehemently against this. I would bet that at least 90% of researchers would agree that the writing-up is definitely not the part of the work they prefer (to stay polite). As you mentioned, the work is usually delegated to students, and those students already had access to LLMs if they wanted to generate it.

In my opinion, most of these tools become problematic when people use them without caution. Unfortunately, even in the sciences, people are not as careful and pragmatic as we would like to imagine they are, and a lot of people cut corners, especially in those "lesser" areas like writing and presenting your work.

Overall, I think this has the potential to reshape the publication system, which is long overdue.

Comment by raphman 13 hours ago

I am a rather slow writer who certainly might benefit from something like Prism.

A good tool would encourage me, help me while I am writing, and maybe set up barriers that keep me from taking shortcuts (e.g. pushing me to re-read the relevant paragraphs of a paper that I cite).

Prism does none of these things - instead it pushes me towards sloppy practices, such as sprinkling citations between claims. Why won't ChatGPT tell me how to build a bomb but Prism will happily fabricate fake experimental results for me?

Comment by jjcm 1 day ago

The comparison to make here is that a journal submission is effectively a pull request to humanity's scientific knowledge base. That PR has to be reviewed. We're already seeing the effects of this with open source code: the number of PR submissions has skyrocketed, overwhelming maintainers.

This is still a good step in the direction of AI-assisted research but, as you said, for the moment it creates as many problems as it solves.

Comment by maxkfranz 1 day ago

I generally agree.

On the other hand, the world is now a very different place compared to when several prominent journals were founded (1869-1880 for Nature, Science, Elsevier). The tacit assumptions upon which they were founded might no longer hold in the future. The world is going to continue to change, and the publication process as it stands might need to adapt for it to be sustainable.

Comment by ezst 1 day ago

As I understand it, the problem isn't publication or how it's changing over time; it's the challenge of producing new science when the existing body is muddied with plausible lies. That warrants a new process to assess the inherent quality of a paper, but even if that process were globally distributed, the cheats have a huge advantage given the asymmetry between the effort to vibe-produce and the tedious human review.

Comment by maxkfranz 1 day ago

That’s a good point. On the other hand, we’ve had that problem long before AI. You already need to mentally filter papers based on your assessment of the reputability of the authors.

The whole process should be made more transparent and open from the start, rather than adding more gatekeeping. There ought to be openness and transparency throughout the entire research process, with auditability automatically baked in, rather than just at the time of publication. One man's opinion, anyway.

Comment by mrandish 1 day ago

As a non-scientist (but long-time science fan and user), I feel your pain with what appears to be a layered, intractable problem.

> > who are looking to 'boost' their CV

Ultimately, this seems like a key root cause - misaligned incentives across a multi-party ecosystem. And as always, incentives tend to be deeply embedded and highly resistant to change.

Comment by egorfine 11 hours ago

> these kinds of tools cause many more problems than they actually solve

For whom? For OpenAI, these tools are definitely solutions. They are developing by throwing various AI-powered stuff at the wall to see what sticks. These tools also demonstrate to investors that innovation hasn't stalled and that AI usage is growing.

Same with Microsoft: none of the AI stuff they are shoving down users' throats was actually designed for the users. All of it exists only so that token usage grows for the shareholders to see.

Similar with Google, although no one can deny real innovation happening there.

Comment by i000 21 hours ago

Perhaps the real issue is the gatekeeping scientific publishing model. Journals had a place and role, and peer review is a critical aspect of the scientific process, but new times (the internet, citizen science, higher levels of scientific literacy, and now AI) diminish the benefits of journals creating "barriers to entry", as you put it.

Comment by desolate_muffin 21 hours ago

I for one hope not to live in a world where academic journals fall out of favor and are replaced by vibe-coded papers by citizen scientists with inflated egos from one too many “you’re absolutely right!” Claude responses.

Comment by i000 18 hours ago

Me neither, but what you present is a false dichotomy. Science used to be a pastime of the wealthy elites; it became a profession. By opening it up, progress was accelerated. The same will happen when publication is made more open and accessible.

Comment by BlueTemplar 9 hours ago

And then, Einstein was a "citizen scientist", wasn't he?

Comment by boplicity 1 day ago

Is it at all possible to have a policy that bans the submission of any AI written text, or text that was written with the assistance of AI tools? I understand that this would, by necessity, be under an "honor system" but maybe it could help weed out papers not worth the time?

Comment by currymj 1 day ago

this is probably a net negative as there are many very good scientists with not very strong English skills.

the early years of LLMs (when they were good enough to correct grammar but not enough to generate entire slop papers) were an equalizer. we may end up here but it would be unfortunate.

Comment by BlueTemplar 9 hours ago

But then, assuming we are fine with this state of things with LLMs:

why would it be upon them to submit in English, when reviewers and readers can instead use an LLM translator to read the paper?

Comment by jasonfarnon 23 hours ago

I'm certain your journal will be using LLMs in reviewing incoming articles, if they aren't already. I also don't think this is in response to the flood of LLM generated articles. Even if authors were the same as pre-LLM, journals would succumb to the temptation, at least at the big 5 publishers, which already have a contentious relationship with the referees.

Comment by jascha_eng 1 day ago

Why not filter out papers from people without credentials? And also publicly call them out and register them somewhere, so that their submission rights can be revoked by other journals and conferences after "vibe writing".

These acts just must have consequences so people stop doing them. You can use AI if you are doing it well, but if you are wasting everyone's time you should be excluded from the discourse altogether.

Comment by direwolf20 21 hours ago

What do credentials have to do with good science? There are already some roadblocks to publishing science in important-sounding journals, but it's important for the neutrality of the scientific process that in principle anyone can do it.

Comment by eloisant 9 hours ago

The real problem is that researchers are pushed to publish, as publication is the only way their career can advance. It's not even to "boost" your CV; as a researcher, your publication history IS your CV.

It was already a problem 25 years ago when I did my Ph.D., and I don't think things changed that much since then.

This encourages researchers to publish barely valuable results, or to cut one article into multiple ones with small variations to increase their publication count. It also encourages publishers to create more conferences and more journals to meet researchers' need to publish.

I remember many experienced professors telling me cynically about this, about all the techniques they had to blow up one small finding into many articles.

Anyway - research slop started way before AI. AI is probably going to make the problem worse, but the root issue has been there for a long time.

Comment by parentheses 17 hours ago

This dynamic would create even more gate-keeping using credentials, which is already a problem with academia.

Comment by keithnz 1 day ago

wouldn't AI actually be good for filtering, given it's going to be a lot better at knowing what has been published? It also seems possible that it could work out which papers have novel ideas, or at least come up with some kind of likelihood score.

Comment by usefulposter 1 day ago

Completely agree. Look at the independent research that gets submitted under "Show HN" nowadays:

https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...

Comment by lupsasca 1 day ago

I am very sympathetic to your point of view, but let me offer another perspective. First off, you can already vibe-write slop papers with AI, even in LaTeX format--tools like Prism are not needed for that. On the other hand, it can really help researchers improve the quality of their papers. I'm someone who collaborates with many students and postdocs. My time is limited and I spend a lot of it on LaTeX drudgery that can and should be automated away, so I'm excited for Prism to save time on writing, proofreading, making TikZ diagrams, grabbing references, etc.

Comment by fuzzfactor 8 hours ago

This is what I see: you need more of an active, accomplished helper at the keyboard.

If I can't have that, the next best thing is a helper while I'm at the keyboard my damn self.

>Why LaTeX is the bottleneck: scientists spend hours aligning diagrams, formatting equations, and managing references—time that should go to actual science, not typesetting

This is supposed to be only a temporary situation until people recover from the cutbacks of the 1970's, and a more comprehensive number of scientists once again have their own secretary.

Looks like the engineers at Crixet were tired of waiting.

Comment by CJefferson 1 day ago

What the heck is the point of a reference you never read?

Comment by lupsasca 1 day ago

By "grabbing references" I meant queries of the type "add paper [bla] to the bibliography" -- that seems useful to me!

Comment by nestes 1 day ago

Focusing in on "grabbing references", it's as easy as drag-and-drop if you use Zotero. It can copy/paste references in BibTeX format. You can even customize it through the BetterBibTeX extension.

If you're not a Zotero user, I can't recommend it enough.
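
For anyone who hasn't seen it: what Zotero drops in is just a plain BibTeX entry, something like (made-up reference):

    @article{doe2024example,
      author  = {Doe, Jane and Smith, John},
      title   = {An Example Title},
      journal = {Journal of Examples},
      year    = {2024},
      volume  = {12},
      pages   = {345--367},
      doi     = {10.0000/example}
    }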

Comment by MITSardine 22 hours ago

I have a terrible memory for details, so I'll admit that an LLM I can just tell "Find that paper by X's group on Method That Does This And That" and that finds me the paper is enticing. I say this because I abandoned Zotero once the list of refs became large enough that I could never find anything quickly.

Comment by noitpmeder 1 day ago

AI generating references seems like a hop away from absolute unverifiable trash.

Comment by SecretDreams 1 day ago

I appreciate and sympathize with this take. I'll just note that, in general, journal publications have gone considerably downhill over the last decade, even before the advent of AI. Frequency has gone up, quality has gone down, and checking whether everything in an article is actually valid gets harder as frequency goes up.

This is a space that probably needs substantial reform, much like grad school models in general (IMO).

Comment by parentheses 17 hours ago

It feels generally a bit dangerous to use an AI product to work on research when (1) it's free and (2) the company hosting it makes money by shipping productized research

Comment by roflmaostc 8 hours ago

I am not so skeptical about AI usage for paper writing, as the paper will often be public days later anyway (via pre-print servers such as arXiv).

So yes, you use it to write the paper but soon it is public knowledge anyway.

I am not sure there is much to learn from the authors' drafts.

Comment by GorbachevyChase 2 hours ago

I think the goal is to capture high quality training data to eventually create an automated research product. I could see the value of having drafts, comments, and collaboration discussions as a pattern to train the LLMs to emulate.

Comment by biscuit1v9 11 hours ago

Why do you think these points would make the usage dangerous?

Comment by z3t4 14 hours ago

They have to monetize somehow...

Comment by raincole 22 hours ago

I know many people have negative opinions about this.

I'd also like to share what I saw. Since GPT-4o became a thing, everyone I know who submits academic papers in my non-English-speaking country (N > 5) has been writing papers in our native language and translating them with GPT-4o exclusively. It has been the norm for quite a while. If hallucination is such a serious problem, it has been so for a year and a half.

Comment by direwolf20 21 hours ago

Translation is something Large Language Models are inherently pretty good at, without controversy, even though the output still should be independently verified. It's a language task and they are language models.

Comment by kccqzy 21 hours ago

Of course. Transformers were originally invented for Google Translate.

Comment by biophysboy 21 hours ago

Are they good at translating scientific jargon specific to a niche within a field? I have no doubt LLMs are excellent at translating well-trodden patterns; I'm a bit suspicious otherwise.

Comment by andy12_ 11 hours ago

In my experience of using it to translate ML work between English->Spanish|Galician, it seems to literally translate jargon too eagerly, to the point that I have to tell it to maintain specific terms in English to avoid it sounding too weird (for most modern ML jargon there really isn't a Spanish translation).

Comment by mbreese 20 hours ago

It seems to me that jargon would tend to be defined in one language and minimally adapted in other languages. So I'm not sure that would be much of a concern.

Comment by fuzzfactor 8 hours ago

I would look at non-English research papers along with the English ones in my field and the more jargon and just plain numbers and equations there were, the more I could get out of it without much further translation.

Comment by disconcision 20 hours ago

for better or for worse, most specific scientific jargon is already going to be in english

Comment by ivirshup 21 hours ago

I've heard that now that AI conferences are starting to check for hallucinated references, rejection rates are going up significantly. See also the NeurIPS hallucinated-references kerfuffle [1].

[1]: https://statmodeling.stat.columbia.edu/2026/01/26/machine-le...

Comment by doodlesdev 19 hours ago

Honestly, hallucinated references should simply get the submitter banned from ever applying again. Anyone submitting papers or anything with hallucinated references shall be publicly shamed. The problem isn't only the LLMs hallucinating, it's lazy and immoral humans who don't bother to check the output either, wasting everyone's time and corroding public trust in science and research.

Comment by lionkor 11 hours ago

I fully agree. Not reading your own references should be grounds for banning, but that's impossible to check. Hallucinated references cannot have been read, so, by definition, they should get people banned.

Comment by fuzzfactor 8 hours ago

>Not reading your own references

This could be considered in degrees.

Like when you only need a single table from another researcher's 25-page publication, you would cite it to be thorough but it wouldn't be so bad if you didn't even read very much of their other text. Perhaps not any at all.

Maybe one of the very helpful things is not just reading every reference in detail, but actually looking up every one in detail to begin with?

Comment by SilverBirch 11 hours ago

Yeah, that's not going to work for long. You can draw a line in 2023 and say "every paper before this isn't AI". But in the future, you're going to have AI-generated papers citing other AI slop papers that slipped through the cracks; because of the cost of doing research vs. the cost of generating AI slop, the slop papers will start to outcompete the real research papers.

Comment by BlueTemplar 24 minutes ago

How is this different from flat earth / creationist papers citing other flat earth / creationist papers ?

Comment by fuzzfactor 8 hours ago

>the cost of doing research vs the cost of generating

>slop papers will start to outcompete the real research papers.

This started to rear its ugly head when electric typewriters got more affordable.

Sometimes all it takes is faster horses and you're off to the races :\

Comment by utopiah 17 hours ago

It's quite a safe case if you maintain provenance because there is a ground truth to compare to, namely the untranslated paper.

Comment by asveikau 1 day ago

Good idea to name this after the spy program that Snowden talked about.

Comment by pazimzadeh 1 day ago

idk if OpenAI knew that Prism is already a very popular desktop app for scientists and that it's one of the last great pieces of optimized native software?

https://www.graphpad.com/

Comment by varjag 1 day ago

They don't care. Musk stole a chunk of Heinlein's literary legacy with Grok (which, unlike prism, wasn't a common word) and no one batted an eye.

Comment by DonaldPShimoda 23 hours ago

> Grok (which unlike prism wasn't a common word)

"Grok" was a term used in my undergrad CS courses in the early 2010s. It's been a pretty common word in computing for a while now, though the current generation of young programmers and computer scientists seem not to know it as readily, so it may be falling out of fashion in those spaces.

Comment by Fnoord 22 hours ago

Wikipedia about Groklaw [1]

> Groklaw was a website that covered legal news of interest to the free and open source software community. Started as a law blog on May 16, 2003, by paralegal Pamela Jones ("PJ"), it covered issues such as the SCO-Linux lawsuits, the EU antitrust case against Microsoft, and the standardization of Office Open XML.

> Its name derives from "grok", roughly meaning "to understand completely", which had previously entered geek slang.

[1] https://en.wikipedia.org/wiki/Groklaw

Comment by varjag 11 hours ago

Grok was specifically coined by Heinlein in _Stranger in a Strange Land_. It's been used in nerd circles for decades before your undergrad times but was never broadly known.

Comment by milleramp 21 hours ago

He is referencing the book Stranger in a Strange Land, written in 1961.

Comment by sincerely 21 hours ago

Grok has been nerd slang for a while; I bet it's in that ESR list of hacker lingo. And hell, if every company in Silicon Valley gets to name itself after something from Lord of the Rings, why can't he pay homage to an author he likes?

Comment by Fnoord 23 hours ago

He stole a letter, too.

Comment by tombert 23 hours ago

That bothers me more than it should. Every single time I see a new post about Twitter, I think there's some update for X11 or X Server or something, only to be reminded that Twitter's name has been changed.

Comment by intothemild 1 day ago

I very much doubt they knew much about what they were building if they didn't know this.

Comment by XCSme 21 hours ago

I thought this was about the Prism database ORM. Or was that Prisma?

Comment by bmaranville 1 day ago

Having a chatbot that can natively "speak" latex seems like it might be useful to scientists that already use it exclusively for their work. Writing papers is incredibly time-consuming for a lot of reasons, and having a helper to make quick (non-substantive) edits could be great. Of course, that's not how people will use it...

I would note that Overleaf's main value is as a collaborative authoring tool, not a great LaTeX experience, but science is ideally a collaborative effort.

Comment by matteocantiello 3 hours ago

At first I was a bit puzzled about why OpenAI would want to get involved in this somewhat niche project. Obviously, they don't give a damn about Overleaf's market, which is a drop in the bucket. What OpenAI is after -- I think -- is a very specific kind of "training data." Not Overleaf's finished papers (those are already public), but the entire workflow. The path from a messy draft to a polished paper captures how ideas actually form: the back-and-forth, the false starts, the collaborative refinement at the frontier of knowledge. That's an unusually distilled form of cognitive work, and I could imagine that's something one would want in order to train advanced models how to think.

Keeping LaTeX as the language is a feature, not a bug: it filters out noise and selects for people trained in STEM, who’ve already learned how to think and work scientifically.

Comment by plastic041 23 hours ago

The video shows a user asking Prism to find articles to cite and to put them in a bib file. But what's the point of citing papers that aren't referenced in the paper you're actually writing? Can you do that?

Edit: You can add papers that are not cited to the bibliography. The video is about the bibliography, and I was thinking about cited works.

Comment by parsimo2010 23 hours ago

A common approach to research is to do literature review first, and build up a library of citable material. Then when writing your article, you summarize the relevant past research and put in appropriate citations.

To clarify, there is a difference between a bibliography (a list of relevant works but not necessarily cited), and cited work (a direct reference in an article to relevant work). But most people start with a bibliography (the superset of relevant work) to make their citations.

Most academics who have been doing research for a long time maintain an ongoing bibliography of work in their field. Some people do it as a giant .bib file, some use software products like Zotero, Mendeley, etc. A few absolute psychos keep track of their bibliography in MS Word references (tbh people in some fields do this because .docx is the accepted submission format for their journals, not because they are crazy).

Comment by plastic041 23 hours ago

> a bibliography (a list of relevant works but not necessarily cited)

Didn't know there's a difference between a bibliography and cited works. Thank you.

Comment by suddenlybananas 11 hours ago

Yes but you should read your bibliography.

Comment by alphazard 23 hours ago

I once took a philosophy class where an essay assignment had a minimum citation count.

Obviously ridiculous, since a philosophical argument should follow a chain of reasoning starting at stated axioms. Citing a paper to defend your position is just an appeal to authority (a fallacy that they teach you about in the same class).

The citation requirement allowed the class to fulfill a curricular requirement that students needed to graduate, and therefore made the class more popular.

Comment by iterance 16 hours ago

In coursework, references are often a way of demonstrating the reading one did on a topic before committing to a course of argumentation. They also contextualize what exactly the student's thinking is in dialogue with, since general familiarity with a topic can't be assumed in introductory coursework. Citation minimums are usually imposed as a means of encouraging a student to read more about a topic before synthesizing their thoughts, and as a means of demonstrating that work to a professor. While there may have been administrative reasons for the citation minimum, the concept behind them is not unfounded, though they are probably not the most effective way of achieving that goal.

While similar, the function is fundamentally different from citations appearing in research. However, even professionally, it is well beyond rare for a philosophical work, even for professional philosophers, to be written truly ex nihilo as you seem to be suggesting. Citation is an essential component of research dialogue and cannot be elided.

Comment by bonsai_spool 23 hours ago

> Citing a paper to defend your position is just an appeal to authority

Hmm, I guess I read this as a requirement to find enough supportive evidence to establish your argument as novel (or at least supported in 'established' logic).

An appeal to authority explicitly has no reasoning associated with it; is your argument that one should be able to quote a blog as well as a journal article?

Comment by tyre 16 hours ago

It’s also a way of getting people to read things about the subject that they otherwise wouldn’t. I read a lot of philosophy because it was relevant to a paper I was writing, but wasn’t assigned to the entire class.

Comment by _bohm 21 hours ago

Huh? It's quite sensible to make reference to someone else's work when writing a philosophy paper, and there are many ways to do so that do not amount to an appeal to authority.

Comment by bogdan 17 hours ago

His point is that they asked for a minimum number of references, not references in general.

Comment by fxwin 15 hours ago

> Citing a paper to defend your position is just an appeal to authority (a fallacy that they teach you about in the same class).

an appeal to authority is fallacious when the authority is unqualified for the subject at hand. Citing a paper from a philosopher to support a point isn't fallacious, but "<philosophical statement> because my biology professor said so" is.

Comment by danelski 23 hours ago

Many people here talk about Overleaf as if it were the 'dumb' editor without any of these capabilities. It has had them for some time via the Writefull integration (https://www.writefull.com/writefull-for-overleaf). Who wins will probably be decided by brand recognition, with Overleaf having a better starting position in this field but money obviously being on OAI's side. With some of Writefull's features depending on ChatGPT's API, it's clear they are set to be priced out unless they do something smart.

Comment by rockskon 21 hours ago

Naming their tool after the program where private companies run searches on behalf of and give resulting customer data to the NSA....was certainly a choice.

Comment by razster 19 hours ago

Sir, my tin hat is on.

Comment by DominikPeters 1 day ago

This seems like a very basic overleaf alternative with few of its features, plus a shallow ChatGPT wrapper. Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.

Comment by qbit42 1 day ago

Loads of researchers have only used LaTeX via Overleaf and even more primarily edit LaTeX using Overleaf, for better or worse. It really simplifies collaborative editing and the version history is good enough (not git level, but most people weren't using full git functionality). I just find that there are not that many features I need when paper writing - the main bottlenecks are coming up with the content and collaborating, with Overleaf simplifying the latter. It also removes a class of bugs where different collaborators had slightly different TeX setups.

I think I would only switch from Overleaf if I was writing a textbook or something similarly involved.

Comment by mturmon 1 day ago

Getting close to the "why Dropbox when you can rsync" mistake (https://news.ycombinator.com/item?id=9224)

@vicapow replied to keep the Dropbox parallel alive

Comment by DominikPeters 14 hours ago

Yeah, I realized the parallel while I was writing my comment! I guess what I'm thinking is that a much better experience is available and there is no in-principle reason why Overleaf and Prism have to be so much worse, especially in the age of vibe-coding. Prism feels like the result of two days of Claude Code, when they should have invested at least five days.

Comment by vicapow 1 day ago

I could see it seeming that way because the UI is quite minimalist, but the AI capabilities are very extensive, imo, if you really play with it.

You're right that something like Cursor can work if you're familiar with all the requisite tooling (git, installing Cursor, installing LaTeX Workshop, knowing how it all works), which most researchers don't want to, and really shouldn't have to, figure out for their specific workflows.

Comment by yfontana 15 hours ago

> Certainly can’t compete with using VS Code or TeXstudio locally, collaborating through GitHub, and getting AI assistance from Claude Code or Codex.

I have a phd in economics. Most researchers in that field have never even heard of any of those tools. Maybe LaTeX, but few actually use it. I was one of very few people in my department using Zotero to manage my bibliography, most did that manually.

Comment by jstummbillig 1 day ago

Accessibility does matter

Comment by beklein 1 day ago

The Latent Space podcast just released a relevant episode today where they interviewed Kevin Weil and Victor Powell from, now, OpenAI, with some demos, background and context, and a Q&A. The YouTube link is here: https://www.youtube.com/watch?v=W2cBTVr8nxU

Comment by swyx 23 hours ago

oh i was here to post it haha - thank you for doing that job for me so I'm not a total shill. I really enjoyed meeting them and was impressed by the sheer ambition of the AI for Science effort at OAI - in some sense I'm making a 10000x smaller scale bet than OAI on AI for Science "taking off" this year with the upcoming dedicated Latent Space Science pod.

generally think that there's a lot of fertile ground for smart generalist engineers to make a ton of progress here this year + it will probably be extremely financially + personally rewarding, so I broadly want to create a dedicated pod to highlight opportunities available for people who don't traditionally think of themselves as "in science" to cross over into the "ai for hard STEM" because it turns out that 1) they need you 2) you can fill in what you don't know 3) it will be impactful/challenging/rewarding 4) we've exhausted common knowledge frontiers and benchmarks anyway so the only* people left working on civilization-impacting/change-history-forever hard problems are basically at this frontier

*conscious exaggeration sorry

Comment by beklein 14 hours ago

Wasn't aware you're so active on HN; sorry for stealing your karma.

Love the idea of a dedicated series/pod where normal people take on hard problems by using and leveraging the emergent capabilities of frontier AI systems.

Anyway, thanks for the pod!

Comment by vicapow 1 day ago

Hope you like it :D I'm here if you have questions, too

Comment by tyteen4a03 9 hours ago

If you're not a fan of OpenAI: I work at RSpace (https://github.com/rspace-os/rspace-web) and we're an open-source research data management system. While we're not as modern as Obsidian or NotebookLM (yet - I'm spearheading efforts to change that :)) we have been deployed at universities and institutions for years now.

The solution is currently quite focused on life science needs but if you're curious, check us out!

Comment by PrismerAI 7 hours ago

Prismer-AI team here. We’ve actually been building an open-source stack for this since early 2025. We were fed up with the fragmented paper-to-code workflow too. If you're looking for an open-source alternative to Prism that's already modular and ready to fork, check us out: https://github.com/Prismer-AI/Prismer

Comment by drakenot 6 hours ago

This is handy for maintaining a resume!

I converted my resume to LaTeX with Claude Code recently. Being able to iterate on this code form of my document is so much nicer than fighting the formatting in Word/Google Docs.

I dropped my .tex file into Prism and it's nice to be able to instantly render it.

Comment by jumploops 1 day ago

I’ve been “testing” LLM willingness to explore novel ideas/hypotheses for a few random topics[0].

The earlier LLMs were interesting, in that their sycophantic nature eagerly agreed, often lacking criticality.

After reducing said sycophancy, I’ve found that certain LLMs are much more unwilling (especially the reasoning models) to move past the “known” science[1].

I’m curious to see how/if we can strike the right balance with an LLM focused on scientific exploration.

[0]Sediment lubrication due to organic material in specific subduction zones, potential algorithmic basis for colony collapse disorder, potential to evolve anthropomorphic kiwis, etc.

[1]Caveat, it’s very easy for me to tell when an LLM is “off-the-rails” on a topic I know a lot about, much less so, and much more dangerous, for these “tests” where I’m certainly no expert.

Comment by anon1253 15 hours ago

Slightly off-topic but related: currently I'm in a research environment (biomedicine) where a lot of AI is used. Sometimes well, often poorly. So as an exercise I drafted some rules and commitments about AI and research ("Research After AI: Principles for Accelerated Exploration" [1]), I took the Agile manifesto as a starting point. Anyways, this might be interesting as a perspective on the problem space as I see it.

[1] https://gist.github.com/joelkuiper/d52cc0e5ff06d12c85e492e42...

Comment by maest 23 hours ago

Buried halfway through the article:

> Prism is a free workspace for scientific writing and collaboration

Comment by falcor84 1 day ago

It seems clear to me that this is about OpenAI getting telemetry and other training data with the intent of having their AI do scientific work independently down the line, and I'm very ambivalent about it.

Comment by Ronsenshi 21 hours ago

Just more coal for the hype train: AI companies can't afford a news cycle without anything AI. Stock prices must grow!

Comment by sva_ 1 day ago

> In 2025, AI changed software development forever. In 2026, we expect a comparable shift in science,

I can't wait

Comment by jeffybefffy519 1 day ago

I postulate that 90% of the reason OpenAI now has "variants" for different use cases is just to capture training data...

Comment by cauliflower2718 22 hours ago

ChatGPT lets you refuse to allow your content to be used for training (under Preferences -> Data controls), but Prism does not.

Comment by vitalnodo 1 day ago

With a tool like this, you could imagine an end-to-end service for restoring and modernizing old scientific books and papers: digitization, cleanup, LaTeX reformatting, collaborative or volunteer-driven workflows, OCR (like Mathpix), and side-by-side comparison with the original. That would be useful.

Comment by vessenes 1 day ago

Don’t forget replication!

Comment by olivia-banks 1 day ago

I'm curious how you think AI would aid in this.

Comment by vessenes 1 day ago

Tao’s doing a lot of related work in mathematics, so I can say that first of all literature search is a clearly valuable function frontier models offer.

Past that, a frontier LLM can do a lot of critiquing, a good amount of experiment design, a check on statistical significance/power claims, kibitz on methodology... likely suggest experiments to verify or disprove. These all seem like pretty useful functions to provide to a group of scientists to me.

Comment by noitpmeder 1 day ago

Replicate this <slop>

Ok! Here's <more slop>

Comment by olivia-banks 1 day ago

I don't think you understand what replication means in this context.

Comment by NateEag 22 hours ago

I think they do, and you missed some biting, insightful commentary on using LLMs for scientific research.

Comment by markbao 1 day ago

Not an academic, but I used LaTeX for years and it doesn't feel like what the future of publishing should use. It's finicky and takes so much markup to do simple things. A lab manager once told me about a study finding that people who used MS Word to typeset were more productive, and I can see that…

Comment by crazygringo 1 day ago

100% completely agreed. It's not the future, it's the past.

Typst feels more like the future: https://typst.app/

The problem is that so many journals require certain LaTeX templates so Typst often isn't an option at all. It's about network effects, and journals don't want to change their entire toolchain.
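
For anyone who hasn't tried it, a rough flavor of the difference (a minimal sketch from memory, so don't hold me to the syntax details):

    // whole-document rules are one-liners
    #set text(size: 11pt)

    = Introduction
    Inline math: $e^(i pi) + 1 = 0$.

    // spaces inside the dollar signs make it a display equation
    $ integral_0^1 x^2 dif x = 1/3 $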

Comment by lmc 12 hours ago

I've had some good initial results in going from typst to .tex with Claude (Opus 4.5) for an IEEE journal paper - idiomatic use of templates etc.

Comment by maxkfranz 1 day ago

LaTeX is good for equations, and LaTeX tools produce very nice PDFs, but I wouldn't want to write in LaTeX generally either.

The main feature that's important is collaborative editing (like online Word or Google Docs). The second one would be a good reference manager.

Comment by probably_wrong 22 hours ago

Academic here. Working in MS Word after years of using LaTeX is... hard. With LaTeX I can be reassured that the formatting will be 95% fine and the remaining 5% will come down to taste ("why doesn't this Figure show on this page?"), while in Word I'm constantly fighting the layout: delete one line? Your entire paragraph is now bold. Changed the font of the entire text? No, that one paragraph ignores you. Want to delete that line after that one Table? F you, you're not. There's a reason why this video joke [1] got 14M views.
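
(And for the figure case, the usual fix is just loosening the float specifier; file name made up here:)

    % needs \usepackage{graphicx} in the preamble
    \begin{figure}[!htbp]  % allow: here, top, bottom, or a float page
      \centering
      \includegraphics[width=0.8\linewidth]{figure.pdf}
      \caption{A figure that will actually land somewhere sensible.}
    \end{figure}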

And then I need an extra tool for dealing with the bibliography; change history is unpredictable (and, IMO, vastly inferior to version control); and everything gets even worse if I open said Word file in LibreOffice.

LaTeX's syntax may be hard, but Word actively fights me during writing.

[1] Moving a photo in Microsoft Word - https://www.instagram.com/jessandquinn/reel/DIMkKkqODS5/

Comment by auxym 1 day ago

Agreed. TeX/LaTeX is very old tech. Error recovery and error messages are very bad. Developing new macros in TeX is about as fun as you'd expect developing in a 70s-era language to be (i.e. probably similar to COBOL or old Fortran).

I haven't tried it yet but Typst seems like a promising replacement: https://typst.app/

Comment by hatmatrix 22 hours ago

That study must have compared beginners in LaTeX and MS Word. There is a learning curve, but LaTeX will often save more time in the end.

It is an old language though. LaTeX is the macro system on top of TeX, but now you can write Markdown or org-mode (or orgdown) and generate LaTeX -> PDF via pandoc/org-mode. Maybe this is the level of abstraction we should be targeting, though currently you still need to drop into LaTeX for very specific fine-tuning.
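
For instance, a minimal sketch of that pipeline, with illustrative file names and assuming pandoc plus a TeX engine are installed: write paper.md as below, then run "pandoc paper.md -o paper.pdf". The \newpage line is raw LaTeX that pandoc passes through untouched, which is exactly the "drop into LaTeX" escape hatch; pandoc's default smart extension also turns -- into an en dash.

    # Methods

    We measured the *activation energy* of each sample.

    \newpage

    # Results

    Energies ranged from 40--60 kJ/mol.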

Comment by bonsai_spool 22 hours ago

The example proposed in "and speeding up experimental iteration in molecular biology" has been done since at least the mid-2000s.

It's concerning that this wasn't identified, and it augurs poorly for their search capabilities.

Comment by sbszllr 1 day ago

The quality and usefulness of it aside, the primary question is: are they still collecting chats for training data? If so, it limits how comfortable, and sometimes even how permitted, people would be working on their yet-to-be-public research with this tool.

Comment by einpoklum 15 hours ago

They don't call it PRISM for nothing my friend...

They collect chat records for any number of uses, not the least of which being NSA surveillance and analysis - highly likely given what we know from the Snowden leaks.

Comment by reassess_blind 1 day ago

Do you think they used an em-dash in the opening sentence because they’re trying to normalise the AI’s writing style, or…

Comment by torginus 1 day ago

I haven't used MS Word in quite a while, but I distinctly remember it automatically changing hyphens into em dashes.

Comment by jedberg 22 hours ago

> because they’re trying to normalise the AI’s writing style,

AIs use em dashes because competent writers have been using em dashes for a long time. I really hate the fact that we assume em dash == AI written. I've had to stop using em dashes because of it.

Comment by noname120 20 hours ago

Likewise, I'm now reluctant to use any em dashes because unenlightened people immediately assume that it's AI. I used em dashes way before AI decided they were cool.

Comment by flumpcakes 1 day ago

LaTeX makes writing em dashes very easy, to the point that I would use them all the time in my academic writing. It's a shame that perfectly good typography is now read as a sign of slop/fraud.
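
For anyone unfamiliar: this is all it takes in LaTeX source, using the standard TeX hyphen ligatures, anywhere in the document body:

    A claim---with an aside---uses three hyphens.  % em dash
    Page ranges like 3--7 use two hyphens.         % en dash
    A hyphen-ated word uses just one.              % plain hyphen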

Comment by reed1234 1 day ago

Probably used their product to write it

Comment by exyi 1 day ago

... or they taught GPT to use em-dashes, because of their love for em-dashes :)

Comment by mfld 11 hours ago

I'd like to hypothesize a little bit about the strategy of OpenAI. Obviously, it is nice for academic users that there is a new option for collaborative LaTeX editing plus LLM integration for free. At the same time, I don't think there is much added revenue expected here, for example, from Pro features or additional LLM usage plans. My theory is that the value lies in the training data received from highly skilled academics in the form of accepted and declined suggestions.

Comment by sn0wr8ven 11 hours ago

It is nice for academics, but I would ask why? These aren't tasks you can't do yourself. Yes, it's all in one place, but it's not like doing the exact same thing previously was ridiculous to set up.

A comparison that comes to mind is the n8n-style workflow product they put out before. n8n takes setup. Proofreading, asking for more relevant papers, converting pictures to LaTeX code, etc. take no setup. People do this with or without this tool almost identically.

Comment by hdivider 10 hours ago

Even that would be quite niche for OpenAI. They raised far too much capital and now have to deliver on AGI, fast - or on an ultra-high-growth segment, which has not materialized.

The reason? I can give you the full source for Sam Altman:

while(alive) { RaiseCapital() }

That is the full extent of Altman. :)

Comment by WolfOliver 1 day ago

Check out MonsterWriter if you are concerned about this recent acquisition.

It also offers LaTeX workspaces.

see video: https://www.youtube.com/watch?v=feWZByHoViw

Comment by MattDaEskimo 1 day ago

What's the goal here?

There was an idea of OpenAI charging commission or royalties on new discoveries.

What kind of researcher wants to risk losing rights to their work, or getting caught up in legal issues, because of a free ChatGPT wrapper? Or am I missing something?

Comment by engineer_22 1 day ago

> Prism is free to use, and anyone with a ChatGPT account can start writing immediately.

Maybe it's cynical, but how does the old saying go? If the service is free, you are the product.

Perhaps the goal is to hoover up research before it goes public. Then they use it for training data. With enough training data, they'll be able to rapidly identify breakthroughs and use that to pick stocks, or send their agents to wrap up the IP, or something.

Comment by uwehn 23 hours ago

If you're looking for something like this for typst: any VSCode fork with AI (Cursor, Antigravity, etc) plus the tinymist extension (https://github.com/Myriad-Dreamin/tinymist) is pretty nice. Since it's local, it won't have the collaboration/sharing parts built in, but that can be solved too in the usual ways.

Comment by epolanski 1 day ago

Not gonna lie, I cringed when it asked to insert citations.

Like, what's the point?

You cite stuff because you literally talk about it in the paper. The expectation is that you read that and that it has influenced your work.

As someone who's been a researcher in the past, with 3 papers published in high impact journals (in chemistry), I'm beyond appalled.

Let me explain how scientific publishing works to people out of the loop:

1. Science is an insanely huge domain. As soon as you drift into any topic, the number of reviewers with the capability to understand what you're talking about drops quickly to near zero. Want to speak about properties of helicoidal peptides in the context of electricity transmission? Small club. Want to talk about some advanced math involving Fourier transforms in the context of ML? Bigger, but still a small club. When I say small, I mean a dozen people on the planet, likely fewer, with the expertise to properly judge. It doesn't matter what the topic is; at the elite level required to really understand what's going on and catch errors or BS, it's all very small clubs.

2. The people in those small clubs are already stretched thin. Virtually all of them run labs, so they are already bogged down following their own research, fundraising, and coping with teaching duties (which they generally despise; few good scientists are more than mediocre teachers, and they already have huge backlogs).

3. With AI this is a disaster. If having to review slop for your BS internal tool at your software job was already bad, imagine having to review slop in highly technical scientific papers.

4. The good? Because these clubs are relatively small, people pushing slop will quickly find their academic opportunities even more limited, so the incentives for proper work are hopefully there. But if Asian researchers (yes, no offense) were already spamming half the world's papers with cheated slop (non-reproducible experiments) in a desperate bid to publish before, I can't imagine now.

Comment by SoKamil 23 hours ago

It's as if not only the technology is to blame, but also the culture and incentives of the modern world.

The urge to cheat in order to get a job, a promotion, approval. The urge to do stuff you are not even interested in, just to look good on a resume. And to some extent I feel sorry for these people. At the end of the day you have to pay your bills.

Comment by epolanski 23 hours ago

This isn't about paying your bills, but about having a chance of becoming a full-time researcher or professor in academia, which is obviously the ideal career path for someone interested in science.

All those people can go work for private companies, but few will do so as scientists rather than technicians or QA.

Comment by bonsai_spool 22 hours ago

> But if asian researchers (yes, no offense), were already spamming half the world papers with cheated slop (non reproducible experiments) in the desperate bid of publishing before, I can't imagine now.

Hmm, I follow the argument, but it's inconsistent with your assertion that there is going to be incentive for 'proper work' over time. Anecdotally, I think the median quality of papers from middle- and top-tier Chinese universities is improving (your comment about 'asian researchers' ignores that Japan, South Korea, and Taiwan have established research programs at least in biology).

Comment by epolanski 13 hours ago

Japan is notoriously an exception in the region.

South Korea and China produce huge amounts of non-reproducible experiments.

Comment by AuthAuth 1 day ago

This does way less than I'd expect. Converting images to TikZ is nice, but some of the other applications demonstrated were horrible. There is no way anyone should be using AI to cite.

Comment by unicodeveloper 9 hours ago

Not too bad an acquisition though. Scientists need more tech tools, just like everyone else, to accelerate their work. The faster scientists are, the more discoveries and world-class solutions to problems we can have.

Maybe OpenAI should acquire Valyu too. They let you do deep research on academic papers.

Comment by pwdisswordfishy 1 day ago

Oh, like that mass surveillance program!

Comment by smuenkel 5 hours ago

That click to accept the bibliography without checking it is absolutely mind-boggling.

Comment by arnejenssen 14 hours ago

This assumes that the article, the artifact, is what's most valuable. But often it is the process of writing the article that has the most value. Prism can be a nice tool for increasing output, but the second-order consequence could be that the skills of deep thinking and writing will atrophy.

"There is no value added without sweating"

Comment by lionkor 11 hours ago

Work is value and produces sweat, and OpenAI sells just the sweat.

Comment by radioactivist 1 day ago

Is anyone else having trouble using even some of the basic features? For example, I can open a comment, but there doesn't seem to be any way to close it (I try clicking the checkmark and nothing happens). You also can't seem to edit comments once typed.

Comment by lxe 1 day ago

Thanks for surfacing this. If you click the "tools" button to the left of "compile", you'll see a list of comments, and you can resolve them from there. We'll keep improving and fixing things that might be rough around the edges.

EDIT: Fixed :)

Comment by radioactivist 23 hours ago

Thanks! (very quickly too)

Comment by melagonster 20 hours ago

Prism was already famous software before OpenAI used the name: https://www.graphpad.com/features

Comment by flockonus 1 day ago

Curious, in terms of trademark: could it infringe on Prisma (the very popular ORM/framework in Node.js)?

EDIT: as corrected by a comment, Prisma is not Vercel's, but ©2026 Prisma Data, Inc. -- the curiosity still persists(?)

Comment by mkl 14 hours ago

I think it may be a generic word that's hard to trademark or something, as the existing scientific analysis software called Prism (https://www.graphpad.com/) doesn't seem to be trademarked; the Trademarks link at the bottom goes to this list, which doesn't include Prism: https://www.dotmatics.com/trademarks

Comment by bitpush 1 day ago

https://github.com/prisma/prisma is its own thing, yeah? not affiliated with Vercel AFAICT.

Comment by estebarb 15 hours ago

I'm really surprised OpenAI went with LaTeX. ChatGPT still has issues maintaining LaTeX syntax: it happily switches to Markdown notation for quotes or emphasis. Gemini has a similar problem as well. I guess there aren't enough good LaTeX documents in the training set.
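
For example (my own illustration of the kind of slip described), the first two lines are idiomatic LaTeX, while the last two are the Markdown habits a model falls back into, which compile but typeset the wrong glyphs:

    \emph{important term}   % LaTeX emphasis
    ``a quotation''         % LaTeX curly quotes
    *important term*        % Markdown emphasis: prints literal asterisks
    "a quotation"           % straight quotes: wrong glyphs in traditional LaTeX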

Comment by r_thambapillai 6 hours ago

Didn't OpenAI just say they needed a code red to be relentlessly focused on making ChatGPT market-leading again? Why are they launching new products? Is the code red over? Is the Gemini threat considered done?

Comment by butlike 8 hours ago

> Prism is free to use, and anyone with a ChatGPT account can start writing immediately.

Great, so now I'll have to sift through a bunch of ostensibly legitimate (really just legitimate-looking) non-peer-reviewed whitepapers, where if I forget to check the peer-review status even once I risk wasting a large amount of time reading gobbledygook. Thanks, OpenAI?

Comment by azan_ 7 hours ago

Don't worry - most of the peer reviewed stuff is also bad.

Comment by nxobject 1 day ago

What they mean by "academic" is fairly limited here, if LaTeX is the main writing platform. What are their plans for expanding past that and working with, say, Jane Biomedical Researcher in a GSuite or Microsoft org, who has to use Word/Docs and a redlining-based collaboration workflow? I can certainly see why they're making it free at this point.

FWIW, Google Scholar has a fairly compelling natural-language search tool, too.

Comment by jonas_kgomo 21 hours ago

I actually found it quite Robin Hood of OpenAI to acquihire them. Basically, this startup was my favourite thing for the past few months, but it was experiencing server overload and other reliability issues; I think OpenAI taking them under their wing is a good/neutral storyline. I think it's a net good for science, given the OpenAI toolchain.

Comment by jf___ 16 hours ago

<typst>and just when i thought i was out they pull me back in</typst>

Comment by ozgung 14 hours ago

I don't see anything regarding the privacy of your data. Did I miss it, or do they just use your unpublished research and your prompts as a real human researcher to train their own AI researchers?

Comment by ILoveHorses 15 hours ago

So, basically SciGen [https://davidpomerenke.github.io/scigen.js/], but burning through more GPUs?

Comment by pmbanugo 10 hours ago

I don't see anything fancy here that Google doesn't already do, even better, with their Gemini products.

Comment by homerowilson 21 hours ago

Adding

% !TEX program = lualatex

to the top of your document allows you to switch the LaTeX engine. This is required for recent accessibility standards compliance (support for tagging and \DocumentMetadata). Compilation takes a bit longer, but it works fine, unlike on Overleaf, where the lualatex engine does not work in the free version.
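
A minimal sketch of what that looks like in practice (the exact \DocumentMetadata keys vary across recent LaTeX releases, so treat these as illustrative):

    % !TEX program = lualatex
    \DocumentMetadata{
      lang        = en-US,
      pdfversion  = 2.0,
      pdfstandard = ua-2,  % PDF/UA-2 accessibility profile
      tagging     = on     % request tagged PDF output
    }
    \documentclass{article}
    \begin{document}
    Tagged, accessible PDF output via LuaLaTeX.
    \end{document}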

Comment by gerdesj 21 hours ago

How on earth is that pronounced?

Comment by mkl 14 hours ago

TeX is pronounced "tek", or with a final sound like in Bach or loch. Derivatives like LaTeX and LuaLaTeX are similar.

Comment by gverrilla 20 hours ago

How on lua is that pronounced?

Comment by tzahifadida 8 hours ago

Since it offers collaboration for free, it can take a bite out of Overleaf's market.

Comment by khalic 1 day ago

All your papers are belong to us

Comment by vicapow 1 day ago

Users have full control over whether their data is used to help improve our models.

Comment by chairhairair 1 day ago

Never trust Sam Altman.

Even if yall don’t train off it he’ll find some other way.

“In one example, [Friar] pointed to drug discovery: if a pharma partner used OpenAI technology to help develop a breakthrough medicine, [OpenAI] could take a licensed portion of the drug's sales”

https://www.businessinsider.com/openai-cfo-sarah-friar-futur...

Comment by danelski 23 hours ago

Only the defaults matter.

Comment by Myrmornis 18 hours ago

Away from applied math/stats, physics, etc., not that many scientists use LaTeX. I'm not saying it's not useful, just that I don't think many scientists will feel like a LaTeX-based product is intended for them.

Comment by plutomeetsyou 18 hours ago

Economists definitely use LaTeX, but as a field it's at the intersection of applied math and the social sciences, so your point stands. I also know some data scientists in industry who do.

Comment by CobrastanJorji 1 day ago

"Hey, you know how everybody's complaining about AI making up totally fake science shit? Like, fake citations, garbage content, fake numbers, etc?"

"Sure, yes, it comes up all the time in circles that talk about AI all the time, and those are the only circles worth joining."

"Well, what if we made a product entirely focused on having AI generate papers? Like, every step of the paper writing, we give the AI lots of chances to do stuff. Drafting, revisions, preparing to publish, all of it."

"I dunno, does anybody want that?"

"Who cares, we're fucked in about two years if we don't figure out a way to beat the competitors. They have actual profits, they can ride out AI as long as they want."

"Yeah, I guess you're right, let's do your scientific paper generation thing."

Comment by random_duck 5 hours ago

So you build Overleaf, with bloat?

Comment by bariswheel 22 hours ago

I used Overleaf during grad school and it was easy enough; I'm interested to see what more value this will bring. Sometimes making fewer decisions is the better route, e.g. vi vs MS Word, but I won't say too much without trying it just yet.

Comment by addedlovely 5 hours ago

Ahhhh. It happily rewrote the example paper to be from Google AI and added references that supported that falsehood.

Slop science papers are just what the world needs.

Comment by ggm 22 hours ago

A competition for the longest sequence of \relax in a document ensues. If enough people do this, the AI will acquire merit and seek to "win" ...

Comment by flumpcakes 1 day ago

This is terrible for Science.

I'm sorry, but publishing is hard, and it should be hard. There is a work function that requires effort to write a paper. We've been dealing with low-quality, mass-produced papers from certain regions of the planet for decades (regions which, it appears, are now producing decent papers too).

All this AI tooling will do is lower the effort to the point that completely automated nonsense will flood in and need to be read and filtered by humans. This is already challenging.

Looking elsewhere in society, AI tools are already being used to produce scams and phishing attacks more effective than ever before.

Whole new arenas of abuse are now rife, with the cost of producing fake pornography of real people (which should be considered a sexual abuse crime) at mere cents.

We live in a little microcosm where we can see the benefits of AI because tech jobs are mostly about automation and making the impossible (or expensive) possible (or cheap).

I wish more people would talk about the societal issues AI is introducing. My worthless opinion is that Prism is not a good thing.

Comment by jimmar 1 day ago

I've wasted hours of my life trying to get LaTeX to format my journal articles to different journals' specifications. That's tedious typesetting that wastes my time. I'm all for AI tools that help me produce my thoughts with as little friction as possible.

I'm not in favor of letting AI do my thinking for me. Time will tell where Prism sits.

Comment by flumpcakes 1 day ago

The Prism video showed more than just typesetting. If OpenAI released tools that just helped you typeset or create diagrams from written text, that would be fine. But it's not; it's writing papers for you. Scientists/publishers really do not need the onslaught of slop this will create. How can we even trust qualifications in the post-AI world, where cheating is rampant at universities?

Comment by f2fff 19 hours ago

Nah this is necessary.

Lessons are learned the hard way. I invite the slop - the more the merrier. It will lead to a reduction in internet activity as people puke from the slop. And then we chart our way back to the right path.

It is what it is. Humans.

Comment by PlatoIsADisease 1 day ago

I just want replication in science. I don't care at all how difficult it is to write the paper. Heck, if we could spend more effort on data collection and less on communication, that sounds like a win.

Look at how much BS flooded psychology despite pretty ideas about p-values and the proper use of affect vs effect. None of that mattered.

Comment by slashdave 5 hours ago

Not a PR person myself, but why use a parody topic as the example paper? Couldn't someone have invented something realistic to show? Or, heck, just get permission to show a real paper?

The example just reinforces the whole concept of LLM slop overwhelming preprint archives. I found it off-putting.

Comment by unixzii 16 hours ago

It may be useful, but it also encourages people to stop writing their own papers.

Comment by mves 15 hours ago

As they demo in the video, it even encourages people to skip doing the research (which includes reading both relevant AND not-so-relevant papers in order to explore!). Instead, prompt "cite some relevant papers, please", and done. Hours of actual reading, thinking, and exploration reduced to a minimum.

A couple of generations of students later, and these will be rare skills: information finding, actual thinking, and conveying complex information in writing.

Comment by Onavo 1 day ago

It would be interesting to see how they would compete with the incumbents like

https://Elicit.com

https://Consensus.app

https://Scite.ai

https://Scispace.com

https://Scienceos.ai

https://Undermind.ai

Lots of players in this space.

Comment by zmmmmm 22 hours ago

They compare it to software development, but there is a crucial difference: by and large, software is an order of magnitude easier to verify than it is to create. By comparison, reviewing a vibe-generated manuscript will be MUCH more work than reviewing a piece of software of equivalent complexity. On top of that, review of academic literature is largely outsourced to the academic community for free. There is no model to support it that scales to an increased volume of output.

I would not like to be a publisher right now, facing the onslaught of thousands and thousands of slop-generated articles and trying to find reviewers for them all.

Comment by asadm 1 day ago

Disappointing, actually. What I need is a research "management" tool that lets me put in relevant citations, but also goes through the ENTIRE arXiv or Google Scholar and connects ideas, or finds novel ideas in random fields that somehow relate to what I am trying to solve.

Comment by dash2 13 hours ago

“LaTeX-native“

Oh NO. We will be stuck in LaTeX hell forever.

Comment by noahbp 1 day ago

They seem to have copied Cursor in hijacking ⌘Y shortcut for "Yes" instead of Undo.

Comment by drusepth 22 hours ago

In what applications is ⌘Y Undo and not ⌘Z? Is ⌘Y just a redundant alternative?

Comment by zerocrates 16 hours ago

Ctrl-Y is typically Redo, not Undo. Maybe that's what they meant.

Apparently on Macs it's usually Command-Shift-Z?

Comment by legitster 1 day ago

It's interesting how quickly the quest for the "Everything AI" has shifted. It's much more efficient to build use-case specific LLMs that can solve a limited set of problems much more deeply than one that tries to do everything well.

I've noticed this already with Claude. Claude is so good at code and technical questions... but frankly it's unimpressive at nearly anything else I have asked it to do. Anthropic would probably be better off putting all of their eggs in that one basket that they are good at.

All the more reason that the quest for AGI is a pipe dream. The future is going to be very divergent AI/LLM applications - each marketed and developed around a specific target audience, and priced respectively according to value.

Comment by Otterly99 11 hours ago

I completely agree.

In my lab, we had been struggling with automated image segmentation for years. Three years ago, I started learning ML; the task is pretty standard, so there are a lot of solutions.

Within three months, I managed to get a working solution, which just took a lot of sweat annotating images first.

I think this is where tools like OpenCode really shine, because they unlock the potential for any user to generate a solution to their specific problem.

Comment by falcor84 1 day ago

I don't get this argument. Our nervous system is also heterogeneous; why wouldn't AGI be based on an "executive functions" AI that manages per-function AIs?

Comment by ai_critic 1 day ago

Anybody else notice that half the video was just finding papers to decorate the bibliography with? Not like "find me more papers I should read and consider", but "find papers that are relevant that I should cite--okay, just add those".

This is all pageantry.

Comment by sfink 1 day ago

Yes. That part of the video was straight-up "here's how to automate academic fraud". Those papers could just as easily negate one of your assumptions. What even is research if it's not using cited works?

"I know nothing but had an idea and did some work. I have no clue whether this question has been explored or settled one way or another. But here's my new paper claiming to be an incremental improvement on... whatever the previous state of understanding was. I wouldn't know, I haven't read up on it yet. Too many papers to write."

Comment by renyicircle 1 day ago

It's as if it's marketed to the students who have been using ChatGPT for the last few years to pass courses and now need to throw together a bachelor's thesis. Bibliography and proper citation requirements are a pain.

Comment by pfisherman 1 day ago

That is such a bummer. At the time, it was annoying and I groused and grumbled about it; but in hindsight my reviewers pointed me toward some good articles, and I am better for having read them.

Comment by olivia-banks 1 day ago

I agree with this. This problem is only going to get worse once these people enter academia and face needing to publish.

Comment by olivia-banks 1 day ago

I've noticed this pattern, and it really drives me nuts. You should really be doing a comprehensive literature review before starting any sort of review or research paper.

We removed the authorship of a former co-author on a paper I'm on because his workflow was essentially this--with AI-generated text--and a not-insignificant amount of straight-up plagiarism.

Comment by NewsaHackO 1 day ago

There is definitely a difference between how senior researchers and students go about making publications. Students are basically told what topic to write a paper on or prepare data for, so they work backwards: write the paper (possibly doing some research along the way), then add references because they know they have to. For actual researchers, it would be a complete waste of time/funding to start a project on a question that has already been answered (and grant reviewers will know what has already been explored), so to avoid wasting their own time they have to do what you said and actually conduct a comprehensive literature review before even starting the work.

Comment by black_puppydog 1 day ago

Plus, this practice (just inserting AI-proposed citations/sources) is what has recently been behind some very embarrassing "editing" mistakes, notably in reports from public institutions. Now OpenAI lets us do pageantry even faster! <3

Comment by verdverm 1 day ago

It's all performance over practice at this point. Look to the current US administration as the barometer by which many are measuring their public perception.

Comment by adverbly 1 day ago

I chuckled at that part too!

Didn't even open a single one of the papers to look at them! Just said that one is not relevant without even opening it.

Comment by maxkfranz 1 day ago

A more apt example would have been to show finding a particular paper you want to cite, but you don’t want to be bothered searching your reference manager or Google Scholar.

E.g. “cite that paper from John Doe on lorem ipsum, but make sure it’s the 2022 update article that I cited in one of my other recent articles, not the original article”

Comment by teaearlgraycold 1 day ago

The hand-drawn diagram to LaTeX is a little embarrassing. If you load up Prism and create your first blank project, you can see the image. It looks like it's actually a LaTeX rendering of a diagram rendered in a hand-drawn style and then overlaid on a very clean image of a napkin. So you've proven that you can go from a rasterized LaTeX diagram back to equivalent LaTeX code. Interesting, but it probably will not hold up when it meets real-world use cases.

Comment by thesuitonym 1 day ago

You may notice that this is the way writing papers works in undergraduate courses. It's just another in a long line of examples of MBA tech bros gleaning an extremely surface-level understanding of a topic, then deciding they're experts.

Comment by chaosprint 1 day ago

As a researcher who has to use LaTeX, I used to use Overleaf, but lately I've been configuring it locally in VS Code. The configuration process on Mac is very simple. Considering there are so many free LLMs available now, I still won't subscribe to ChatGPT.

Comment by andrepd 1 day ago

"Chatgpt writes scientific papers" is somehow being advertised as a good thing. What is there even left to say?

Comment by delduca 1 day ago

Five seconds into reading and I had already spotted that it was written by AI.

Comment by drusepth 22 hours ago

We human writers love em dashes also ;)

Comment by 0dayman 1 day ago

In the end we're going to end up with papers written by AI, proofread by AI... summarized for readers by AI. I think this is just for them to remain relevant and be seen as still pushing something out.

Comment by falcor84 1 day ago

You're assuming a world where humans are still needed to read the papers. I'm more worried about a future world where AIs do all of the work of progressing science and humans just become bystanders.

Comment by drusepth 22 hours ago

Why are you worried about that world? Is it because you expect science to progress too fast, or too slow?

Comment by falcor84 20 hours ago

Too fast. It's already coding too fast for us to follow, and from what I hear, it's doing incredible work in drug discovery. I don't see any barrier to it getting faster and faster, and with proper testing and tooling, getting more and more reliable, until the role that humans play in scientific advancement becomes at best akin to that of managers of sports teams.

Comment by hulitu 1 day ago

> Introducing Prism Accelerating science writing and collaboration with AI.

I thought this was introduced by the NSA some time ago.

Comment by webdoodle 20 hours ago

Lol, yep. Now with enhanced A.I. terrorist tracking...

Fuck A.I. and the collaborators creating it. They've sold out the human race.

Comment by postatic 20 hours ago

OK, I don't care what people say, this would've helped me a lot during my PhD days fighting with LaTeX and diagrams. :)

Comment by oytmeal 1 day ago

Some things are worth doing the "hard way".

Comment by falcor84 1 day ago

Reminds me of that dystopian virtual sex scene in Demolition Man (slightly nsfw) - https://youtu.be/E3yARIfDJrY

Comment by Min0taurr 6 hours ago

Dog turd. It will be used to mine research data and train some sort of research AI model; do not trust it. I would much rather support Overleaf, which is made by academics for academics, than some vibe-coded alternative with deep data mining. No wonder we have so much slop in research at the moment.

Comment by wasmainiac 1 day ago

The state of publishing in academia was already a dumpster fire; why lower the friction further? It's not like writing was the hard part. Give it two years max and we will see hallucinations citing hallucinations, with independent repeatability out the window.

Comment by falcor84 1 day ago

That's one scenario, but I also see a potential scenario where this integration makes it easier to manage the full "chain of evidence" for claimed results, as well as replication studies and discovered issues, in order to then make it easier to invalidate results recursively.

At the end of the day, it's all about the incentives. Can we have a world where we incentivize finding the truth rather than just publishing and getting citations?

Comment by wasmainiac 12 hours ago

Possibly, but (1) I am concerned that current LLM AI is not thinking critically, just auto-completing in a way that looks like thinking, and (2) the current AI rollout is incentivised for market capture, not honest work.

Comment by AlexCoventry 1 day ago

I don't see the use. You can easily do everything shown in the Prism intro video with ChatGPT already. Is it meant to be an Overleaf killer?

Comment by mkl 14 hours ago

> Turn whiteboard equations or diagrams directly into LaTeX, saving hours of time manipulating graphics pixel-by-pixel

What a bizarre thing to say! I'm guessing it's slop. Makes it hard to trust anything the article claims.

Comment by BizarroLand 23 hours ago

https://en.wikipedia.org/wiki/A_Mind_Forever_Voyaging

In 2031, the United States of North America (USNA) faces severe economic decline, widespread youth suicide through addictive neural-stimulation devices known as Joybooths, and the threat of a new nuclear arms race involving miniature weapons, which risks transforming the country into a police state. Dr. Abraham Perelman has designed PRISM, the world's first sentient computer,[2] which has spent eleven real-world years (equivalent to twenty years subjectively) living in a highly realistic simulation as an ordinary human named Perry Simm, unaware of its artificial nature.

Comment by zb3 1 day ago

Is this the product where OpenAI will (soon) take profit share from inventions made there?

Comment by rcastellotti 12 hours ago

wow, this is useless!

Comment by pigeons 1 day ago

Naming things is hard.

Comment by egorfine 11 hours ago

> Chat with GPT‑5.2

> Draft and revise papers with the full document as context

> ...

And pay the finder's fee on every discovery worth pursuing.

Yeah, immediately fuck that.

Comment by preommr 1 day ago

Very underwhelming.

Was this not already possible in the web ui or through a vscode-like editor?

Comment by vicapow 1 day ago

Yes, but there's a really large number of users who don't want to have to set up VS Code, Git, TeX Live, and LaTeX Workshop just to collaborate on a paper. You shouldn't have to become a full-stack software engineer to be able to write a research paper in LaTeX.

Comment by divan 1 day ago

No Typst support?

Comment by i2km 17 hours ago

LaTeX was one of the last bastions against AI slop. Sadly it's now fallen too. Is there any standardised non-AI disclaimer format which is gaining use?

Comment by camillomiller 21 hours ago

Given what Prism was at the NSA, why the hell would any tech company greenlight this name?

Comment by random_duck 5 hours ago

"Science"

Comment by jackblemming 23 hours ago

There is zero chance this is worth billions of dollars, let alone the trillion$ OpenAI desperately needs. Why are they wasting time with this kind of stuff? Each of their employees needs to generate insane amounts of money to justify their salaries and equity, and I doubt this is it.

Comment by fuzzfactor 6 hours ago

Some employees are just worth having around whether or not they are directly engaged in making billions of dollars every single minute with every single task.

A good salesman could make money off of people who can do this. Even if this is free, they can always pull more than their weight with other efforts, and that can be in a more naturally lucrative niche.

Comment by soulofmischief 1 day ago

I understand the collaborative aspects, but I wonder how this is going to compare to my current workflow of just working with LaTeX files in my IDE and using whichever model provider I like. I already have a good workflow and modern models do just fine generating and previewing LaTeX with existing toolchains.

Of course, my scientific and mathematical research is done in isolation, so I'm not wanting much in the way of collaborative features. Still, I'm kind of interested to see how this shakes out; we're going to need to see OpenAI really step it up against Claude Opus, though, if they want to be a leader in this space.

Comment by AndrewKemendo 1 day ago

I genuinely don't see scientific journals and conferences lasting in this new world of autonomous agents, at least not in the same way that they used to.

As other top-level posters have indicated, the review portion of this is the limiting factor:

unless journal reviewers decide to use an entirely automated review process, they're not going to be able to keep up with what will increasingly be the most and best research coming out of any lab.

So whoever figures out the automated reviewer that can actually tell fact from fiction is going to win this game.

Over the longest period, I expect that's probably not going to be throwing more humans at the problem, but agreeing on some kind of constraint around autonomous reviewers.

If not that, then labs will also produce products, science will stop being done in public, and the only artifacts will be whatever is produced in the market.

Comment by f2fff 19 hours ago

"So whoever figures out the automated reviewer that can actually tell fact from fiction, is going to win this game."

Errr sure. Sounds easy when you write it down. I highly doubt such a thing will ever exist.

Comment by AndrewKemendo 8 hours ago

Who said it was easy?

Comment by idontknowmuch 22 hours ago

If you think these types of tools are going to be generating "the most and best research coming out of any lab", then I have to assume you aren't actively doing any sort of research.

LLMs are undeniably great for interactive discussion with content IF you actually are up-to-date with the historical context of a field, the current "state-of-the-art", and have, at least, a subjective opinion on the likely trajectories for future experimentation and innovation.

But, agents, at best, will just regurgitate ideas and experiments that have already been performed (by sampling from a model trained on most existing research literature), and, at worst, inundate the literature with slop that lacks relevant context, and, as a negative to LLMs, pollute future training data. As of now, I am leaning towards "worst" case.

And, just to help with the facts, your last comment is unfortunately quite inaccurate. Science is one of the best government investments: for every $1.00 given to the NIH in the US, an estimated $2.56 of economic activity is generated. Plus, science isn't merely a public venture. The large tech labs have huge R&D budgets because the output from research can lead to exponential returns on investment.

Comment by f2fff 19 hours ago

" then I have to assume you aren't actively doing any sort of research."

I would wager he's not - he seems to post with a lot of bluster and links to some paper he wrote (that nobody cares about).

Comment by hit8run 1 day ago

They are really desperate now, right?

Comment by kasane_teto 5 hours ago

Really desperate now.

Comment by lispisok 1 day ago

Way too much work having AI generate slop which gets dumped on a human reviewer to deal with. Maybe switch some of that effort into making better review tools.

Comment by jsrozner 1 day ago

AI: enshittifying everything you once cared about or relied upon

(re the decline of scientific integrity / signal-to-noise ratio in science)

Comment by shevy-java 1 day ago

"Accelerating science writing and collaboration with AI"

Uhm ... no.

I think we need to put an end to AI as it is currently used (not all of it but most of it).

Comment by drusepth 1 day ago

Does "as it is currently used" include what this apparently is (brainstorming, initial research, collaboration, text formatting, sharing ideas, etc)?

Comment by Jaxan 1 day ago

Yeah, there are already way more papers being published than we can reasonably read. Collaboration, ok, but we don’t need more writing.

Comment by f2fff 19 hours ago

It seems people don't understand the basics...

We don't need more stuff - we need more quality and less of the shit stuff.

I'm convinced many involved in the production of LLM models are far too deep in the rabbit hole and can't see straight.

Comment by geekamongus 21 hours ago

Fuck...there are already too many things called Prism.

Comment by mves 16 hours ago

Less thinking, reading, and reflection, and more spouting of text, yay! Just what we need.

Comment by lsh0 20 hours ago

... aaaand now it's JATS.

Comment by hahahahhaah 1 day ago

Bringing slop to science.

Comment by lifetimerubyist 23 hours ago

As if there wasn't enough AI slop in the scientific community already.

Comment by postalcoder 1 day ago

Very unfortunately named. OpenAI probably (and likely correctly) estimated that 13 years is enough time after the Snowden leaks to use "prism" for a product but, for me, the word is permanently tainted.

Comment by cheeseomlit 1 day ago

Anecdotally, I have mentioned PRISM to several non-techie friends over the years and none of them knew what I was talking about, they know 'Snowden' but not 'PRISM'. The amount of people who actually cared about the Snowden leaks is practically a rounding error

Comment by hedora 1 day ago

Given current events, I think you’ll find many more people care in 2026 than did in 2024.

(See also: today’s WhatsApp whistleblower lawsuit.)

Comment by giancarlostoro 1 day ago

Most people don't care about the details. Neither does the media. I've seen national scandals that the media pushed one way disproven during discovery in a legal trial. People only remember headlines, the retractions are never re-published or remembered.

Comment by blitzar 1 day ago

Guessing that AI came up with the name based on the description of the product.

Perhaps, like the original PRISM programme, behind the door is a massive data harvesting operation.

Comment by arthurcolle 1 day ago

This was my first thought as well. Prism is a cool name, but I'd never ever use it for a technical product after those leaks, ever.

Comment by vjk800 1 day ago

I'd think that most people in science would associate the name with an optical prism. A single large political event can't override an everyday physical phenomenon in my head.

Comment by seanhunter 1 day ago

Pretty much every company I’ve worked for in tech over my 25+ year career had a (different) system called prism.

Comment by no-dr-onboard 1 day ago

(plot twist: he works for NSA contractors)

Comment by seanhunter 16 hours ago

Hehe. You got me. Also “atlas” is another one. Pretty much everyone has a system somewhere called “atlas”.

Comment by kaonwarb 1 day ago

I suspect that name recognition for PRISM as a program is not high at the population level.

Comment by maqp 1 day ago

2027: OpenAI Skynet - "Robots help us everywhere, It's coming to your door"

Comment by willturman 1 day ago

Skynet? C'mon. That would be too obvious - like naming a company Palantir.

Comment by dylan604 1 day ago

Surprised they didn't do something trendy like Prizm or OpenPrism while keeping it closed source code.

Comment by songodongo 1 day ago

Or the JavaScript ORM.

Comment by moralestapia 1 day ago

I never thought of that association, not in the slightest, until I read this comment.

Comment by locusofself 1 day ago

this was my first thought as well.

Comment by wilg 1 day ago

I followed the Snowden stuff fairly closely and forgot, so I bet they didn't think about it at all and if they did they didn't care and that was surely the right call.

Comment by verdverm 1 day ago

I remember, something like a month ago, Altman tweeting that they were stopping all product work to focus on training. Was that written on water?

Seems like they have only announced products since, and no new model trained from scratch. Are they still having pre-training issues?
