Vercel April 2026 security incident

Posted by colesantiago 1 day ago

https://vercel.com/kb/bulletin/vercel-april-2026-security-in...

Comments

Comment by Vates 1 day ago

When one OAuth token can compromise dev tools, CI pipeline, secrets and deployment simultaneously, something architectural has gone wrong. Vercel have had React2Shell (CVSS 10), the middleware bypass (CVSS 9.1), and now this, all within 12 months.

At what point do we start asking questions about the concentration of trust in the web ecosystem?

It's funny that at the engineering level we are continuously grilled in interviews about the single responsibility principle, meanwhile the industry's business model is to undermine the entirety of web standards and consolidate the web stack into a CLI.

Comment by isodev 1 day ago

Coming from a company that makes infrastructure out of a view layer / vDOM library - I think anyone relying on Vercel has only themselves to blame.

Comment by intrasight 21 hours ago

That, and also the CEO posting a photo of himself posing with a war criminal. Using, let alone relying on, Vercel baffles me.

Comment by moralestapia 21 hours ago

Which one? There's so many these days I've lost track.

Comment by jongjong 21 hours ago

It feels like the political forces underpinning the software industry are coming to light but it seems like there are two opposing forces now instead of just one.

Comment by brianmcnulty 19 hours ago

It's interesting that Next is becoming so popular when LLMs supposedly have a capability to work with all these other frameworks that don't create a dependency on something like Vercel.

Comment by nnurmanov 1 day ago

You have no idea how indifferent security officers can be, even when you point out critical issues. The other day, we flagged that a customer’s database had users with excessive privileges. Their only question: “Can this be exploited from the outside?”

No, but most breaches today come from compromised internal accounts that are then used to break everything.

Comment by Foobar8568 1 day ago

“What's the problem with having local APIs connected over HTTP? We are within the enterprise network.”

And that's how I passed for an annoying "PM", with half of program management complaining that I was slowing things down, until six months later the head of risk management told them to get lost.

Comment by ethbr1 23 hours ago

> the head of risk management told them to get lost

That's why it's important to org-chart engineer for security, if a company is really serious.

Comment by james_marks 22 hours ago

The answer is Yes, this can be exploited from the outside by taking over dev machines and using their access.

If you answer No and complain that it’s not taken seriously, it’s at least in part because you didn’t show the risk clearly.

Comment by mar_makarov 3 hours ago

Maybe a dumb idea, but wouldn't some kind of one-time token access resolve this? A physical keycard would guarantee this couldn't happen at all, right?

Comment by anal_reactor 1 day ago

The problem with security is that often it's cheaper to deal with the bad outcome than to prevent it. Actually getting security right is very expensive because it requires virtually every engineer to have some security awareness, and engineers who can be trusted with that tend to be difficult to find. Meanwhile, if you have a security incident you say "sorry", maybe you pay a small fine, and a month later everyone has already moved on.

Comment by cogogo 1 day ago

This misalignment is especially bad at startups. In my experience security is only prioritized when driven by the customer and is largely a performative box checking exercise.

Comment by piyh 1 day ago

JavaScript living only as a built artifact in an s3 bucket makes for a much simpler life.

Comment by zbycz 22 hours ago

Until someone points a botnet at it and runs your S3 invoice up to $10k. Pay-per-usage is always a liability.

It is horrendous that AWS doesn't allow any usage limits.

Comment by neya 1 day ago

Polite reminder as to why Domain Driven Design is super-important. It makes more sense to spend 80% on DDD initially and then only 20% on the code (80-20 rule) vs the other way round. Or you will end up in a clusterfuck like this.

Comment by igleria 1 day ago

Domain Driven Design is something that I have only come to know with full understanding at my current job and oh my it is useful. It's not a silver bullet, but for complex domains it's a must.

Comment by lofaszvanitt 1 day ago

The whole hiring system needs to be eradicated. You get grilled by incompetents, who ask one question, never ask back when you provide something that is debatable, they give zero feedback and then you see what kind of errors these "elitist" engineers make. Burn it to the ground.

Comment by Neikius 1 day ago

The best hiring systems I saw were when the actual engineers hiring for their team were doing the bulk of it. You get a gauge of what you can expect, and they do too.

Comment by vicchenai 1 day ago

three critical vulns in 12 months is a pattern not a coincidence. the SRP point is sharp - we interview engineers on isolation principles then build platforms that are the opposite of that.

Comment by alfiedotwtf 20 hours ago

GitHub looking awful quiet in the corner of the room there

Comment by agent-kay 1 day ago

[flagged]

Comment by nikcub 1 day ago

Claude Code defaulting to a certain set of recommended providers[0] and frameworks is making the web more homogenous and that lack of diversity is increasing the blast radius of incidents

[0] https://amplifying.ai/research/claude-code-picks/report

Comment by operatingthetan 1 day ago

It's interesting how many of the low-effort vibecoded projects I see posted on reddit are on vercel. It's basically the default.

Comment by Aurornis 1 day ago

Reddit vibecoded LLM posts are kind of fascinating for how homogenous they are. The number of vibe coded half-finished projects posted to common subreddits daily is crazy high.

It’s interesting how they all use LLMs to write their Reddit posts, too. Some of them could have drawn in some people if they took 5 minutes to type an announcement post in their own words, but they all have the same LLM style announcement post, too. I wonder if they’re conversing with the LLM and it told them to post it to Reddit for traction?

Comment by derefr 1 day ago

I find that often the developers of these apps don't speak English, but want to target an English-speaking audience. For the marketing copy, they're using the LLM more to translate than to paraphrase, but the LLM ends up paraphrasing anyway.

Comment by ern 1 day ago

I think they simply haven't figured out that the barrier to entry is so low that no one really cares what their app can do, even if it does something genuinely useful.

Comment by thaumasiotes 1 day ago

> For the marketing copy, they're using the LLM more to translate than to paraphrase, but the LLM ends up paraphrasing anyway.

What do you see as the distinction between "translating" and "paraphrasing"? All translations are necessarily paraphrased.

Comment by d1sxeyes 1 day ago

While that’s true, translations often vary in terms of how faithful they are to the source vs how idiomatic they are in the target language. Take for example the French phrase “j’ai fait une nuit blanche”, which literally means “I did a white night”. Clearly that’s a bad translation. A more natural translation might be “I pulled an all-nighter”.

Similarly, “j’ai un chat dans la gorge” probably translates best as “I’ve got a frog in my throat”, even though it’s a completely different animal, it’s an obvious mapping.

Those are fairly simple because they have neat English translations, but what about for example “C’est pas tes oignons”, which literally means “these aren’t your onions”, but is really a way of telling someone it’s none of their business. You could translate it as “it’s none of your business”, or “keep your nose out” or “stay in your lane” or lots and lots of other versions, with varying levels of paraphrasing, which depend on context you can’t necessarily read purely from the words themselves.

Comment by thaumasiotes 22 hours ago

I'll preface this by noting that I don't disagree with anything you've said, but I do have some comments:

> Similarly, “j’ai un chat dans la gorge” probably translates best as “I’ve got a frog in my throat”, even though it’s a completely different animal, it’s an obvious mapping.

Those obvious mappings can sometimes be too seductive for the translator's good. One example is that people translating English-loanwords-in-a-foreign-language into English usually can't help but translate them as the original English word.

Another example is that, in China, there is a cultural concept of a 狐狸精, which you might translate as "fox spirit". (The "fox" part of the translation is straightforward, but 精 is a term for a supernatural phenomenon, and those are difficult to translate.) They can do all kinds of things, but one especially well-known behavior is that they may take the form of human women and seduce (actual) human men. This may or may not be harmful to the man.

Because of this concept, the word also has a sense in which it may be used to insult a (normal) woman, accusing her of using her sex appeal toward harmful ends.

Chinese people translating this into English almost always use the word "vixen", which is, to be fair, a word that may refer to a sexy human woman or to a female fox. But I really don't feel that they're equivalent, or even that they have much overlap. (Unlike the situation with English loanwords, I think native speakers of Chinese are much more likely to choose this translation than native speakers of English are.)

> what about for example “C’est pas tes oignons”, which literally means “these aren’t your onions”

The form closest in structure to that would probably be "none of your beeswax", which is just a minorly altered version of "none of your business". I assume the substitution of "beeswax" is humorous and based on phonetic similarity.

As you note, there are multiple dimensions relevant to translating this and several positions you could take along each. For this particular idea, I would say the two most important dimensions are playfulness and rudeness; it's a very common idea and the language is rich in options for both.

> translations often vary in terms of how faithful they are to the source vs how idiomatic they are in the target language. Take for example the French phrase “j’ai fait une nuit blanche”, which literally means “I did a white night”. Clearly that’s a bad translation. A more natural translation might be “I pulled an all-nighter”.

This isn't what I had in mind. Here are some idiomatic translations:

I pulled an all-nighter.

I was up all night.

I didn't get any sleep.

I never got to bed.

I've been up since [something appropriate to the context].

[Something appropriate to the context] kept me up all night.

I wouldn't call any of the first four "more paraphrased" than the others. (The last two might be, if they included extra information.) If these were reports of the English speech of some other person, one of them (or less) would be a quote, and the others would be paraphrases. But as a report of French speech, they're all paraphrases. The first shares a little more grammatical structure with the French, which doesn't really mean much.

For a fairly similar example from my personal life, someone said to me 这是我第一次听说, and my spontaneous translation of it was "I've never heard that before", despite the fact that there is technically a perfectly valid English expression "this is the first I've heard of that".

What's closer to the grammatical structure of the Chinese? That's hard for me to say. You could analyze 我 as the subject of 听说, and I lean toward that analysis, but my instincts for Mandarin are weak. You might see 我 as being more strongly attached to 第一次, meaning something more like "my first time (to hear ...)" than "I hear (for the first time) ...".

But for whatever it's worth, a word by word literal gloss would be "this is me first time hear".

Between languages with less historical interaction than English and French, it's quite possible that a syntax-preserving translation of some sentence just doesn't exist.

Comment by politelemon 1 day ago

They are not exclusive to reddit. HN has also been full of vibe submissions of the same nature.

Comment by cyral 1 day ago

It's insane how most of the dev subreddits are filled with slop like this. I've thought the same thing - why can't they even spend 5 minutes to write their own post about their project?

Comment by aquariusDue 1 day ago

Yeah, in the last 6 to 10 months /r/rust has become littered with this stuff. There's still some good discussion going on, but now I have to sort through garbage. The signal-to-noise ratio is so out of whack these days that I generally avoid platforms like Substack, Medium and so on too.

Comment by fantasizr 1 day ago

Next, Vercel, and Supabase are basically the foundation of every vibecoded project, by mere suggestion.

Comment by jongjong 1 day ago

If this kind of vulnerability exists at the platform level, imagine how vulnerable all the vibe-coded apps are to this kind of exploit.

I don't doubt the competence of the Vercel team actually and that's the point. Imagine if this happens to a top company which has their pick of the best engineers, on a global scale.

My experience with modern startups is that they're essentially all vulnerable to hacks. They just don't have the time to actually verify their infra.

Also, almost all apps are over-engineered. It's impossibly difficult to secure an app with hundreds of thousands of lines of code and 20 or so engineers working on the backend code in parallel.

Some people are like, "Why didn't they encrypt all this?" That's a naive way to think about it. The platform has to decrypt the tokens at some point in order to use them. The best we can do is store the tokens and roll them over frequently.
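
A minimal sketch of that point, using Node's built-in crypto (function names and the token value are invented for illustration): you can encrypt tokens at rest with AES-GCM, but the service still has to hold a key that decrypts them at use time, so encryption moves the problem rather than removing it, and rotation only limits the blast radius.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt an OAuth token at rest. The key still lives somewhere the
// service can read it -- encryption relocates the secret, it doesn't remove it.
function sealToken(token: string, key: Buffer): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12); // must be unique per encryption under GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(token, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

// To actually call the upstream API, the platform must decrypt.
function openToken(sealed: { iv: Buffer; tag: Buffer; data: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, sealed.iv);
  decipher.setAuthTag(sealed.tag);
  return Buffer.concat([decipher.update(sealed.data), decipher.final()]).toString("utf8");
}

const key = randomBytes(32); // in practice: a KMS-held key, rotated on a schedule
const sealed = sealToken("gho_exampletoken123", key);
console.log(openToken(sealed, key)); // prints "gho_exampletoken123"
```

Rolling the key or the tokens frequently doesn't change this shape; it just shortens the window in which a stolen ciphertext plus a compromised key is useful.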

If you make the authentication system too complex, with too many layers of defense, you create a situation where users will struggle to access their own accounts... And you only get marginal security benefits anyway. Some might argue the complexity creates other kinds of vulnerabilities.

Comment by fantasizr 21 hours ago

the vibe coders don't know what they don't know so whatever code is written on their behalf better be up to best practices (it isn't)

Comment by MrDarcy 1 day ago

They’re all shit too. All three decided to do custom auth instead of OIDC and it’s a nightmare to integrate with any of them.

Comment by 00deadbeef 1 day ago

Maybe that's why all these vibe coded slop apps also use Clerk for auth alongside Supabase etc

Comment by gbgarbeb 1 day ago

10 years ago it was Heroku and Three.js.

Comment by seattle_spring 1 day ago

10 years ago it was Heroku and Ruby on Rails*

Comment by dzonga 1 day ago

But now Ruby on Rails is not a circus the way Next.js is.

see [0]: Rails security Audit Report

[0]: https://ostif.org/ruby-on-rails-audit-complete/

Comment by bdcravens 1 day ago

More like 15. By 2016, Rails was supposedly dead and we were all going to be running the same code on the front end and back end in a full stack, MongoDB euphoria.

Comment by boringg 1 day ago

New one coming in 5 years. Cycle repeats itself.

Comment by guelo 1 day ago

I don't think so, AIs are going to freeze the tooling to what we have today since that's what's in the training corpus, and it's self reinforcing.

Comment by telotortium 1 day ago

Nah, the good LLMs can generally web search and read documentation well enough that the fact that pre-training isn’t up to the minute is not a serious concern. Badly-documented projects are more of a concern, but they weren’t likely to get much pre-AI usage either.

Comment by ern 1 day ago

I've done a ton of low-effort vibe-coded projects that suit my exact use cases. In many cases, I might do a quick Google search, not find an exact match, or find some bloated adware or subscription-ware and not bother going any further.

Claude Code can produce exactly what I want, quickly.

The difference is that I don't really share my projects. People who share them probably haven't realized that code has become cheap, and no one really needs/wants to see them since they can just roll their own.

Comment by lionkor 1 day ago

The kind of code, with the kind of quality, that LLMs can output has become cheap. Learning has not, and neither has genuinely well designed, human designed, code. This might be surprising to the majority of users on HN, but once a really good programmer joins your team, who is both really good, and also uses LLMs to speed up the parts that he or she isn't good at, you really learn how far away vibe coders are from producing something worth using.

Comment by michaelbuckbee 22 hours ago

There's a push and pull here: TypeScript + React + Vercel are also very amenable to LLM-driven development, due to a mix of the popularity of examples in the LLMs' training data, how cheap the deployment is, and how quick the ecosystem is to get going.

Comment by echelon 1 day ago

Another Anthropic revenue stream:

Protection money from Vercel.

"Pay us 10% of revenue or we switch to generating Netlify code."

Comment by JLO64 1 day ago

Wouldn’t Vercel still make money in that scenario since Netlify uses them?

Comment by slopinthebag 1 day ago

Netlify uses AWS (and Cloudflare? Vercel def uses Cloudflare)

Comment by serhalp 1 day ago

Netlify and Vercel both use AWS. AFAIK neither uses Cloudflare. Vercel did use Cloudflare for parts of its infra until about a year ago though.

Comment by slopinthebag 1 day ago

Ah, ok. I knew they did use Cloudflare but had no idea they migrated off of it.

Comment by brazukadev 13 hours ago

Cloudflare CEO treated Vercel CEO too badly in public, he needed to migrate off to save face.

Comment by arcfour 1 day ago

Vercel runs on AWS.

Comment by aitchnyu 1 day ago

Which PaaS are running on their own servers and earning a profit?

Comment by neilv 1 day ago

The other day, I was forcing myself to use Claude Code for a new CRUD React app[1], and by default it excreted a pile of Node JS and NPM dependencies.

So I told it something like, "don't use anything node at all", and it immediately rewrote it as a Python backend, and it volunteered that it was minimizing dependencies in how it did that.

[1] only vibe coding as an exercise for a throwaway artifact; I'm not endorsing vibe coding

Comment by BigTTYGothGF 1 day ago

> forcing myself to use Claude Code

You don't have to live like this.

Comment by neilv 1 day ago

Even though I'm a hardcore programmer and software engineer, I still need to at least keep aware of the latest vibe coding stuff, so I know what's good and bad about it.

Comment by t0mas88 1 day ago

You can tell Claude to use something highly structured like Spring Boot / Java. It's a bit more verbose in code, but the documentation is very good which makes Claude use it well. And the strict nature of Java is nice in keeping Claude on track and finding bugs early.

I've heard others had similar results with .NET/C#

Comment by lmm 1 day ago

Spring Boot is every bit as random mystery meat as Vercel or Rails. If you want explicit then use non-Boot Spring or even no Spring at all.

Comment by MrDarcy 1 day ago

Same for Go.

Comment by TeMPOraL 1 day ago

My vibe coded one-off app projects are all, by default, "self-contained single file static client side webapp, no build step, no React or other webshit nonsense" in their prompt. For more complex cases, I drop the "single file". Works like a charm.

Comment by desecratedbody 1 day ago

You wanted it to use React but not node? Am I missing something here?

Comment by jazzypants 21 hours ago

You can use React without Node by using a CDN. You can even use JSX if you use Babel in a script tag. It's just inefficient and stupid as hell.

Comment by siva7 1 day ago

I'm struggling to understand how they bought Bun but their own AI models are more fixated on writing Python for everything than even the models of their competitor who bought the actual Python ecosystem (OAI with uv).

Comment by echelon 1 day ago

It emits Actix and Axum extremely well with solid support for fully AOT type checked Sqlx.

Switch to vibe coding Rust backends and freeze your supply chain.

Super strong types. Immaculate error handling. Clear and easy to read code. Rock solid performance. Minimal dependencies.

Vibe code Rust for web work. You don't even need to know Rust. You'll osmose it over a few months using it. It's not hard at all. The "Rust is hard" memes are bullshit, and the "difficult to refactor" was (1) never true and (2) not even applicable with tools like Claude Code.

Edit: people hate this (-3), but it's where the alpha is. Don't blindly dismiss this. Serializing business logic to Rust is a smart move. The language is very clean, easy to read, handles errors in a first class fashion, and fast. If the code compiles, then 50% of your error classes are already dealt with.

Python, Typescript, and Go are less satisfactory on one or more of these dimensions. If you generate code, generate Rust.

Comment by neilv 1 day ago

How are you getting low dependencies for Web backend with Rust? (All my manually-written Rust programs that use crates at all end up pulling in a large pile of transitive dependencies.)

Comment by jazzypants 21 hours ago

Cargo is just as vulnerable as NPM. It's just a smaller, more difficult target.

Comment by slopinthebag 1 day ago

Ok I mean this is a little crazy, "minimal dependencies" and Rust? Brother I need dependencies to write async traits without tearing my hair out.

But you're also correct in that Rust is actually possible to write in a more high-level way, especially for web, where you have very little shared state, and the state that is shared can just be wrapped in Arc<> and put in the web framework's context. It's actually dead easy to spin up web services in Rust, and they have a great set of ORMs if that's your vibe too. Rust is expressive enough to make schema-as-code work well.

On the dependencies, if you're concerned about the possibility of future supply chain attacks (because Rust doesn't have a history like Node) you can vendor your deps and bypass future problems. `cargo vendor` and you're done, Node has no such ergonomic path to vendoring, which imo is a better solution than anything else besides maybe Go (another great option for web services!). Saying "don't use deps" doesn't work for any other language other than something like Go (and you can run `go vendor` as well).

But yeah, in today's economy where compute and especially memory is becoming more constrained thanks to AI, I really like the peace of mind knowing my unoptimised high level Rust web services run with minimal memory and compute requirements, and further optimisation doesn't require a rewrite to a different language.

Idk mate, I used to be a big Rust hater but once I gave the language a serious try I found it more pleasant to write compared to both TypeScript and Go. And it's very amenable to AI if that's your vibe(coding), since the static guarantees of the type system make it easier for AI to generate correct code, and the diagnostic messages allow it to reroute its course during the session.

Comment by OptionOfT 1 day ago

Except with using Rust like this you're using it like C#. You don't get to enjoy the type system to express your invariants.

Comment by Imustaskforhelp 1 day ago

> Python

I once made a Golang multi-person pomodoro app by vibe coding with Gemini 3.1 Pro (on the day it first launched), asked it to have only one outside dependency, Gorilla WebSocket, with everything else from the standard library, and then deployed it to Hugging Face Spaces for free.

I definitely recommend Golang as a language if you wish to vibe code. Some people recommend Rust, but Golang compiles fast, cross-compiles to portable binaries, and is really awesome with its standard library.

(Anecdotally, I also feel there's some chance the models are being diluted: this app has become my benchmark test, and other models have performed somewhat worse on it, or at least not the same. I've been using Hacker News less frequently the past few days, but I was already seeing suspicions like these about Claude and other models on the front page. I don't know enough about Claude Opus 4.7 beyond Simon's comment on it, so it would be cool if someone could give me the gist of what has been happening.)

Comment by nightski 1 day ago

It's a good point, but I don't think the problem here is Claude. It's how you use it. We need to be guiding developers to not let Claude make decisions for them. It can help guide decisions, but ultimately one must perform the critical thinking to make sure it is the right choice. This is no different than working with any other teammate for that matter.

Comment by gommm 1 day ago

That's not helped by a recent change to their system prompt "acting_vs_clarifying":

> When a request leaves minor details unspecified, the person typically wants Claude to make a reasonable attempt now, not to be interviewed first. Claude only asks upfront when the request is genuinely unanswerable without the missing information (e.g., it references an attachment that isn’t there).

> When a tool is available that could resolve the ambiguity or supply the missing information — searching, looking up the person’s location, checking a calendar, discovering available capabilities — Claude calls the tool to try and solve the ambiguity before asking the person. Acting with tools is preferred over asking the person to do the lookup themselves.

> Once Claude starts on a task, Claude sees it through to a complete answer rather than stopping partway. [...]

In my experience before this change, Claude would stop, give me a few options, and 70% of the time I would give it an unlisted option that was better. It would genuinely identify parts of the spec that were ambiguous and needed to be better defined. With the new change, Claude plows ahead making a stupid decision, and the result is much worse for it.

Comment by dennisy 1 day ago

I think most people would agree.

However, it is less clear how to do this; people mostly take the easiest path.

Comment by fintler 1 day ago

It's an Eternal September moment.

https://en.wikipedia.org/wiki/Eternal_September

Comment by userbinator 1 day ago

Eternal Sloptember

Comment by operatingthetan 1 day ago

I guess engineers can differentiate their vibecoded projects by selecting an eccentric stack.

Comment by alex7o 1 day ago

Choosing an eccentric stack even makes the LLMs do better. Like Effect.ts or Elixir.

Comment by rpcope1 1 day ago

I actually noticed the same. Having it work on Mithril.js instead of React seems (I know it's all just kind of hearsay) to generate a lot cleaner code. Maybe it's just because I know and like Mithril better, but it's also likely because of the project ethos and its being used by people who really want to use Mithril in the wild. I've seen the same for other slightly more exotic stacks, like Bottle vs Flask, and telling it to generate Scala or Erlang.

Comment by fragmede 1 day ago

That makes sense. There's less training data, but it is better training data. LLMs were trained on really bad pandas code, so they're really, really good at generating bad pandas. Elixir, there's less of it, but what there is, is higher quality, so what it outputs is of higher quality too.

Comment by gommm 1 day ago

That's been my experience as well. Claude Code does better with Elixir (plus I enjoy working on the code more afterwards :) )

Comment by egeozcan 1 day ago

> a. Actually do something sane but it will eat your session

> b. (Recommended) Do something that works now, you can always make it better later

Comment by duped 1 day ago

No, the problem is the people building and selling these tools. They are marketed as a way of outsourcing thinking.

Comment by dennisy 1 day ago

So what are you suggesting, that we not allow companies to sell such tools?

Comment by duped 1 day ago

I'm suggesting people shouldn't lie to sell things because their customers will believe them and this causes measurable harm to society.

Comment by liveoneggs 1 day ago

AI does outsource thinking. It is not a lie.

Comment by hansmayer 1 day ago

If you don't tend to think much in the first place or have low expectations, then yes

Comment by duped 1 day ago

I think if you believe that, you're either lying or experiencing psychosis. LLMs are the greatest innovation in information retrieval since PageRank, but they are not capable of thought any more than PageRank is.

Comment by pastel8739 1 day ago

Shouldn’t Claude just refuse to make decisions, then, if it is problematic for it to do so? We’re talking about a trillion dollar company here, not a new grad with stars in their eyes

Comment by lionkor 1 day ago

It's just an LLM.

Comment by neal_jones 1 day ago

The thing I can’t stop thinking about is that Ai is accelerating convergence to the mean (I may be misusing that)

The internet does that but it feels different with this

Comment by themafia 1 day ago

> convergence to the mean

That's a funny way of saying "race to the bottom."

> The internet does that but it feels different with this

How does "the internet do that?" What force on the internet naturally brings about mediocrity? Or have we confused rapacious and monopolistic corporations with the internet at large?

Comment by walthamstow 1 day ago

I'd call it race to the median, converging to mediocrity, or what the kids would call "mid"

Comment by slashdave 1 day ago

> How does "the internet do that?"

Stack exchange. Google.

Comment by themafia 1 day ago

Please explain how these cause a "convergence to the mean."

Comment by antonvs 1 day ago

I assume they’re saying that the most common and popular solutions propagate, power-law style. LLMs just amplify that loop.

Comment by mentalgear 1 day ago

Indeed 'race to the bottom' seems more like capitalism in general.

Comment by neither_color 23 hours ago

This is why I'm glad I learned to code before vibecoding. I tell Codex exactly which tools and platforms to use instead of letting it default to whatever is most popular, and I guard my .env and API keys carefully. I still build things page by page or feature by feature instead of attempting to one-shot everything. This should be vibe-coding 101.

Comment by ethbr1 23 hours ago

$ Good idea! Let's add a Redis cache to that!

Comment by deaux 1 day ago

That report greatly overrates the tendency to default to Vercel for web, because of its 2 web projects it mandated that one use Next.js and that the other be a React SPA. Obviously those prime Claude towards Vercel. They should've had the second project be a non-React web project for diversity.

Comment by betocmn 1 day ago

Yeah, I’ve been tracking what devtools different models choose: https://preseason.ai

Comment by lmm 1 day ago

Is that bad? I would think having everyone on the same handful of platforms should make securing them easier (and means those platforms have more budget to do so), and with fewer but bigger incidents there's a safety-of-the-herd aspect - you're unlikely to be the juiciest target on Vercel during the vulnerability window, whereas if the world is scattered across dozens or hundreds of providers that's less so.

Comment by leduyquang753 1 day ago

When everyone uses the same handful of platforms, then everyone becomes the indirect target and victim of those big incidents. The recent AWS and Cloudflare outages are vivid examples. And then the owners of those platforms target everyone with their enshittification as well to milk more and more money.

Comment by elric 1 day ago

Interestingly, a recent conversation [1] between Hank Green and security researcher Sherri Davidoff argued the opposite: more GenAI-generated code targeted at specific audiences should result in a more resilient ecosystem because of greater diversity. That obviously can't work if they end up using the same 3 frameworks in every application.

[1] https://www.youtube.com/watch?v=V6pgZKVcKpw

Comment by habinero 1 day ago

I love Hank, but he has such a weird EA-shaped blind spot when it comes to AI. idgi

It is true that "more diversity in code" probably means less turnkey spray-and-pray compromises, sure. Probably.

It also means that the models themselves become targets. If your models start building the same generated code with the same vulnerability, how're you gonna patch that?

Comment by kay_o 1 day ago

> start building the same generated code with the same vulnerability

This situation is pretty funny to me. Some of my friends who aren't technical tried vibe coding, showed me what they built, and asked for feedback.

I noticed they were using Supabase by default, and pointed out that their database was completely open with no RLS.

So I told them not to use Supabase in that way, and they asked the AI (various different LLMs) to fix it. One example prompt I saw was: "please remove Supabase because of the insecure data access and make a proper secure way."

Keep in mind, these people don't have a technical background and do not know what Supabase or Node or Python is. They let the LLM install Docker, install Node, etc., and just hit approve on "Do you want to continue? bash(brew install ..)"

What's interesting is that this happened multiple times with different AI models. Instead of fixing the problem the way a developer normally would (moving the database logic to the server, or creating proper API endpoints), it tried to recreate an emulation of Supabase, specifically PostgREST, in a much worse and less secure way.

The result was an API endpoint that looked like: /api/query?q=SELECT * FROM table WHERE x

In one example, GLM later bolted on a huge "security" regular expression that blocked admin, updateadmin, ^delete*, lol.
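
The failure mode described above, a raw-SQL query parameter guarded by a bolted-on regex blocklist, is easy to demonstrate. A minimal sketch, with a hypothetical filter loosely modeled on the one described; none of this is the actual generated code:

```python
import re

# Hypothetical blocklist of the kind an LLM might bolt onto a
# /api/query?q=... endpoint: reject queries whose first keyword
# looks like a write. The patterns are illustrative only.
BLOCKLIST = re.compile(r"^\s*(delete|drop|update|insert)\b", re.IGNORECASE)

def is_allowed(sql: str) -> bool:
    """Return True if the blocklist does not match the query."""
    return BLOCKLIST.search(sql) is None

# Catches the obvious case...
print(is_allowed("DELETE FROM users"))  # False
# ...but a blocklist can't enumerate every way to express a write:
print(is_allowed("WITH d AS (DELETE FROM users RETURNING *) SELECT * FROM d"))  # True
# ...and it does nothing about reads, which is the actual data leak:
print(is_allowed("SELECT password_hash FROM users"))  # True
```

The only real fix is the one the models kept avoiding: parameterized queries behind purpose-built API endpoints, with authorization enforced server-side.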

Comment by sans_souse 1 day ago

As a general hobbyist type, I can attest to the above post: it is 100% valid and accurate.

This entire process is something anyone can test and reproduce; I was definitely steered towards both Vercel and Supabase by Gemini. It isn't model-specific.

Comment by habinero 1 day ago

> The result was an API endpoint that looked like: /api/query?q=SELECT * FROM table WHERE x

Ahhhhhhhgh. If I ever make that cybersecurity house of horrors, that's going in it

Comment by slashdave 1 day ago

I'm not against making agents scapegoats, but this is a problem found among humans as well.

Comment by jongjong 1 day ago

Yes, this is a genuine problem with AI platforms. It does sometimes feel like they're suspiciously over-promoting certain solutions, to the point that it's arguably against the AI platform's own interest.

I know what it's like being on the opposite side of this, as I maintain an open source project which I started almost 15 years ago and which has over 6k GitHub stars. It's been thoroughly battle-tested over long periods of time at scale with a variety of projects; but even if I use exact sentences from the website documentation in my AI prompt (e.g. to Claude), my project will not surface! I have to mention my project directly by name, and then it starts praising it and its architecture, saying that it meets all the specific requirements I had mentioned earlier. Then I ask the AI why it didn't mention my project before if it's such a good fit. Then it hints at the number of mentions in its training data.

It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.

I feel like some companies have been paying people to upvote/like certain answers in AI-responses with the intent that those upvotes/likes would lead to inclusion in the training set for the next cutting-edge model.

It's a hard problem to solve. I hope Anthropic finds a solution because they have a great product and it would be a shame for it to devolve into a free advertising tool for select few tech platforms. Their users (myself included) pay them good money and so they have no reason to pander to vested interests other than their own and that of their customers.

Comment by lelanthran 1 day ago

> It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.

That's literally what "weight" means - not all dependencies have the same %-multiplier on getting mentioned. Some have a larger multiplier and some have a smaller (or no) multiplier. That multiplier is literally a weight.

Comment by mvkel 1 day ago

That's only looking at half of the equation.

That lack of diversity also makes patches more universal, and the surface area more limited.

Comment by btown 1 day ago

"Nobody ever got fired for putting their band page on MySpace."

Comment by stefan_ 1 day ago

It's so trivial to seed. LLMs are basically the idiots that have fallen for all the SEO slop on Google. Did some travel planning earlier and it was telling me all about extra insurances I need and why my normal insurance doesn't cover X or Y (it does of course).

Comment by andersmurphy 1 day ago

That's the irony of Mythos. It doesn't need to exist. LLM vibe slop has already eroded the security of your average site.

Comment by egeozcan 1 day ago

Self fulfilling prophecy: You don't need to secure anything because it doesn't make a difference, as Mythos is not just a delicious Greek beer, but also a super-intelligent system that will penetrate any of your cyber-defenses anyway.

Comment by andersmurphy 1 day ago

In some ways Mythos (like many AI things) can be used as the ultimate accountability sink.

These libraries/frameworks are not insecure because of bad design and dependency bloat. No! It's because a mythical LLM is so powerful that it's impossible to defend against! There was nothing that could be done.

Comment by antonvs 1 day ago

Mythos is the new DDoS or “state-level actors”.

Comment by Something1234 1 day ago

Explain more about this beer.

Comment by egeozcan 1 day ago

https://en.wikipedia.org/wiki/Mythos_Beer

I really like it. Recommended.

Comment by wonnage 1 day ago

Conspiracy theory: they intentionally seeded the world with millions of slop PRs and now they’re “catching bugs” with Mythos

Comment by nettlin 1 day ago

They just added more details:

> Indicators of compromise (IOCs)

> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.

> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.

> OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

https://vercel.com/kb/bulletin/vercel-april-2026-security-in...
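
Since the published IOC is just a client-ID string, checking for it doesn't require any particular log schema; a format-agnostic scan over whatever audit export you have is enough to triage. A sketch (the sample log lines and their field names are made up):

```python
# Scan exported audit-log lines for the OAuth client ID Vercel
# published as an IOC. A plain substring match works on any text
# export, whatever its schema.
IOC_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
    ".apps.googleusercontent.com"
)

def find_ioc_hits(lines):
    """Return (line_number, line) pairs mentioning the IOC client ID."""
    return [(n, line) for n, line in enumerate(lines, start=1)
            if IOC_CLIENT_ID in line]

# Hypothetical export for illustration; the second line is a hit.
sample_log = [
    '{"actor": "alice@example.com", "client_id": "some-other-app"}',
    '{"actor": "bob@example.com", "client_id": "'
    + IOC_CLIENT_ID + '", "event": "authorize"}',
]
print(find_ioc_hits(sample_log))  # one hit, on line 2
```

Any hit should then be followed up in the Google Workspace admin console, where administrators can review and revoke a user's third-party app grants.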

Comment by ryanscio 1 day ago

https://x.com/rauchg/status/2045995362499076169

> A Vercel employee got compromised via the breach of an AI platform customer called http://Context.ai that he was using.

> Through a series of maneuvers that escalated from our colleague’s compromised Vercel Google Workspace account, the attacker got further access to Vercel environments.

> We do have a capability however to designate environment variables as “non-sensitive”. Unfortunately, the attacker got further access through their enumeration.

> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.

Still no email blast from Vercel alerting users, which is concerning.

Comment by _pdp_ 1 day ago

> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.

Blame it on AI ... trust me... it would have never happened if it wasn't for AI.

Comment by gherkinnn 1 day ago

> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI.

Reads like the script of a hacker scene in CSI. "Quick, their mainframe is adapting faster than I can hack it. They must have a backdoor using AI gifs. Bleep bleep".

Comment by cowsup 1 day ago

> Still no email blast from Vercel alerting users, which is concerning.

On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams.

But on the other hand... It's Sunday. Unless you're tuned-in to social media over the weekend, your main provider could be undergoing a meltdown while you are completely unaware. Many higher-up folks check company email over the weekend, but if they're traveling or relaxing, social media might be the furthest thing from their mind. It really bites that this is the only way to get critical information.

Comment by gk1 1 day ago

> On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams

This is not how things work. In a crisis like this there is a war room with all stakeholders present. Doesn’t matter if it’s Sunday or 3am or Christmas.

And for this company specifically, Guillermo is not one to defer to comms or legal.

Comment by brobdingnagians 1 day ago

If he's not one to defer to comms or legal, maybe this one is so bad that he's acting differently than he normally would.

Comment by huflungdung 1 day ago

[dead]

Comment by loloquwowndueo 1 day ago

> the CEO can't just write a mass email without approval from legal or other comms teams.

They can be brought in to do their job on a Sunday for an event of this relevance. They can always take next Friday off or something.

Comment by eclipticplane 1 day ago

Has anyone actually gotten an email from Vercel confirming their secrets were accessed? Right now we're all operating under the hope (?) that since we haven't (yet?) gotten an email, we're not completely hosed.

Comment by loloquwowndueo 1 day ago

Hope-based security should not be a thing. Did you rotate your secrets? Did you audit your platform for weird access patterns? Don’t sit waiting for that vercel email.

Comment by eclipticplane 1 day ago

Of course rotated. But we don't even know when the secrets were stolen vs we were told, so we're missing a ton of info needed to _fully_ triage.

Comment by lelanthran 1 day ago

> Did you rotate your secrets?

Most secrets are under your control, so sure, go ahead and rotate them, allowing the old version to continue being used in parallel with the new version for 30 minutes or so.

For other secrets, rotation involves getting a new secret from some upstream provider and having some services (users of that secret) fail while the secret they have in cache expires.

For example, if your secret is a Stripe key; generating a new key should invalidate the old one (not too sure, I don't use Stripe), at which point the services with the cached secret will fail until the expiry.
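
The overlap-window approach for secrets you control can be sketched like this (a toy verifier, not any vendor's API):

```python
import hmac
import time

class RotatableSecret:
    """Accept the old secret for a grace window after rotation,
    so services holding a cached copy don't fail mid-rollover."""

    def __init__(self, secret: str):
        self._current = secret
        self._previous = None
        self._previous_expiry = 0.0

    def rotate(self, new_secret: str, grace_seconds: float = 30 * 60) -> None:
        # Keep the old value valid for ~30 minutes by default.
        self._previous = self._current
        self._previous_expiry = time.monotonic() + grace_seconds
        self._current = new_secret

    def verify(self, candidate: str) -> bool:
        # compare_digest gives constant-time comparison.
        if hmac.compare_digest(candidate, self._current):
            return True
        return (self._previous is not None
                and time.monotonic() < self._previous_expiry
                and hmac.compare_digest(candidate, self._previous))

s = RotatableSecret("old-key")
s.rotate("new-key")
print(s.verify("new-key"), s.verify("old-key"))  # True True
```

For upstream-issued secrets (the Stripe case), this only works if the provider itself supports an overlap, e.g. by issuing a second key before the first is revoked.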

Comment by ItsClo688 1 day ago

Nope... I feel you. "Hope-based security" is exactly what Vercel is forcing on its users right now by prioritizing social media over direct notification.

If the attacker is moving with "surprising velocity," every hour of delay on an email blast is another hour the attacker has to use those potentially stolen secrets against downstream infrastructure. Using Twitter/X as the primary disclosure channel for a "sophisticated" breach is amateur hour. If legal is the bottleneck for a mass email during an active compromise, then your incident response plan is fundamentally broken.

Comment by steve1977 1 day ago

> the CEO can't just write a mass email without approval from legal or other comms teams

Wouldn't the CEO be... you know... the chief executive?

Comment by hvb2 1 day ago

Sure, and the reason he is is because he DOES check stuff like this before sending it out.

Top leaders excel because they assemble a team around them they trust. You can't do everything yourself, you need to delegate. And having people in those positions also means you shouldn't be acting alone or those people will not stick around

Comment by steve1977 1 day ago

I disagree. In a crisis, a leader should take the lead and make decisions. If he/she is not able to that on their own, they are in the wrong place.

Now I will agree that there are many executives like the ones you describe. But they are not top leaders.

Comment by scott_w 1 day ago

So you’re telling me a CEO must also be a practicing lawyer? Because any other option is how you guarantee your company gets sued into oblivion.

Comment by steve1977 1 day ago

First of all, I would expect a top leader to be prepared for scenarios like this (including templates of customer communication).

And yeah, I would expect a CEO to have enough legal knowledge to handle such a situation (customer communication) on his own.

But I also have to mention that I'm not in the US. Not every country has the litigation system of the US, where you can basically destroy a company because you as the customer are too dumb not to spill hot coffee over yourself.

Comment by arvyy 1 day ago

> you as the customer are too dumb to not spill hot coffee over yourself

presuming you're referring to the hot coffee lawsuit, maybe read details of the story. McDonalds wasn't at all blameless, and the plaintiff had reasonable demands

Comment by scott_w 1 day ago

You expect the CEO of a company to have the legal depth of knowledge AND knowledge of all their customers, contracts and SLAs to be able to wing a communication and not somehow trip over all of that? They should also understand every possible legal jurisdiction that could be affected? You realise even the head of their legal department (a HIGHLY competent lawyer) likely wouldn't say they could do that without speaking to the key people in their team?

Should the CEO also bang out some dev estimates for the roadmap because, hey, they should be competent enough to do something like that. Why not submit the accounts for the year? How hard can it be, just reading a few lines off their Sage or Quickbooks accounts?

Comment by scott_w 1 day ago

Let me be more clear on what I mean by “wing it,” because “having templates” doesn’t really cut it. Anyone can bang out a “we have a problem” template, so why does the CEO need to attach their name to it? Once you’re at the point of needing a CEO to communicate, you have a specific problem, with its own specific impacts that a single person can not be expected to have enough depth of knowledge in their brain to actually talk about without involving their domain experts, including legal, technical, whatever the situation needs.

Comment by Orygin 22 hours ago

> can not be expected to have enough depth of knowledge in their brain to actually talk about

What is the use of a CEO if not to have enough depth of knowledge about the different aspects of running a business?

Like what? Poor little CEO that doesn't understand anything about the world and how to run a company. Seems like helplessness is expected at every stage.

Comment by scott_w 22 hours ago

> What is the use of a CEO if not to have enough depth of knowledge about the different aspects of running a business?

Bit of a difference between “having depth of knowledge in their business” and “can speak off-the-cuff with the necessary accuracy to remain in compliance with every contract and legal jurisdiction their organisation is engaged in, without consulting the numerous domain experts they employ for just this purpose,” isn’t there.

Also, such a situation that requires the CEO’s direct attention has already gone FAR beyond your standard incidents where you can throw out a pre written statement. Do you want your organisation just cuffing it from the top down? Are you Elon Musk in disguise?

Comment by Orygin 22 hours ago

What use is a CEO if they can't take the lead in times like this?

If they are unprepared frankly they suck as CEO and should be thrown out. If only competency was a requirement for these jobs...

Comment by scott_w 21 hours ago

That’s not what I said though, is it?

Comment by refulgentis 1 day ago

I'm going down with the ship over on X.com, the Everything App. There's a passel of very important tech people running some playbook where posting to X.com is considered sufficient to be unimpeachable on communication, despite its rather beleaguered state and traffic.

Comment by nnurmanov 1 day ago

Usually, companies have procedures for such events. But most do not.

Comment by gnabgib 1 day ago

Usually have procedures, but most don't? Say again

Comment by wombatpm 1 day ago

The disaster plan says there is a process, but it has never been used and is probably outdated. Chances are the social media strategy requires posting on the Facebook and updating key Circles on Google+

Comment by ptx 1 day ago

> an AI platform customer called http://Context.ai that he was using

Hmm? Who is the customer in this relationship? Is Vercel using a service provided by Context.ai which is hosted on Vercel?

Comment by pier25 20 hours ago

Surprising velocity? It appears the hackers had the oauth key for a month.

Comment by UltraSane 1 day ago

The production network control plane must be completely isolated from the internet, with a separate computer for each network. The design I like best: admins have dedicated admin workstations that only ever connect to the admin network, separate corporate workstations for everything else, and internet access only ever happens from ephemeral VMs reached via RDP or a similar protocol.

Comment by loloquwowndueo 1 day ago

The actual app name would be good to have. Understandable they don’t want to throw them under the bus but it’s just delaying taking action by not revealing what app/service this was.

Comment by progbits 1 day ago

I was trying to look it up (basically https://developers.google.com/identity/protocols/oauth2/java... -- the consent screen shows the app name) but it now says "Error 401: invalid_client; The OAuth client was not found." so it was probably deleted by the oauth client owner.

Comment by tom1337 1 day ago

Comment by loloquwowndueo 1 day ago

Makes it even more relevant to have the actual app or vendor name - who’s to say they just removed it to save face and won’t add it later?

Comment by pottertheotter 1 day ago

Comment by tom1337 1 day ago

[dead]

Comment by cebert 1 day ago

I don’t understand why they can’t just directly name the responsible app as it will come out eventually.

Comment by pottertheotter 1 day ago

Comment by sroussey 1 day ago

Which itself was the subject of a broader compromise, as far as I can tell.

Comment by SaltyBackendGuy 1 day ago

Maybe legal red tape?

Comment by brookst 1 day ago

Yes. The OAuth ID is indisputable, and it seems to be context.ai. But suppose it was a fake context.ai that the employee was tricked into using. Or… or…

Better to report 100% known things quickly. People can figure it out with near zero effort, and it reduces one tiny bit of potential liability in the ops shitstorm they’re going through.

Comment by mcdow 1 day ago

They might be buying time to sell the relevant stock

Comment by newdee 1 day ago

It looks like the app has already been deleted

Comment by slopinthebag 1 day ago

Idk exactly how to articulate my thoughts here, perhaps someone can chime in and help.

This feels like a natural consequence of the direction web development has been going for the last decade, where it's normalised to wire up many third party solutions together rather than building from more stable foundations. So many moving parts, so many potential points of failure, and as this incident has shown, you are only as secure as your weakest link. Putting your business in the hands of a third party AI tool (which is surely vibe-coded) carries risks.

Is this the direction we want to continue in? Is it really necessary? How much more complex do things need to be before we course-correct?

Comment by lijok 1 day ago

This isn't a web development concept. It's the unix philosophy of "write programs that do one thing and do it well" and interconnect them, being taken to the extremes that were never intended.

We need a different hosting model.

Comment by pianopatrick 1 day ago

Just throwing it out there - the Unix way to write software is often revered. But ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.

Instead of "programs that do one thing and do it well", "write programs which are designed to be used together" and "write programs to handle text streams", I might go with a foundational philosophy like "write programs that do not trust the user or the admin", because in applications connected to the internet, both groups often make mistakes or are malicious. Also something like "write programs that are strict about which inputs they accept", because a lot of input is malicious.

Comment by mpyne 1 day ago

The Unix model wasn't simply do one thing and do it well.

It was also a different model on ownership and vetting of those focused tools. It might have been a model of having the single source tree of an old UNIX or BSD, where everything was managed as a coherent whole from grep to cc all the way to X11. Or it might have been the Linux distribution model of having dedicated packagers do the vetting to piecemeal packages into more of a bazaar, even going so far as to rip scripting language bundles into their component pieces as for Python and Perl.

But in both of those models you were put farther away from the third-party authors bringing software into the open-source (and proprietary) supply chains.

This led to a host of issues with getting new software to users and with a fractal explosion of different versions of software dependencies to potentially have to work around, which is one reason we saw the explosion of NPM and Cargo and the like. Especially once Docker made it easy to go straight from stitching an app together with NPM on your local dev seat to getting it deployed to prod.

But the issue isn't with focused tooling as much as it is with hewing more closely to the upstream who could potentially be subverted in a supply chain attack.

After all, it's not as if people never tried to do this with Linux distros (or even the Linux kernel itself -- see for instance https://linux.slashdot.org/story/03/11/06/058249/linux-kerne... ). But the inherent delay and indirection in that model helped make it less of a serious risk.

But even if you only use 1 NPM package instead of 100, if it's a big enough package you can assume it's going to be a large target for attacks.

Comment by lelanthran 1 day ago

> Just throwing it out there - the Unix way to write software is often revered. But ideas about how to write software that came from the 1970s at Bell Labs might not be the best ideas for writing software for the modern web.

GP said it's about taking the Unix philosophy to extremes, you say something different.

Anything taken to extremes is bad; the key word there is "extremes". There is nothing wrong with the Unix philosophy, as "do one thing and do it well" never meant "thousands of dependencies over which you have no control, pulled in without review or thought".

Comment by uecker 1 day ago

I do not see what this has to do with Unix. The problem is not that programs interoperate or handle text streams, the problem is a) the supply chain issues in modern web-software (and thanks to Rust now system-level) development and b) that web applications do not run under user permissions but work for the user using token-based authentication schemes.

Comment by steve1977 1 day ago

I guess we failed at the "do it well" step.

Comment by esseph 1 day ago

> We need a different hosting model.

There really isn't a third option here, IMO.

1. Somebody does it

2. You do it

Much happier doing it myself tbh.

Comment by fragmede 1 day ago

There's a lot of wiggle room on how you define "it". At the ends of the spectrum it's obvious, but in the middle it gets a bit sticky.

Comment by 0xbadcafebee 1 day ago

It's not a hosting model, it's a fundamental failure of software design and systems engineering/architecture.

Imagine if cars were developed like websites, with your brakes depending on a live connection to a 3rd party plugin on a website. Insanity, right? But not for web businesses people depend on for privacy, security, finances, transportation, healthcare, etc.

When the company's brakes go out today, we all just shrug, watch the car crash, then pick up the pieces and continue like it's normal. I have yet to hear a single CEO issue an ultimatum that the OWASP Top 10 (just an example) will be prevented by X date. Because they don't really care. They'll only lose a few customers and everyone else will shrug and keep using them. If we vote with our dollars, we've voted to let it continue.

Comment by slopinthebag 1 day ago

In my mind the unix philosophy leads to running your cloud on your own hardware or VPS's, not this.

Comment by bdangubic 1 day ago

Exactly this: "write", not "use some sh*t written by some dude from Akron, OH two years ago".

Comment by arcfour 1 day ago

That's why I wrote my own compiler and coreutils. Can't trust some shit written by GNU developers 30 years ago.

And my own kernel. Can't trust some shit written by a Finnish dude 30 years ago.

And my own UEFI firmware. Definitely can't trust some shit written by my hardware vendor ever.

Comment by slopinthebag 1 day ago

Yeah, definitely no difference between GNU coreutils and some vibe-coded AI tool released last month that wants full OAuth permissions.

Comment by eddythompson80 1 day ago

I’m not joking, but weirdly enough, that’s what most AI arguments boil down to. Show me what the difference is while I pull up the endless CVE list of whichever coreutils package you had in mind. It’s a frustrating argument because you know that authors of coreutils-like packages had intentionality in their work, while an LLM has no such thing. Yet in the end, security vulnerabilities are abundant in both.

The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security review it, then vibe code the security fixes, then ask the LLM to review the fixes and app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc. stick the LLM in endless loops for every vertical you care about.

Pointing to failed experiments like the browser or compiler ones somehow doesn't seem to deter AI maximalists. They would simply claim they needed better models/skills/harness/tools/etc. The goalposts are always one foot away.

Comment by uecker 1 day ago

"Endless CVE list" seems rather exaggerated for coreutils. There have been only a few CVEs in the last decade, and most seem rather harmless.

Comment by rzzzt 1 day ago

Now I'd genuinely like to know whether "yes" had a CVE assigned, not sure how to search for it though...

Comment by arcfour 1 day ago

I wouldn't describe myself as an AI maximalist at all. I just don't believe the false dichotomy of you either produce "vulnerable vibe coded AI slop running on a managed service" or "pure handcrafted code running on a self hosted service."

You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between.

And the comment I was replying to said something about not trusting something written in Akron, OH 2 years ago, which makes no sense and is barely an argument, and I was mostly pointing out how silly that comment sounds.

Comment by eddythompson80 1 day ago

I used to believe that too, yet the dichotomy is what’s being pushed by what I called an “AI maximalist” and it’s what I was pushing against.

There is no “I wrote this code with some AI assistance” when you’re sending a 2k-line PR after 8 minutes of me giving you permission on the repo. That’s the type of shit I’m dealing with, and management is ecstatic at the pace and progress, and the person just looks at you and says “anything in particular that’s wrong or needs changing? I’m just asking for a review and feedback”.

Comment by slopinthebag 1 day ago

It's such a bad faith argument, they basically make false equivalencies with LLMs and other software. Same with the "AI is just a higher level compiler" argument. The "just" is doing a ton of heavy lifting in those arguments.

Regarding the unix philosophy argument, comparing it to AI tools just doesn't make any sense. If you look at what the philosophy is, it's obvious that it doesn't just boil down to "use many small tools" or "use many dependencies", it's so different that it not even wrong [0].

In their Unix paper of 1974, Ritchie and Thompson quote the following design considerations:

- Make it easy to write, test, and run programs.

- Interactive use instead of batch processing.

- Economy and elegance of design due to size constraints ("salvation through suffering").

- Self-supporting system: all Unix software is maintained under Unix.

In what way does that correspond to "use dependencies" or "use AI tools"? This was then formalised later to

- Write programs that do one thing and do it well.

- Write programs to work together.

- Write programs to handle text streams, because that is a universal interface.

This has absolutely nothing in common with pulling in thousands of dependences or using hundreds of third party services.

Then there is the argument that "AI is just a higher level compiler". That is akin to me saying that "AI is just a higher level musical instrument" except it's not, because it functions completely differently to musical instruments and people operate them in a completely different way. The argument seems to be that since both of them produce music, in the same way both a compiler and LLM generate "code", they are equivalent. The overarching argument is that only outputs matter, except when they don't because the LLM produces flawed outputs, so really it's just that the outputs are equivalent in the abstract, if you ignore the concrete real-world reality. Using that same argument, Spotify is a musical instrument because it outputs music, and hey look, my guitar also outputs music!

0: https://en.wikipedia.org/wiki/Not_even_wrong

Comment by brookst 1 day ago

So it’s not a binary thing, there’s context and nuance?

Comment by arcfour 1 day ago

Embrace the suck.

Comment by steve1977 1 day ago

cue Jeopardy theme song

Who is Apple?

Comment by DASD 1 day ago

TempleOS, is that you?

Comment by hansmayer 1 day ago

[flagged]

Comment by junon 1 day ago

This was a Google oauth app and it was phished. So... No.

Comment by hansmayer 1 day ago

"The incident originated with a compromise of Context.ai, a third-party AI tool used by a Vercel employee"

So - yes, actually.

Comment by ivansenic 1 day ago

There are 3 main questions here:

1) Vercel rolled out sensitive env vars on February 1, 2024. Why weren't all existing env vars transitioned to the sensitive type? Why was there any assumption that a secret added as an env var before that date was still OK to be left "non-sensitive"?

2) How was the Google Workspace account actually compromised? If context.ai was the originating issue, what actually led to the takeover? Were too many access privileges given to the Google Workspace token context.ai had, or was there actually a workstation takeover here?

3) And finally why the hack a compromised Google Workspace account lead to someone having access to bunch of customer projects? Were is the connection? I don't get this..

Comment by tetrakai 23 hours ago

I can't comment about 1, but my read of 2 and 3 is that the chain was something like this:

1. One or more Vercel employees - likely engineers - grant OAuth access to context.ai. They presumably did this for office-suite style features, but the OAuth request included a GCP grant for some reason, maybe laziness on context.ai's part or planned future features? Either way, Google's OAuth flow has little differentiation between "office suite" scopes and "cloud platform" scopes, so this may not have been particularly obvious to those at Vercel

2. context.ai's AWS account was compromised (unspecified how), and the Google OAuth tokens they had for customer accounts, including those for at least one Vercel employee, were taken

3. Those OAuth token(s) were used to authenticate to the GCP APIs as those Vercel employees, then allowing access to Vercel's DBs, and therefore access to customer data and secrets
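The scope-bundling point in step 1 can be sketched as a small audit: flag any OAuth grant that mixes office-suite scopes with cloud-platform or admin scopes. The scope URLs below are real Google OAuth scopes, but the grant data and app name are hypothetical.

```python
# Sketch: flag OAuth grants that bundle cloud-platform/admin scopes with
# office-suite scopes. Scope URLs are real Google OAuth scopes; the
# example grant is hypothetical.

OFFICE_SCOPES = {
    "https://www.googleapis.com/auth/drive.readonly",
    "https://www.googleapis.com/auth/calendar.readonly",
    "https://www.googleapis.com/auth/gmail.readonly",
}
RISKY_SCOPES = {
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def risky_grants(grants: dict[str, set[str]]) -> list[str]:
    """Return client IDs whose granted scopes reach beyond office-suite use."""
    return [app for app, scopes in grants.items() if scopes & RISKY_SCOPES]

grants = {
    "office-ai-tool.apps.googleusercontent.com": {
        "https://www.googleapis.com/auth/drive.readonly",
        "https://www.googleapis.com/auth/cloud-platform",  # buried in the consent screen
    },
}
print(risky_grants(grants))  # → ['office-ai-tool.apps.googleusercontent.com']
```

The consent screen shows both kinds of scope in the same flat list, which is why a grant like this can slip past an otherwise careful engineer.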

Comment by ethbr1 23 hours ago

Taking this at face value: https://www.infostealers.com/article/breaking-vercel-breach-...

   Context.ai employee searches for Roblox exploits on web
   -> Context.ai support access breached by malware
   -> Vercel privileged employee account who uses Context.ai breached
   -> Vercel customer secrets breached
Tl;dr - insufficient endpoint protection and activity detection at Context.ai (big surprise!) + insufficient privileged account isolation at Vercel

Comment by pier25 20 hours ago

Regarding 1, from another comment it seems NeonDB env vars are not sensitive by default.

https://news.ycombinator.com/item?id=47832692

Comment by toddmorey 1 day ago

I've been part of a response team on a security incident and I really feel for them. However, this initial communication is terrible.

Something happened, we won't say what, but it was severe enough to notify law enforcement. What floors me is the only actionable advice is to "review environment variables". What should a customer even do with that advice? Make sure the variables are still there? How would you know if any of them were exposed or leaked?

The advice should be to IMMEDIATELY rotate all passwords, access tokens, and any sensitive information shared with Vercel. And then begin to audit access logs, customer data, etc, for unusual activity.

The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.

I know there is a huge fog of uncertainty in the early stages of an incident, but it spooks me how intentionally vague they seem to be here about what happened and who has been impacted.

Comment by birdsongs 1 day ago

Seriously. Why am I reading about this here and not via an email? I've been a paying customer for over a year now. My online news aggregator informs me before the actual company itself does?

Comment by shimman 1 day ago

Please remember that this is the same company that couldn't figure out how to authorize 3rd party middleware and had what should have been a company-ending critical vulnerability.

Oh and the owner likes to proudly remind people about his work on Google AMP, a product that has done major damage to the open web.

This is who they are: a bunch of incompetent engineers that play with pension funds + gulf money.

Comment by throwanem 1 hour ago

This industry's favored idiot children.

Comment by 1970-01-01 1 day ago

I just deleted my account. Their laid-back notice just is not worth it anymore. I will hold them accountable using my cash. You can get out with me. Let their apologies hit your spam filter. They need to be better prepared to react to the storm of insanity that comes with a breach or they lose my info (lose it twice, I guess..)

Comment by salomonk_mur 1 day ago

Says they emailed affected customers...

Comment by btown 1 day ago

Via the incident page:

> Environment variables marked as "sensitive" in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed. However, if any of your environment variables contain secrets (API keys, tokens, database credentials, signing keys) that were not marked as sensitive, those values should be treated as potentially exposed and rotated as a priority.

https://vercel.com/kb/bulletin/vercel-april-2026-security-in... as of 4:22p ET

Comment by aziaziazi 1 day ago

The “sensitive” toggle is off by default. I’m curious about the rationale, what's the benefit of this default for users and/or Vercel?

https://vercel.com/docs/environment-variables/sensitive-envi...

Comment by throw03172019 1 day ago

Simpler for vibe coders.

Comment by aziaziazi 22 hours ago

Ok but it's not the original intent: that default exists since at least 2020: https://web.archive.org/web/20201130022511/https://vercel.co...

Comment by loloquwowndueo 1 day ago

Sensitive environment variables are environment variables whose values are non-readable once created.

So they are harder to introspect and review once set.

It’s probably good practice to put non-secret-material in non-sensitive variables.

(Pure speculation, I’ve never used Vercel)

Comment by _heimdall 1 day ago

I have used Vercel though prefer other hosts.

There are cases where I want env variables to be considered non-secure and fine to be read later, I have one in a current project that defines the email address used as the From address for automated emails for example.

In my opinion the lack of security should be opt-in rather than opt-out though. Meaning it should be considered secure by default with an option to make it readable.

Comment by jtchang 1 day ago

How does the app read the variable if it can't be read after you input it? Or do they mean you can't view it after providing the variable value to the UI?

Comment by ctmnt 1 day ago

They mean the latter. Very unclear how that translates to meaningful security.

Comment by btown 18 hours ago

You could have a meaningful wall between administrative/deployment interface backends and the customer server backends - only the latter get access to services that have the private keys to decrypt the at-rest storage of secure variables, and this may be fully isolated to different control planes. So it becomes write-but-not-read.

But that's just a bare-minimum defense-in-depth. The fact that an attacker was able to access the insecure variables, and likely the names of secure variables, is still horrifying.
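That write-but-not-read split can be sketched with two planes holding different capabilities. This is purely illustrative, not Vercel's actual design: a real system would use KMS/asymmetric envelope encryption so the control plane never sees plaintext at all, whereas this dependency-free sketch uses repeating-key XOR as a stand-in.

```python
# Illustrative only, NOT Vercel's actual design: a control plane that can
# write secrets but never read them back, and a deploy plane holding the
# only key. Repeating-key XOR stands in for real envelope encryption.

import secrets
from itertools import cycle

class DeployPlane:
    """Build/runtime side: the only place the key exists."""
    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)

    def seal(self, value: str) -> bytes:
        return bytes(a ^ b for a, b in zip(value.encode(), cycle(self._key)))

    def open(self, blob: bytes) -> str:
        return bytes(a ^ b for a, b in zip(blob, cycle(self._key))).decode()

class ControlPlane:
    """Dashboard/API side: stores ciphertext, cannot decrypt it."""
    def __init__(self, deploy: DeployPlane) -> None:
        self._deploy = deploy
        self._store: dict[str, bytes] = {}

    def write_secret(self, name: str, value: str) -> None:
        self._store[name] = self._deploy.seal(value)   # write-only path

    def read_secret(self, name: str) -> str:
        raise PermissionError("sensitive values are not readable from the dashboard")

deploy = DeployPlane()
dashboard = ControlPlane(deploy)
dashboard.write_secret("DATABASE_URL", "postgres://user:hunter2@db/prod")
print(deploy.open(dashboard._store["DATABASE_URL"]))   # runtime can read it back
```

Under that model, a dashboard-side compromise yields only ciphertext plus variable names, which matches what the bulletin implies about "sensitive" variables being safe while non-sensitive ones were exposed.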

Comment by ctmnt 16 hours ago

I agree / hope that’s what they meant. It seems disingenuous, though, to describe it as unreadable, since obviously something has to read it to bake it into the deploy. And given their apparent lack of effective security boundaries in one area, why should we assume that they’ve got the deploy system adequately locked down?

It’s not like I had a ton of trust in them before, but now they’ve lost almost all credibility.

Comment by gherkinnn 1 day ago

Last year Vercel bungled the security response to a vulnerability in Next's middleware. This is nothing new.

https://news.ycombinator.com/item?id=43448723

https://xcancel.com/javasquip/status/1903480443158298994

Comment by tcp_handshaker 1 day ago

Security is hard and there are only three vendors I trust: AWS, Google and IBM (yes, IBM). Anything else is just asking for trouble.

Comment by esseph 1 day ago

Having worked both public and private, I can agree with this.

Google in particular has been staggeringly good, and don't sleep on IBM when they Actually Care.

Comment by dd_xplore 1 day ago

Oracle too

Comment by gustavus 1 day ago

Oracle? Oracle?

The Oracle that published an announcement that said "we didn't get hacked" when the hackers had private customer info?

The Oracle that does not allow you to do any security testing on their software unless you use one of their approved vendors?

The Oracle that one of my customers uses where they have to turn off the HR portal for 2 weeks before annual performance evaluations because there is no way to prevent people from seeing things?

The only reason Oracle isn't having nightmarish security problems published every other week is because they threaten to sue anyone that does find an issue.

Oracle is a joke in every conceivable way and I despise them on a personal level.

Comment by warmedcookie 1 day ago

I love a good cathartic rant

Comment by 0xmattf 1 day ago

> The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.

This and because it's so convenient to click some buttons and have your application running. I've stopped being lazy, though. Moved everything from Render to linode. I was paying render $50+/month. Now I'm paying $3-5.

I would never use one of those hosting providers again.

Comment by nightski 1 day ago

Looking at linode, those prices get you an instance with 1Gb of ram and a mediocre CPU. So you are running all of your applications on that?

Comment by 0xmattf 1 day ago

Personal projects/MVPs/small projects? Absolutely. For what I'm running, there's no reason to need anything beyond that.

The point is, I used to just throw everything up on a PaaS. Heroku/Render, etc. and pay way more than I needed to, even if I had 0 users, lol.

Comment by lelanthran 1 day ago

> Looking at linode, those prices get you an instance with 1Gb of ram and a mediocre CPU. So you are running all of your applications on that?

I ran a LoB webapp for multiple companies on a similar setup. Turns out 1GB of RAM is insufficient to run even the most trivial Java webapps, like Jenkins, but is more than sufficient for even non-trivial things using Go + PostgreSQL.

Your stack may be slow, not the machine.

Comment by Orygin 22 hours ago

Most of my services run with 1vCPU and 512Mb of ram. You don't need huge specs for most normal applications.

Comment by adhamsalama 1 day ago

For $3.5, Hetzner gives 2 vCPU, 4GB RAM, 40 GB SSD, and 10 TB of bandwidth.

Comment by eatery1234 1 day ago

Pretty oversold iirc, but then again, that's the same for Linode

Comment by normie3000 1 day ago

Do you mean these are shared instances, and the stated resources are not actually available?

Comment by skeeter2020 1 day ago

how much work should the GP do to migrate if Linode is good enough, to potentially save up to $1.50/month (or spend 50 cents more)?

Comment by cleaning 1 day ago

If you're only paying $3-5 on Linode then your level of usage would probably be comfortably at $0 on Vercel.

Comment by arch-choot 1 day ago

Repeating a prior comment I've made about this[0]: I run a rust webserver on a €4 VPS from hetzner that serves 300M (million) requests a day.

From what I can figure out, Vercel charges "$0.60 per million invocations" [1], which would cost me $180 per day.

[0] https://news.ycombinator.com/item?id=47611454 [1] https://vercel.com/docs/functions/usage-and-pricing#invocati...
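The comment's arithmetic checks out; as a quick back-of-envelope (prices as quoted in the comment, not verified against current Vercel or Hetzner pricing):

```python
# Back-of-envelope check of the comment's numbers, using the prices as
# quoted there (not verified against current pricing pages).
requests_per_day = 300_000_000
price_per_million = 0.60                 # USD per million invocations, as quoted
vercel_daily = requests_per_day / 1_000_000 * price_per_million
hetzner_monthly_eur = 4.0                # VPS price as quoted
print(f"Vercel: ${vercel_daily:.0f}/day vs Hetzner: €{hetzner_monthly_eur}/month")
# → Vercel: $180/day vs Hetzner: €4.0/month
```

That is roughly a 1000x monthly cost difference for this (admittedly extreme, high-request-volume) workload.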

Comment by mmastrac 21 hours ago

I run a Rust webserver on a literal Pi3 in my basement and I think I managed to bench it up >1000 rps for standard loads. And that includes a bunch of tanvity querying as well.

I suspect I could do 3000+ rps with some tuning and a more modern CPU or hetzner VPS, but there's some fun cachet from running on an old Pi while there's still headroom.

Comment by 0xmattf 1 day ago

It could be $0 on Render too, but then there's going to be a 3 minute load time for a landing page to become visible, lol. So if you don't want your server to sleep, you're going to have to pay $20/month.

Does Vercel do the same?

Comment by somewhatgoated 1 day ago

No, I run several small websites on Vercel for free for years, always served static pages very quickly

Comment by 0xmattf 22 hours ago

Static pages, sure. But what do you do if you want a contact form or something? Yeah, you can use services like formspree, but then you may end up paying $20/month for that alone. Perhaps I'm just ignorant.

Comment by anurag 1 day ago

Render offers free static sites that are served via a CDN and load instantly: https://render.com/docs/static-sites

Comment by 0xmattf 22 hours ago

When I said landing page, I had contact forms and more in mind, not documentation sites.

But that is news to me. Interesting. Although for static sites, I always use Netlify or even GitHub pages.

Comment by cleaning 1 day ago

No.

Comment by 00deadbeef 1 day ago

What if they have an actual back-end with long-running processes and scheduled tasks?

Comment by esseph 1 day ago

Makes sense considering the quality of Vercel's security response and customer communication.

Comment by p_stuart82 1 day ago

exactly people paid the premium so somebody else's OAuth screwup wouldn't become their Sunday. and here we are.

Comment by rybosome 1 day ago

Completely agreed. At minimum they should be advising secret rotation.

The only possibility for that not being a reasonable starting point is if they think the malicious actors still have access and will just exfiltrate rotated secrets as well. Otherwise this is deflection in an attempt to salvage credibility.

Comment by lo1tuma 1 day ago

Yeah, given their insane pricing I think the expectations can be higher. I know it's impossible to provide a 100% secure system, but if something like this happens, the communication should at least be better. Don't wait until you have talked to the lawyers... inform your customers first, ideally without this corporate BS speak. Most Vercel customers are probably developers, so they understand that incidents like this can happen; just be transparent about it.

Comment by elmo2you 1 day ago

Welcome to the show.

While a different kind of incident (in hindsight), the other week Webflow had a serious operational incident.

Sites across the globe went down (no clue if all of them or just a part). They posted plenty of messages, I think for about 12 hours, but mostly with the same content: "working on fixing this with an upstream provider" (paraphrased). No meaningful info about the actual problem or impact.

Only the next day did somebody write about what happened. Essentially, a database ran out of storage space. How that became a single point of failure for at least plenty of customers: no clue. Sounds like bad architecture to me, though. But what personally rubbed me the wrong way most of all was their insistence that their "dashboard" had not indicated anything wrong with their database deployment, as it allegedly had misrepresented the used/allocated storage. I don't know who this upstream service provider of Webflow is, but I know plenty about server maintenance.

Either that upstream provider didn't expose a crucial metric (on-disk storage use) on their "dashboard", or Webflow was throwing the provider under the bus for what may have been their own ignorant/incompetent database server management. I guess it all depends on to what extent this database was a managed service versus something Webflow had more direct control over. Either way, with any clue about the provider or service missing from their post-mortem, customers can only guess who was to blame for the outage.

I have a feeling that we probably aren't the only customer they lost over this. Which in our case would probably not have happened, if they had communicated things in a different way. For context: I personally would never need nor recommend something like Webflow, but I do understand why it might be the right fit for people in a different position. That is, as long as it doesn't break down like it did. I still can't quite wrap my head around that apparent single point of failure for a company the size of Webflow though.

/anecdote

Comment by _jab 1 day ago

> Vercel did not specify which of its systems were compromised

I’m no security engineer, but this is flatly unacceptable, right? This feels like Vercel is covering its own ass in favor of helping its customers understand the impact of this incident.

Comment by hyperadvanced 1 day ago

I dunno. If I work on GitHub and I say “obscure subsystem X” has been breached, it’s no more useful than the level of specificity that Vercel has already given (“some customer environments have been compromised”)

Comment by nettlin 1 day ago

They just added more details:

> Indicators of compromise (IOCs)

> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.

> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.

> OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

https://vercel.com/kb/bulletin/vercel-april-2026-security-in...
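For admins checking their own org, the published IOC can be matched against an exported list of OAuth token grants. A minimal sketch, assuming you have such an export as CSV; the column layout here ("user", "client_id") is hypothetical, so adapt it to whatever your admin tooling actually emits.

```python
# Sketch: scan an exported OAuth-token report for the client ID published
# in Vercel's bulletin. The CSV layout ("user", "client_id") is a
# hypothetical example, not any specific admin console's export format.

import csv, io

IOC_CLIENT_ID = "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"

def users_with_ioc(report) -> list[str]:
    """Return users who granted access to the flagged OAuth app."""
    return [row["user"] for row in csv.DictReader(report)
            if row["client_id"] == IOC_CLIENT_ID]

# Hypothetical export:
sample = io.StringIO(
    "user,client_id\n"
    f"alice@example.com,{IOC_CLIENT_ID}\n"
    "bob@example.com,999999-other.apps.googleusercontent.com\n"
)
print(users_with_ioc(sample))  # → ['alice@example.com']
```

Any matched user's grant should be revoked and their sessions and downstream credentials treated as potentially compromised.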

Comment by dev360 1 day ago

I wonder which tool that is

Comment by jtreminio 1 day ago

I'm on a macbook pro, Google Chrome 147.0.7727.56.

Clicking the Vercel logo at the top left of the page hard crashes my Chrome app. Like, immediate crash.

What an interesting bug.

Comment by embedding-shape 1 day ago

Huh, curiously; I'm on Arch Linux, crash happens in Google Chrome (147.0.7727.101) for me too, but not in Firefox (149.0.2) nor even in Chromium (147.0.7727.101).

I find it fun that we're all reading a story about how Vercel was likely compromised, someone managed to reproduce a crash on their webpage, and now we're all giving it a try. Surely this could never backfire :)

Comment by nozzlegear 1 day ago

Works in Safari too. Sounds like a Google Chrome thing.

Comment by LtWorf 1 day ago

A crash can mean corrupt file exploiting bug and executing code…

Comment by sbrother 1 day ago

Following since I just reproduced the crash on my own system (Chrome on Ubuntu)

Comment by LtWorf 1 day ago

I hope you run your browser in a sandbox because you might be compromised now.

Comment by bel8 1 day ago

Sadly I couldn't make Chrome crash here. Would be fun.

Chrome Version 147.0.7727.101 (Official Build) (64-bit). Windows 11 Pro.

Video: https://imgur.com/a/pq6P4si

I use uBlock Origin Lite. Maybe it blocks some crash causing script? edit: still no crash when I disabled UBO.

Comment by eclipticplane 1 day ago

Same thing here, 147.0.7727.101, M3 Macbook Air. Immediate crash of all open profile windows, so not even a tab-level crash.

Comment by devld 1 day ago

Reminds me of a circa-2021 Chromium bug where opening the dropdown menu on GitHub would crash the entire system on Linux. At some point, it got fixed.

Comment by Malipeddi 1 day ago

Same with Chrome on Windows 11. I opened the vercel home page using the url once after which it stopped crashing when clicking on the logo.

Comment by farnulfo 1 day ago

Same hard crash on Chrome Windows 11

Comment by burnte 1 day ago

I'm running 147.0.7727.57 and this doesn't happen. Macbook Air M5. VERY interesting.

Comment by plexicle 1 day ago

MBP - M4 Max - Chrome 146.0.7680.178.

No crash.

Now I don't want to click that "Finish update" button.

Comment by 152334H 1 day ago

if it does so happen that the crash originates from a browser exploit, you should expect to be more at risk due to the absence of a crash on an older version, not less

Comment by itaintmagic 1 day ago

Do you have a chrome://crashes/ entry ?

Comment by rapfaria 1 day ago

it did add an entry - windows 11, chrome

Comment by eddythompson80 1 day ago

Am I reading this[1] correctly that they basically had that "compromised OAuth token" for a month now and it was only detected now when the attackers posted about it in a forum?

[1] https://context.ai/security-update

Comment by newdee 1 day ago

> Vercel’s internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel’s enterprise Google Workspace.

This was an interesting tidbit too. If true, this means that Vercel’s IT/Infosec maybe didn’t bother enabling the allowlist and request/review features for OAuth apps in their Google Workspace.

On top of that, they almost certainly didn’t enable the scope limits for unchecked OAuth apps (e.g limiting it to sign-on/basic profile scopes).

Comment by Maxious 1 day ago

And that they engaged Crowdstrike for incident response... who missed OAuth tokens in the clear?

Comment by eddythompson80 1 day ago

lol, yeah that Crowdstrike part was a funny CYA name drop

Comment by pier25 22 hours ago

A month? If true this is insane.

Comment by MattIPv4 1 day ago

Related: https://news.ycombinator.com/item?id=47824426

https://x.com/theo/status/2045862972342313374

> I have reason to believe this is credible.

https://x.com/theo/status/2045870216555499636

> Env vars marked as sensitive are safe. Ones NOT marked as sensitive should be rolled out of precaution

https://x.com/theo/status/2045871215705747965

> Everything I know about this hack suggests it could happen to any host

https://x.com/DiffeKey/status/2045813085408051670

> Vercel has reportedly been breached by ShinyHunters.

Comment by otterley 1 day ago

Who is this “theo” person and why are multiple people quoting him? He seems to have little to say that’s substantive at this point.

Comment by gordonhart 1 day ago

He’s a tech influencer, probably getting quoted here because he has the biggest reach of people covering this so far.

Comment by Aurornis 1 day ago

He’s a streamer who talks about tech. Previously had a sponsorship relationship with Vercel so is theoretically more well connected than average on the topic. He’s also very divisive because he does a lot of ragebait, grievance reporting, and contrarian takes but famously has blind spots for a few companies and technologies that he’s favored in past videos or been sponsored by. I have friends who watch a lot of his videos but I’ve never been able to get into it.

Comment by MikeNotThePope 1 day ago

Theo Browne is a reasonably well known YouTuber & YC founder.

https://t3.gg/

Comment by reactordev 1 day ago

YT tech vlogger

Comment by nothinkjustai 1 day ago

He is a paid Vercel shill (literally, he does sponsored content for them on his YouTube channel)

Comment by djeastm 1 day ago

Not in a few years.

Comment by TiredOfLife 1 day ago

Comment by tom1337 1 day ago

> Ones NOT marked as sensitive should be rolled out of precaution

if it's not marked as sensitive (because it is not sensitive) there is no reason to roll them. If you must roll a non-sensitive env var, it should've been sensitive in the first place, no?

Comment by jackconsidine 1 day ago

There's a difference between sensitive, private and public. If public (i.e. NEXT_PUBLIC_) then yeah likely not a reason to roll. Private keys that aren't explicitly sensitive probably are still sensitive. It doesn't seem to be the default to have things "sensitive" and I can't tell if that's a new classification or has always been there.

I can imagine reasons why an env variable would be sensitive but need to be re-read at some point. But overwhelmingly it makes sense for the default to be write-once, never read again (i.e. Fly env values, GCP Secret Manager, etc.)
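The three-tier split described above can be sketched as a classifier. The `NEXT_PUBLIC_` prefix is real Next.js behavior (such variables are inlined into the client bundle); the name-based secret heuristic is purely illustrative and will miss things, which is exactly why default-sensitive is the safer design.

```python
# Sketch of the public / private / sensitive split. NEXT_PUBLIC_ is real
# Next.js convention; the name heuristic below is illustrative only and
# under-detects, which is the argument for sensitive-by-default.

SECRET_HINTS = ("KEY", "TOKEN", "SECRET", "PASSWORD", "DSN", "DATABASE_URL")

def classify(name: str) -> str:
    if name.startswith("NEXT_PUBLIC_"):
        return "public"        # shipped in client JS anyway
    if any(hint in name.upper() for hint in SECRET_HINTS):
        return "sensitive"     # rotate if it was stored readable
    return "private"           # not shipped to clients, but was readable

for var in ("NEXT_PUBLIC_API_BASE", "STRIPE_SECRET_KEY", "EMAIL_FROM"):
    print(var, "->", classify(var))
```

Under this incident's advice, everything in the "sensitive" bucket that had been stored as a plain (readable) env var should be treated as exposed and rotated.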

Comment by swingboy 1 day ago

Is this one of those situations where _a lot_ of customers are affected and the “subset” are just the bigger ones they can’t afford to lose?

Comment by toddmorey 1 day ago

Conjecture, but the wording "limited subset" rarely turns out to be good news. Usually a provider will say "less than 1% of our users" or some specific number when they can to ease concerns. My guess is they don't have the visibility or they don't like the number.

I feel for the team; security incidents suck. I know they are working hard, I hope they start to communicate more openly and transparently.

Comment by loloquwowndueo 1 day ago

“Less than 1% of our users” means 10k affected users if you have 1 million users. 10k victims is a lot! Imagine “air travel is safe, only a subset of 1% of travellers die”

Comment by OsrsNeedsf2P 1 day ago

The lack of details makes me wonder how large this "subset" of users really is

Comment by gib444 1 day ago

I remember working support and being told "always say 'subset' unless you absolutely know it's exactly 100% of customers" lol

Comment by jofzar 1 day ago

Same, there was always very specific wording we had to use unless legal approved an exact number or scope.

Comment by bossyTeacher 1 day ago

The lack of details itself is telling enough. Whatever comes out will be no doubt PR sanitised and some bigger clumps of truth won't make it through the PR process.

Comment by nike-17 1 day ago

Incidents like this are a good reminder of how concentrated our single points of failure have become in the modern web ecosystem. I appreciate the transparency in their disclosure so far, but it definitely makes you re-evaluate the risk profile of leaning entirely on fully managed PaaS solutions.

Comment by saadn92 1 day ago

ha, if anyone is interested, I wrote about how I migrated away from Vercel. good timing: https://saadnaveed.com/writing/vercel-to-hetzner/

Comment by waldopat 14 hours ago

While everyone is revoking OAuth apps, rotating API keys, and deleting Vercel accounts, this is a good reminder that the scary part is how short the path was from OAuth token to employee account to internal systems to customer secrets.

Many folks here likely have some stack that looks like: Google Workspace, GitHub, Vercel/Railway/Render/etc. where env vars or secrets are hosted. These are all loosely coupled but transitively trusted.

So compromising any one of them becomes a threat vector. In other words, if System A trusts System B, and System B trusts System C, then System A trusts System C. This is also why OpenClaw is frightening from a security perspective.

Also, this is a good reminder to run audits. Run `npm audit` on a typical Next.js project and you’ll probably see DoS vulnerabilities, ReDoS issues, Prototype pollution, code injection paths, handlebars etc. I'm sure you'll find something unexpected if you don't have routine code hygiene checks.
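The transitive-trust point above can be sketched as graph reachability: the blast radius of a compromise is everything that directly or indirectly trusts the compromised system. The stack below is a hypothetical example, not a claim about any specific vendor's integration.

```python
# Sketch: if A trusts B and B trusts C, compromising C reaches A.
# Modeled as reverse reachability over "X trusts Y" edges; the stack
# below is hypothetical.

from collections import deque

def blast_radius(trusts: dict[str, set[str]], compromised: str) -> set[str]:
    """Return every system that (transitively) trusts the compromised one."""
    reached, queue = set(), deque([compromised])
    while queue:
        node = queue.popleft()
        for system, deps in trusts.items():
            if node in deps and system not in reached:
                reached.add(system)
                queue.append(system)
    return reached

trusts = {
    "google_workspace": {"ai_tool_oauth"},
    "vercel_account": {"google_workspace", "github"},
    "customer_secrets": {"vercel_account"},
}
print(sorted(blast_radius(trusts, "ai_tool_oauth")))
# → ['customer_secrets', 'google_workspace', 'vercel_account']
```

Note that compromising the leaf OAuth grant reaches the secrets three hops away, while nothing in the chain ever granted the AI tool access to secrets directly.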

Comment by jsomau 1 day ago

Neon, the Vercel-recommended database storage integration, doesn't use the sensitive option for the environment variables it manages, including the database connection string/password. These need to be rotated, then deleted and manually re-created as sensitive.

Comment by tomaskafka 1 day ago

Vercel, a deployment shell script turned billion dollar company, turned global liability. A story older than time.

Comment by Izmaki 1 day ago

A "limited subset of customers" could be 99% of them and the phrase would still be technically true.

Comment by eviks 1 day ago

Not when limited to human communication

Comment by BrianneLee011 20 hours ago

The real story isn't Vercel. It's that a Context.ai employee got infostealer'd in February and four months later that single compromise propagated through an 'Allow All' Google Workspace OAuth grant into Vercel's env vars. This is less a Vercel incident and more the chronic OAuth-supply-chain problem finally surfacing somewhere visible.

Comment by pier25 20 hours ago

How do you go from a Google Workspace to production env vars without Vercel doing something wrong?

Comment by ctmnt 19 hours ago

Where did you see that a Context employee had credentials stolen in February? I haven't run into that particular data point.

Comment by ctmnt 19 hours ago

Not just into Vercel's env vars, but into Vercel's customer's env vars.

Comment by brazukadev 12 hours ago

The real story is Vercel letting users with access to their infrastructure install random apps not vetted by any security system.

Comment by kyle787 1 day ago

Context AI published a statement https://context.ai/security-update

> Last month, we identified and stopped a security incident involving unauthorized access to our AWS environment.

> Today, based on information provided by Vercel and some additional internal investigation, we learned that, during the incident last month, the unauthorized actor also likely compromised OAuth tokens for some of our consumer users.

Comment by ctmnt 21 hours ago

An email from Vercel came to my company at 10:47am UTC. It contained little information, and said:

> At this time, we do not have reason to believe that your Vercel credentials or personal data have been compromised.

Which is not very reassuring without actual information, since presumably they would have said the same thing on Saturday, if asked.

Comment by jtokoph 1 day ago

This announcement in its current form is quite useless and not actionable. At least people won't be able to say "why didn't you say something sooner?" They said _something_.

Comment by rrmdp 1 day ago

Use a VPS. Nowadays, with the help of AI, it's a lot easier to set everything up; you don't need Vercel at all. And of course it's way cheaper.

Comment by landl0rd 1 day ago

Wow, maybe Cloudflare can help them secure their systems? I hear they have a pretty good WAF.

Comment by adithyasrin 1 day ago

The original link posted in the post has almost the same content: https://vercel.com/kb/bulletin/vercel-april-2026-security-in...

Comment by arabsson 1 day ago

So, the Vercel post says a number of customers were impacted, but not everyone, and that they will contact the people who were impacted. I wasn't contacted, so does that mean I'm safe?

Comment by zuzululu 1 day ago

What is the rationale for using vercel ? I'm getting a lot of value out of cloudflare with the $5/month plan lately but my bare metal box with triple digit ram has seen zero downtime since 2015.

Comment by deaux 1 day ago

They put a massive amount of VC cash into convincing people that Next.js was "the modern way" to create a website. Then they got lucky with the timing of LLMs becoming popular while they were the hot thing, leading LLMs to default to it when creating new websites. To picture that amount of VC cash - they're at Series F, and a huge chunk of that went towards marketing.

Both have been changing as people realize it's rarely the right tool for the job, and as LLMs also become more intelligent and better at suggesting other, better options depending on what is asked for (especially Claude Opus).

Comment by apsurd 1 day ago

I really want this to be true. nextjs is a nightmare. I'm eternally disgruntled.

nextjs is also powerful due to AI. But the value is a robust interactive front-end, easily iterated, with maybe SSR backing; nothing specific to nextjs (it's just routing semantics + React).

So much complexity has gone into SSR. I hate a 5MB client runtime just to read text as much as anyone, but not if the tradeoff is an isomorphic env with magic first-line file incantations.

Comment by consumer451 1 day ago

I have found SvelteKit really nice for SSR, and it avoids dealing with Vercel entirely.

Recent Claude models do well with it, especially after adding the official skill.

I have only recently started using it, so would love to hear about anyone else's experience.

Comment by autoexec 1 day ago

> To picture that amount of VC cash - they're at Series F, and a huge chunk of that went towards marketing.

I guess they should have put some of that marketing money into hiring someone to manage the security of their systems. It's pretty telling that they had to hire an "incident response provider" just to figure out what happened and clean up after the hack. If you treat security like something you don't have to worry about until after you've been hacked you're probably going to get hacked.

Comment by habinero 1 day ago

> they had to hire an "incident response provider" just to figure out what happened and clean up after the hack

Plenty to criticize them for, but that's totally standard and not something to ding them for. Probably something their cyber insurance has in their contract.

Forensics is its own set of skills, different from appsec and general blue team duties. You really want to make sure no backdoors got left in.

Comment by gitgud 1 day ago

I don’t think they “got lucky”. nextjs is an old project now, and for a long time it was the simplest framework to run a React website.

This is why most open source landing pages used nextjs, and if most FOSS landing pages use it, then most LLMs have been trained on it, which means LLMs are more familiar with that framework and choose it

There must be a term for this kind of LLM driven adoption flywheel…

Comment by deaux 1 day ago

They "got lucky" with the _timing_, as I said. Most popular web frameworks have changed every ~3 years, they got lucky that they were at their peak exactly as LLMs became popular.

Comment by codeulike 1 day ago

Slopwagon?

Comment by pier25 1 day ago

> They put a massive amount of VC cash into convincing people that Next.js was "the modern way" to create a website

My impression is Next started becoming popular mostly as a reaction against create-react-app.

Comment by mrits 1 day ago

So glad I decided to just stick with django/htmx on my project a few years ago. I invested a little time into nextjs and came to the conclusion that this can't be the way.

Comment by huflungdung 1 day ago

[dead]

Comment by senko 1 day ago

You use a free template that's done in Next.js and uses its Image component, so you need a server.

Everything runs fine locally until you try to deploy it, and bam, you need a 4GB RAM machine to run the thing.

So you host it on Vercel for free cause it's easy!

Then you want to check for more than 30 seconds of analytics, and it's pay time.

Comment by systemvoltage 1 day ago

I am not following the logic. If you’re a hobbyist, sure.

But the argument is if you’re using Vercel for production, you’re paying 5-10x what you’d pay for a VM with 4GB.

So then what’s the rationale? You can’t be a hobbyist but also “it’s pay time” for production?

Comment by prinny_ 1 day ago

Vercel promises to engineer the pain away when it comes to deployment. The thing however is that Vercel introduced that pain in the first place by writing sub-par documentation and splitting many of Next.js's functions into small parts with different costs.

Comment by rwyinuse 1 day ago

Perhaps the rationale is laziness. Maintaining VM probably takes some more effort and competence than deploying to Vercel. Some people are willing to pay to minimize effort and the need to learn anything.

Comment by ajdegol 1 day ago

Vercel auto creates deployments on pushes to branches. That was a super useful feature in beta testing web stuff.

Comment by zoul 1 day ago

Very nice developer experience. A lot of batteries included, like CDN, incremental page regeneration, image pipeline or observability. Not having to maintain a server.

I’m still planning to move elsewhere though, the vendor lock-in is not worth it and I’d like to keep our infra in the EU.

Comment by tucnak 1 day ago

All of this is available in Cloudflare $5 plan?

Comment by fontain 1 day ago

Cloudflare’s developer experience doesn’t come close, it is terrible. Cloudflare are working on it, and hopefully they’ll be a real competitor to Vercel on ease of use someday, but right now, it is painful when compared to Vercel. Cloudflare is infrastructure first, Vercel is developer experience first.

Comment by Onavo 1 day ago

Yes, CloudFlare's full of bugs and sharp edges. Not to mention the atrocious 3MB worker size limit (especially egregious in the age of ML models). They don't mention this up front in the docs, and the moment you try to deploy anything non-trivial it's oops, time to completely re-architect your app.

Comment by kentonv 1 day ago

> Not to mention the atrocious 3MB worker size limit

That's for the free plan.

Limits are documented here:

https://developers.cloudflare.com/workers/platform/limits/#w...

Comment by Onavo 1 day ago

Well it's so far from Vercel that it's not even funny any more.

Good work on workers though, maybe the next generation of sandstorm will be built on CloudFlare in a decade or so after all the bugs have been hammered out.

Comment by dandaka 1 day ago

Every three months I try to deploy to Cloudflare from a monorepo, and I haven't had success yet. Vercel, meanwhile, works out of the box every time. Maybe I could dig deeper and try to understand how it works, but I'm too lazy to do that.

Comment by rs_rs_rs_rs_rs 1 day ago

In my experience it severely lacks on developer experience, compared to Vercel.

Comment by gherkinnn 1 day ago

I haven't used Cloudflare and am the first to shit on Vercel. But I have to say, some aspects of their hosting are nice. In many ways it really is just a terminal command and up it goes with good tooling around it. For example, the PR previews take zero setup and just work. Managing your projects is easy, it's all nicely designed, it integrates well with Next and some other frontend-heavy systems and so on.

Comment by apsurd 1 day ago

Render is really good at this too. I specifically chose "not Vercel" when looking into hosting. Though I haven't tried both to compare: render has been a pleasure, just works, and auto deploys per branch also available.

Comment by kandros 1 day ago

For many people Vercel is Easy (not simple)

Knowing how to operate a basic server is perceived as hard and dangerous by many, especially the generation that didn’t have a chance to play with Linux for fun when growing up

Comment by drewnick 1 day ago

Great point on the playing with Linux growing up, it's second nature to me now.

I am always feeling like I'm doing something wrong running bare metal based on modern advice, but it's low latency, simple, and reliable.

Probably because I've been using Linux since Slackware in the 90s. And now with the CLI-based coding tools, I have a co-sysadmin to help me keep things tidy and secure. It's great and I highly recommend more people try it.

Comment by kingleopold 1 day ago

it's free for newbies and everyone else; ofc it's a trap, but the freemium model gets people. AWS can easily cost a few thousand after 2-3 mistaken clicks. Vercel lets you start free, then if you grow they bill you 10x-100x AWS

Comment by arealaccount 1 day ago

I dunno, I put a lot of traffic through Vercel, maybe 100k visitors per day, and it was under a few hundred a month. I think a couple EC2 instances behind a load balancer would cost similar or more. I was under the impression that it's still a VC subsidized service.

They regularly try to get me to join an enterprise plan but no service cutoff threats yet.

Comment by hephaes7us 1 day ago

You could probably serve 100k visitors from a $5 VPS, depending on the application.

That said, I understand people are paying for basically not having to think about infrastructure, and agree that that's theoretically worth money, if they could do it well.

Comment by dev360 1 day ago

For a lot of folks, I think its ease of deployment when using Next.js. I switched to astro, also doing a lot of cloudflare at the moment. Before that, I was doing OpenNext with sst.dev on AWS but it started feeling annoying.

Comment by hdkfov 1 day ago

Out of curiosity what are you using cloudflare for that it costs $5 and who do you use for the baremetal box?

Comment by victorbjorklund 1 day ago

If you are using Next.js it is easier, because Vercel has done a lot of things to make it a pain to host outside of Vercel.

Comment by glerk 1 day ago

NextJs requires what exactly? Running a nodejs server? I mean yes, it takes a bit more time to set up than one-command deploy to Vercel. But in 2026, even this setup overhead can be cut down to minutes by telling your favorite LLM agent to SSH into your server and set it up for you.

Comment by Bridged7756 1 day ago

Do you have any examples? I'm not that acquainted with the pains of deploying Next apps, though I've heard that argument used.

Comment by Bridged7756 1 day ago

I suppose their market is one click deployments. Maybe for non technical people or people not willing to deal with infra.

Comment by glerk 1 day ago

There really isn't any if you are running a serious product.

They have a free tier plan for non-commercial usage and a very very good UX for just deploying your website.

Many companies start using Vercel for the convenience and, as they grow, they continue paying for it because migrating to a cheaper provider is inconvenient.

Comment by arkits 1 day ago

Developer experience. Ephemeral deploys. Decent observability. Decent CI options. Generous free tier.

Comment by sidcool 1 day ago

Can one host a Next js app on cloudflare?

Comment by phpnode 1 day ago

Comment by dennisy 1 day ago

Ohh this is very cool!

Comment by kstrauser 1 day ago

Maybe. CF’s runtime isn’t perfectly identical to Vercel’s. For instance, CF doesn’t support eval(), which is something you shouldn’t be doing often anyway, but it did mean that we can’t use the NPM protobufs package that’s a dependency for some Google SDKs.

Comment by locallost 1 day ago

I started using it a few years ago when I moved to my current company, and I have to say I've learned to like it quite a bit. Moving to Cloudflare is an option, but currently it just works, so we can't be bothered. Costs are not nothing, but we've had basically no issues with it until now, and it's not so expensive that it raises eyebrows, the biggest line item being that we have 3 seats.

The setup is quick and, again, it just works. We are a very small team, and the fact that we don't have to deal with it on a daily/weekly basis is valuable. Obviously this current situation is a problem, but I am not sure which platform is free of issues like these. People act like it can't happen to them, until it does.

Comment by dboreham 1 day ago

It takes a while to realize you're being gaslit.

Comment by gjsman-1000 1 day ago

0.82% of homes are burglarized every year.

Meaning since 2015, you’ve got an 8.2% chance of having someone walk out with that box. Hopefully there’s nothing precious on it.

Comment by jimberlage 1 day ago

Assuming that all homes are at equal risk of being burglarized. In practice the neighborhoods I’ve seen are either at much higher risk or much lower risk.

Comment by 0123456789ABCDE 1 day ago

and burglarized homes have higher prob. of being burglarized again, and probabilities don't accumulate but compound, and is the server even in a house?

Comment by FreePalestine1 1 day ago

They didn't imply the box was at their home and that probability is off

Comment by burnte 1 day ago

If they have good backups, no worries. Mine is in a locked colo cage in a datacenter, so I'm not worried either.

Comment by zuzululu 1 day ago

I definitely do not keep it at home, but the thought has crossed my mind for smaller, less demanding boxes.

Comment by loloquwowndueo 1 day ago

That’s not how probabilities work.

Comment by operatingthetan 1 day ago

Imagining a thief walking in and demanding the home's RAM gave me a chuckle though.

Thieves probably look for small stuff like jewelry, cash, laptops, not some big old server.

Comment by zbentley 1 day ago

Or burglars.

Comment by 0123456789ABCDE 1 day ago

yes, this is indeed how probability works. thanks.

Comment by operatingthetan 1 day ago

>you’ve got an 8.2% chance of having someone walk out with that box.

The chance of being burglarized is not the same as the chance that when you are hit, they decide to take your webserver. Think it through.

Comment by strimoza 23 hours ago

This is why I moved my video streaming app (strimoza.com) to signed URLs with short expiry times for every single request. Extra complexity but at least if something leaks, the damage is contained. Curious how many people actually audit their CDN token policies before an incident forces them to.

Comment by adithyasrin 1 day ago

We run on Vercel and I wonder if / how long before we're alerted about a leak. Quick look online suggests environment variables marked as sensitive are ok, but to which extent I wonder.

Comment by usr1106 1 day ago

Not very familiar with Vercel. Discovered them only recently when a business my brother is a customer of fell victim to a phishing attack. The "Login to Microsoft" page hosted on Vercel was still online many days later when I heard of the case.

Comment by philip1209 1 day ago

We proactively rotated keys. Even if you haven’t received an official email, expect customers to inquire about this tomorrow morning.

Comment by leetrout 1 day ago

Porter also had a breach recently. I assume it was as tightly scoped as they say, given that they didn't publicize it.

Comment by gneray 1 day ago

Comment by rubiquity 1 day ago

He doesn't work at Vercel but he is the type to never pass up any opportunity to chase clout.

Comment by dankwizard 1 day ago

He is affiliated with Vercel though

Comment by threecheese 1 day ago

Almost like that’s his job.

Hey, I’m with you - I think social media needs to die specifically for this reason. I’m reminded of the term “snake oil” - it’s like the dawn of newspapers again.

Comment by TiredOfLife 1 day ago

Media as a whole needs to die

Comment by hoppyhoppy2 1 day ago

Including books and the internet?

Comment by oxag3n 1 day ago

> incident response provider

So they use a third party for incident management? They are de-risking by spending more, which is a lose-lose for the customers.

Comment by staticassertion 1 day ago

It's very typical to have a retainer / insurance to bring in "emergency" incident responders beyond your existing team. Not saying that's the case here but it wouldn't be surprising.

Comment by eieiyo 1 day ago

Comment by ofabioroma 1 day ago

Time to ipo

Comment by gistscience 1 day ago

It is crazy that one google workspace plugin can cause this much damage!

Comment by james-clef 1 day ago

The point I am taking away here is to never use Vercel's environment variables to store secrets.

Comment by jngiam1 1 day ago

I don't get why everything is not marked as sensitive in env vars by default instead.

Comment by ebbi 1 day ago

Ahhh...another product I'm boycotting, and now doubly glad I'm boycotting.

Comment by OsamaJaber 1 day ago

That's why infra needs stricter internal walls than normal SaaS

Comment by _puk 1 day ago

Hmmm, the dashboard 404 I got 6 hours ago now makes a bit more sense..

Comment by 0xy 1 day ago

This is why you pay a real provider for serious business needs, not an AWS reseller. Next.js is a fundamentally insecure framework, as server components are an anti-pattern full of magic leading to stuff like the below. Given their standards for framework security, it's not hard to believe their business' control plane is just as insecure (and probably built using the same insecure framework).

Next.js is the new PHP, but worse, since unlike PHP you don't really know what's server side and what's client side anymore. It's all just commingled and handled magically.

https://aws.amazon.com/security/security-bulletins/rss/aws-2...

Comment by embedding-shape 1 day ago

> Next.js is the new PHP, but worse, since unlike PHP you don't really know what's server side and what's client side anymore. It's all just commingled and handled magically.

It wasn't unheard of back in the day to leak things via PHP templates, like serializing the whole user object, private details included, into a Twig template or whatever; it just kind of happened the other way around. This was before "fat frontend, thin backend" was the prevalent architecture; many built their "frontends" from templates with just sprinkles of JavaScript back then.

Comment by sbarre 1 day ago

People say "Next.js is the new PHP" because it's the most popular and prominent tooling out there, and so by sheer number of available targets it's the one that comes up the most when things go wrong like this.

But there are more people trying to secure this framework and the underlying tools than there would be on some obscure framework or something the average company built themselves.

Also "pay a real provider", what does that mean? Are you again implying that the average company should be responsible for _more_ of their own security in their hosting stack, not less?

Most companies have _zero_ security engineers.. Using a vertically-integrated hosting company like Vercel (or other similar companies, perhaps with different tech stacks - this opinion has nothing to do with Next or Node) is very likely their best and most secure option based on what they are able to invest in that area.

Comment by bakugo 1 day ago

Next.js is the polar opposite of PHP, in a way.

PHP was so simple and easy to understand that anyone with a text editor and some cheap shared hosting could pick it up, but also low level enough that almost nothing was magically done for you. The result was many inexperienced developers making really basic mistakes while implementing essential features that we now take for granted.

Frameworks like Next.js take the complete opposite approach, they are insanely complex but hide that complexity behind layers and layers of magic, actively discouraging developers from looking behind the curtain, and the result is that even experienced developers end up shooting themselves in the foot by using the magical incantations wrong.

Comment by qudat 1 day ago

Totally agree. Nextjs is a vehicle to sell their PaaS, every other feature is a coincidence.

What’s worse is vercel corrupted the react devs and convinced them that RSC was a good idea. It’s not like react was strictly in good hands at Facebook but at least the team there were good shepherds and trying to foster the ecosystem.

Comment by 63stack 1 day ago

PHP had plenty of magic and footguns: magic_quotes, register_globals, mysql_real_escape_string, errors with stacktraces leaking into the HTML output by default, and these are just off the top of my head.

Comment by zrn900 1 day ago

The new PHP? PHP is the same PHP and it's still running 80% of the web to the point that even Reuters, NASA, White House are on it.

Comment by jheitzeb 1 day ago

Missing from Glasswing

Comment by nothinkjustai 1 day ago

Looks like their rampant vibe coding is starting to catch up to them. Expect to see many more vulns like this in the future.

Comment by fragmede 1 day ago

Finally got an email from Vercel saying that my account probably isn't compromised.

7:57 AM Monday, April 20, 2026 Coordinated Universal Time (UTC)

Comment by sergiotapia 1 day ago

Is the calculus breaking for these cloud providers? They are vibe coding at unsustainable speeds and shit is just breaking left and right.

Has anyone made the move to self hosting on their own servers again?

Comment by jimmydoe 1 day ago

what's the cause of the breach?

Comment by raw_anon_1111 1 day ago

Why does anyone running a third party tool have access to all of their clients’ accounts? I can’t imagine something this stupid happening with a real service provider.

I see Vercel is hosted on AWS? Are they hosting everyone on a single AWS account with no tenant isolation? Something this dumb could never happen on a real AWS account. Yes, I know the internal controls that AWS has (former employee).

Anyone who is hosting a real business on Vercel should have known better.

I have used v0 to build a few admin sites. But I downloaded the artifacts, put in a Docker container and hosted everything in Lambda myself where I controlled the tenant isolation via separate AWS accounts, secrets in Secret Manager and tightly scoped IAM roles, etc.

Comment by eddythompson80 1 day ago

Is the AWS security boundary the AWS account? Are you expecting Vercel to provision and manage an AWS account per user? That doesn't make any sense, man, though I suppose it makes sense coming from a former AWS employee.

Comment by raw_anon_1111 1 day ago

Yes the security boundary is the AWS account.

It doesn’t make sense that a random employee mistakenly using a third-party app can compromise all of its users. It’s a poor security architecture.

It’s about as insecure as having one Apache Server serving multiple customer’s accounts. No one who is concerned about security should ever use Vercel.

Comment by eddythompson80 1 day ago

> It’s about as insecure as having one Apache Server serving multiple customer’s accounts.

You really have no clue what you’re talking about, do you? Were you a sales guy at AWS or something?

Comment by icedchai 17 hours ago

He works for an AWS consulting company, where they promote cloud native solutions, driving cloud spend towards AWS. In many cases, managed cloud services are actually the way to go.

However, to say that serving multiple customers with Apache is "insecure" is inaccurate. There are ways to run virtual hosts under different user IDs, providing isolation using more traditional Unix techniques.

Comment by raw_anon_1111 12 hours ago

No, if they said they were running on separate VMs I wouldn’t have any issues.

Absolutely no serious company would run their web software on a shared Apache server with other tenants.

How did that shared hosting work out for Vercel?

Comment by icedchai 11 hours ago

As always, "it depends" on the application. So I've worked for several B2B SaaS companies. None of them used a VM per tenant. In some cases, we used a database (schema...) or DB cluster per tenant.

I've read about the Vercel incident. Given the timeline (22 months?!), it sounds like they had other issues well beyond shared hosting.

Comment by scarface_74 10 hours ago

There is a difference between a SaaS offer where you are running your code and serving multiple customers on one server/set of servers and running random customer code like Vercel.

Comment by icedchai 10 hours ago

I know. I just don't think code isolation was their only issue. I've read about the incident.

Comment by otterley 1 day ago

Hey, knock it off. If you disagree with someone, present a substantive counterargument.

Comment by eddythompson80 1 day ago

Already did. There is no fixing a pretender. Someone arguing akin to “the security boundary of a Linux system is the electrical strip”

Comment by raw_anon_1111 1 day ago

Well, I know that you have never heard of someone using a third-party SaaS product at any major cloud provider compromising all of their customers’ accounts.

Are you really defending Vercel as a hosting platform that anyone should take seriously?

Comment by eddythompson80 1 day ago

How is any of that a defense of Vercel? If you understood how any of this works you’d know that it isn’t. Vercel is a manifestation of what’s wrong with web development, yet it has nothing to do with “creating an AWS account per user account” nor “running a reverse proxy process per user account”.

Comment by raw_anon_1111 1 day ago

Because the same “web development” done with v0, downloaded, put in a Docker container, deployed to Lambda, with fine-grained access control on the attached IAM role (all of which I’ve done) wouldn’t have that problem.

Oh and I never download random npm packages to my computer. I build and run everything locally within Docker containers

It has absolutely nothing to do with “the modern state of web development”, it’s a piss poor security posture.

Again, I know how the big boys do this…

Comment by rvz 1 day ago

There is no serious reason to use Vercel, other than for those being locked into the NextJs ecosystem and demo projects.

Comment by allthetime 1 day ago

I recently got hit by a car on my bike. While I was starting the claim filing process the web portal for ICBC (British Columbia insurance) was acting a little funky / stalling / and then gave me a weird access error. Down at the bottom of the error page was a little grey underlined link that said “vercel”.

I’m not exactly surprised, but it seems like the unserious, ill-informed and lazy are taking over. There is absolutely zero reason why a large, essential public service should be overspending and running on an unnecessary managed service like vercel… yet, here we are.

Comment by jamesfisher 1 day ago

Reminder the Vercel CEO is a genocide supporter, if you need more reasons to move away from it.

Comment by gib444 1 day ago

You forgot the source to backup your claim

Comment by ascorbic 1 day ago

Comment by jofzar 1 day ago

Oof

Comment by zrn900 1 day ago

Crap...

Comment by tamimio 1 day ago

Another win for self-hosters. I host my own Vercel (Coolify) and it works well: it's all under my control, and I only expose what I want.

Comment by beyondscaletech 2 hours ago

[dead]

Comment by michaelksaleme 21 hours ago

[dead]

Comment by ItsClo688 1 day ago

[dead]

Comment by willamhou 1 day ago

[dead]

Comment by victor9000 1 day ago

[dead]

Comment by nryoo 1 day ago

[dead]

Comment by senaevren 21 hours ago

[dead]

Comment by renan_warmling 1 day ago

[dead]

Comment by mrzhangbo 1 day ago

[dead]

Comment by agent-kay 1 day ago

[flagged]

Comment by Yash16 1 day ago

[dead]

Comment by jccx70 1 day ago

[dead]

Comment by ArcherL 1 day ago

[dead]

Comment by mrzhangbo 1 day ago

[dead]

Comment by monirmamoun 1 day ago

[flagged]

Comment by jeromegv 1 day ago

I knew from that moment never to use any Vercel product. If your leadership is that compromised, you know the rest of the ship is heading into a wall.

Comment by sreekanth850 1 day ago

[flagged]

Comment by steve1977 1 day ago

While I would agree, unfortunately with JavaScript vibecoding is not even necessary to run into issues.

Comment by LunaSea 1 day ago

Because Flash apps were so safe.

Comment by scrollaway 1 day ago

Windows 95 was peak security. (/s)

Comment by Bridged7756 1 day ago

In C we don't have those issues.

Comment by ksajadi 1 day ago

[flagged]

Comment by nikcub 1 day ago

he's completely ripped that post from the person who originally found it on breach forums. absolutely shameless.

https://x.com/DiffeKey/status/2045813085408051670

Comment by yogigan 1 day ago

[flagged]

Comment by maxboone 1 day ago

How does that work, when you add an OAuth app, the resulting tokens are specific to that app having a certain set of permissions?

It's not a new attack vector as in giving too many scopes (beyond the usual "get personal details").

I am curious how this external OAuth app managed to move through the systems laterally.

Comment by efilife 1 day ago

LLM comment

Comment by steve1977 1 day ago

I'm not super savvy with OAuth, but shouldn't scopes prevent issues like this?

https://oauth.net/2/scope/

Comment by tgv 1 day ago

From what I understood at [1], Context.ai users "enable AI agents to perform actions across their external applications, facilitated via another 3rd-party service." I.e., it's designed to get someone's OAuth token and use it. Unless that is done really carefully, the risks are as high as the user's authorization goes. The danger doesn't only come from leaks, but also from agents, which can clear your db or directory on a whim.

[1] https://context.ai/security-update

Comment by steve1977 1 day ago

Oof. So much incompetence at so many levels. It's scary.

Comment by highphive 1 day ago

They can mitigate it, if the user refuses to OAuth into something that asks for too much scope. Most users just click "accept" (a claim based on no data at all).

Comment by Maxious 1 day ago

> at least one Vercel employee signed up for the AI Office Suite using their Vercel enterprise account and granted “Allow All” permissions. Vercel’s internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel’s enterprise Google Workspace.

https://context.ai/security-update

Comment by steve1977 1 day ago

So it's not so much a problem with OAuth itself, but with the way it was implemented here?

Comment by owebmaster 1 day ago

Someone from marketing getting full access is absolutely a Vercel failure.

Comment by Nathanba 1 day ago

good point, we think of these OAuth logins as so safe and yet they may be the exact opposite because it's more like logging in with your master password. I think these oauth providers like Microsoft and Google need to start mandating 2FA for every company login, it's just too dangerous otherwise.

Comment by maxboone 1 day ago

How would 2FA help here, you'd still create the compromised OAuth credential with 2FA?

Comment by jongjong 1 day ago

I remember implementing OAuth2 for my platform months ago and I was using the username from the provider's platform as the username within my own platform... But this is a big problem because what if a different person creates an account with the same username on a different platform? They could authenticate themselves onto my platform using that other provider to hijack the first person's account!

Thankfully I patched this issue just before it became a viable exploit because the two platforms I was supporting at the time had different username conventions; Google used email addresses with an @ symbol and GitHub used plain usernames; this naturally prevented the possibility of username hijacking. I discovered this issue as I was upgrading my platform to support universal OAuth; it would have been a major flaw had I not identified this. This sounds similar to the Vercel issue.

Anyway my fix was to append a unique hash based on the username and platform combination to the end of the username on my platform.
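A minimal sketch of the hash-suffix approach described above (hypothetical function name; details of the commenter's actual implementation are unknown): derive the local username from the provider ID, the provider's subject identifier, and the asserted username, so identical usernames from different providers can never collide.

```python
import hashlib


def namespaced_username(provider_id: str, sub: str, username: str) -> str:
    """Derive a local username that cannot collide across OAuth providers.

    Two different providers asserting the same `username` (or even the
    same `sub`) yield different local names, so an account created via
    one provider cannot be hijacked through another.
    """
    digest = hashlib.sha256(
        f"{provider_id}:{sub}:{username}".encode()
    ).hexdigest()[:12]
    return f"{username}-{digest}"
```

For example, "alice" arriving via GitHub and "alice" arriving via Google map to two distinct local accounts, and the same (provider, sub, username) triple always maps back to the same one.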

Comment by maxboone 1 day ago

You should use the subject identifiers, not the usernames. You store a mapping of provider & subject to internal users yourself.

But this has been a problem in the past where people would hijack the email and create a new Google account to sign in with Google with.

Similarly, when someone deletes their account with a provider, someone else can re-register it and your hash will end up the same. The subject identifiers should be unique according to the spec.
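A sketch of the mapping-table approach (an illustrative in-memory class, not any particular library's API): key internal users on the (issuer, subject) pair, which the OIDC spec guarantees is stable and unique per issuer, rather than on reassignable usernames or emails.

```python
class UserDirectory:
    """Map (issuer, sub) pairs to internal user ids.

    The stable key for an OAuth/OIDC identity is the issuer plus
    subject identifier; display usernames and emails can be deleted
    and re-registered by someone else, so they must not be the key.
    """

    def __init__(self) -> None:
        self._by_identity: dict[tuple[str, str], int] = {}
        self._next_id = 1

    def resolve(self, issuer: str, sub: str) -> int:
        """Return the internal user id, creating a user on first sight."""
        key = (issuer, sub)
        if key not in self._by_identity:
            self._by_identity[key] = self._next_id
            self._next_id += 1
        return self._by_identity[key]
```

In a real system the dict would be a database table with a unique constraint on (issuer, sub); the lookup logic is the same.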

Comment by jongjong 1 day ago

Ah yeah but I wanted my platform to provide universal OAuth with any platform (that my app developer user trusts) as OAuth provider. If you rely entirely on subject identifiers; in theory, it gives one platform (OAuth provider) the ability to hijack any account belonging to users authenticating via a different platform; e.g. one platform could fake the subject identifiers of their own platform/provider to intentionally make them match that of target accounts from a different platform/provider.

Now, I realize that this would require a large-scale conspiracy by the company/platform to execute but I don't want to trust one platform with access to accounts coming from a different platform. I don't want any possible edge cases. I wanted to fully isolate them. If one platform was compromised; that would be bad news for a subset of users, but not all users.

If the maker of an application wants to trust some obscure platform as their OAuth provider; they're welcome to. In fact, I allow people running their own KeyCloak instances as provider to do their own OAuth so it's actually a realistic scenario.

This is why I used the hash approach; I have full control over the username on my platform.

[EDIT] I forgot to mention I incorporate the issuer's sub in addition to their username to produce a username with a hash which I use as my username. The key point I wanted to get across here is don't trust one provider with accounts created via a different provider.

Comment by whoamii 1 day ago

Proprietary techniques like this are usually a good indication you’re missing something. In this case it sounds like you are missing appropriate validation of the issuer and/or token itself.

Comment by jongjong 1 day ago

I want to support OAuth2, not OpenID Connect, so I don't rely on a JWT; I call the issuer's endpoint directly from my backend using their official domain name over HTTPS. I use the sub field to avoid re-allocation of usernames/emails, but my point is that I don't trust it on its own; I couple it with the provider ID.

To make it universal, I had to keep complexity minimal and focus on the most supported protocol which is plain OAuth2.
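
A rough sketch of that server-side flow (the provider names and endpoint URLs are illustrative assumptions, not the commenter's configuration): the backend resolves the subject by calling the provider's profile/userinfo endpoint over HTTPS itself, at a server-configured URL, rather than trusting anything the client hands it.

```python
import json
import urllib.request

# Illustrative provider registry; a real deployment would configure these.
# GitHub's plain-OAuth2 profile endpoint returns `id`; OIDC-style providers
# (e.g. a self-hosted Keycloak) return `sub` from their userinfo endpoint.
PROVIDERS = {
    "github": "https://api.github.com/user",
    "acme-keycloak": "https://sso.acme.example/realms/app/protocol/openid-connect/userinfo",
}

def extract_subject(claims: dict) -> str:
    """Pick a stable per-provider identifier out of the profile response."""
    subject = claims.get("sub") or claims.get("id")
    if subject is None:
        raise ValueError("provider response has no stable subject identifier")
    return str(subject)

def fetch_subject(provider_id: str, access_token: str) -> str:
    """Ask the provider directly, over HTTPS at its official domain."""
    url = PROVIDERS[provider_id]  # server-configured, never client-supplied
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return extract_subject(json.load(resp))
```

The returned subject would then be combined with `provider_id` (as in the hashing described above) before being used as a local account key.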

Comment by hansmayer 1 day ago

[flagged]

Comment by neom 1 day ago

https://x.com/theo/status/2045871215705747965 - "Everything I know about this hack suggests it could happen to any host"

He also suggests in another post that Linear and GitHub could also be pwned?

Either way, hugops to all the SRE/DevOps out there, seems like it's going to be a busy Sunday for many.

Comment by phillipcarter 1 day ago

I don't know if I'd trust some random programmer-streamer-influencer on anything other than the topic of streamer-influencing.

Comment by hvb2 1 day ago

The link at the top of the page is to Vercel acknowledging it...

Comment by phillipcarter 1 day ago

Vercel acknowledges a security incident, which nobody is claiming doesn't exist. What they don't acknowledge are this person's vague implications about impact elsewhere.

Comment by embedding-shape 1 day ago

Based on what, "feels like it"? Claiming that Cloudflare is affected by the same hack has to come from somewhere, but where is that coming from?

Comment by gruez 1 day ago

from his "sources".

> Here’s what I’ve managed to get from my sources:

>3. The method of compromise was likely used to hit multiple companies other than Vercel.

https://x.com/theo/status/2045870216555499636

To be fair, journalists often do this too, e.g. "[company] was breached, people within the company claim."

Comment by eddythompson80 1 day ago

Isn’t he a Vercel evangelist though?

Comment by TiredOfLife 1 day ago

He quite publicly is not anymore.

Comment by troupo 1 day ago

He is "whatever gives me a short-term boost in popularity", including doing 180-degree turns on whatever he's evangelizing or bashing.

Comment by eddythompson80 1 day ago

Fair enough. That's probably a better description based on what I've seen from him. I remember that Arc browser shilling.

Comment by Barbing 1 day ago

Good for the content but would sponsors be on board long term?

Comment by brazukadev 1 day ago

Let's see. Roasting Vercel is more popular than defending it, but in his posts so far he seems to be defending it and arguing in the replies.

Comment by troupo 1 day ago

Note: what follows is absolute 100% speculation based on nothing but gut feelings.

Theo has long been a Vercel supporter and was sponsored by them several times. In this case it could be a combination of him being genuinely interested in Vercel (a rare thing) and hopes for future sponsorships.

Comment by brazukadev 21 hours ago

Yes, this is exactly how I see it too, minus the "genuine" part. It's because of money, and for that, he doesn't care about lying.

Comment by recursivegirth 1 day ago

Ah, Theo with his vast insights and connections into everything. That man gets around, and his content is worth its cost.

Theo's content boils down to the same boring formula:

1. Whatever buzzword headline is trending at the time.
2. An immediate sponsored ad that is supposed to make you sympathize with Theo because he "vets" his sponsors.
3. A "that totally happened" story that he somehow always involves himself in personally.
4. An ad for his t3.chat, how it's the greatest thing in the world, and how he should be paid more for his infinite wisdom.
5. A rag on Claude or OpenAI (whichever is leading at the time).
6. 5-10 minutes of paraphrasing an article without critical thought or analysis of the video topic.

I used to enjoy his content when he was still in his Ping era, but it's clear he's drunk the YT marketer kool-aid. I've moved on; his content gets recommended now and again, but I can't entertain his nonsense anymore.

Comment by rubslopes 1 day ago

I just wanted to chime in and say I think he is knowledgeable; he's not a con. I know you didn't say that, but people might have the impression he doesn't know what he's talking about. He does know, and I've learned quite a lot from him in the past.

However, since the LLM Cambrian explosion, he has become very clickbaity, and his content has become shallow. I don't watch his videos anymore.

Comment by sgarland 1 day ago

Not that I ever had confidence in his technical knowledge, but it went to zero when he confidently asserted that there was no possible way a single server could handle the massive traffic some NextJS app he had made was serving. He then posted the bill - which was about $5K IIRC - and I was able to determine from the billed runtime and memory that a modestly-spec’d RPi could in fact handle it.

Comment by well_ackshually 1 day ago

> he's not a con.

When you're putting the bar that low, sure.

He's about as knowledgeable as the junior you hired last week, except that he speaks from a position of authority and gets retweeted by the entire JS slop sphere. He's LinkedIn slop for Gen Z.

Comment by neom 1 day ago

I don't watch his content, but I felt comfortable posting his link as I believe he's generally considered a reputable guy? His tweets sometimes come up in my For You tab and he seems reasonable and knowledgeable generally? Maybe I'm wrong and shouldn't have linked to him as a source.

Comment by steve_adams_86 1 day ago

He's kind of like an LLM in that his content has the surface texture of something substantial, and sometimes it's backed by substance, yet it's often half-true or totally off the mark too. You'll notice if you're previously acquainted with what he's talking about, otherwise he seems to be as you described.

I don't think he's a bad guy or that he's trying to be misleading. I suspect he wants his content to actually carry value, but he produces too much for that to be possible. Primarily he's a performer, not a technologist.

Comment by arabsson 1 day ago

I agree with this comment. YouTube's summarize this video feature has been a godsend when it comes to Theo's videos.

Comment by threetonesun 1 day ago

Nothing on x.com is reputable at this point.

Comment by enra 19 hours ago

Vercel is a Linear customer, that's why Linear was mentioned here.

Linear has not been breached, customer data remains secure, and Linear is not hosted on Vercel.

Comment by neom 19 hours ago

Yeah, that's my mistake - sorry! I just went back and re-read this tweet: https://x.com/theo/status/2045870216555499636?s=20 - I had read it as "via their Linear/GitHub." I should have known better and double-checked what I read when I posted this; please accept my apology. I can't edit my original comment now.

Comment by techpression 1 day ago

”Any host” of what? That’s such a non-descriptive statement and clearly not true at face value.

Comment by rvz 1 day ago

I do remember that OpenAI used Vercel a year ago. They've likely moved off it to something better by now.

Comment by pxc 1 day ago

OpenAI owns Contexts.ai, doesn't it?

Comment by nozzlegear 1 day ago

> @theo: "I have reason to believe this is credible. If you are using Vercel, it’s a good idea to roll your secrets and env vars."

> @ErdalToprak: "And use your own vps or k3s cluster there’s no reason in 2026 to delegate your infra to a middle man except if you’re at AWS level needs"

> @theo: "This is still a stupid take"

lol, okay. Thanks for the insight, Theo, whoever you are.

Comment by uxhacker 1 day ago

What is AWS level needs?

Comment by raw_anon_1111 1 day ago

Hell, doing this with fixed-price AWS Lightsail-based services would be better.

Comment by nozzlegear 1 day ago

You'll have to ask @ErdalToprak on Twitter on that one. I just thought it was funny that this slopfluencer, who's taken money to advertise Vercel, ostensibly believes that using a VPS/k3s is "a stupid take."

Comment by nozzlegear 1 day ago

Theo subscribers didn't like this one

Comment by mikert89 1 day ago

Much as I want to rip on Vercel, it's clear that AI is going to lead to mass security breaches. The attack surface is so large, and AI agents are working around the clock. This is the new normal. Open source software is going to change; companies won't be running random repos off GitHub anymore.

Comment by sph 1 day ago

Your entire recent posting history is "software engineering is over, AI has won."

What's your agenda here?

Comment by nothinkjustai 1 day ago

The guy has like 10 thousand comments boosting AI and 600 karma, whatever his agenda is people aren’t buying it.

Comment by mikert89 1 day ago

how many recent security breaches have we seen?

Comment by hansmayer 1 day ago

Most of the recent issues, including this incident, happened not because smart, superintelligent "agents" are taking over the world - chatbots and other text generators are about as intelligent and powerful as a dead starfish - but because of the combined stupidity of said chatbots and the lazy idiots who use them to hide their own incompetence, producing embarrassing mistakes like this. A few years ago, they would have been fired for exposing secrets in plain text, but since their manager wanted an AI workflow...

Comment by nozzlegear 1 day ago

How many can unequivocally be attributed to malicious AI?

Comment by bossyTeacher 1 day ago

Paid by a Sama minion, I bet.

Comment by Bridged7756 1 day ago

LOL. Attackers will run these agents, but apparently the thousands of maintainers will be so dumb as to sit idly and get hammered with exploits. I wonder what the ratio of attackers to maintainers must be; 1:1000 is a fair assessment, I take it.

Also, LLMs will apparently be used only to attack; no one will be smart enough to integrate them into CI flows, because everyone is that dumb. No defensive security tools will pop up.

Comment by goalieca 1 day ago

Slop coding and makeshift sites being thrown up with abandon at breakneck speeds is going to buy me a lot of minivans.

Comment by tcp_handshaker 1 day ago

>> ai is going to lead to mass security breaches.

Let that be the end of Microsoft. I was forced to use their shitty products for years, by corporate inertia and their free Teams and Azure licenses: the first-dose-is-free curse.

Comment by lijok 1 day ago

ShinyHunters are a phishing group. What does this have to do with AI agents?

Comment by mikert89 1 day ago

Run AI agents around the clock to do hyper-targeted phishing.

Comment by cj 1 day ago

I feel like humans would be better at hyper targeting.

AI agents have the benefit of working at scale, probably "better" used for mass targeting.

Comment by mikert89 1 day ago

This is like saying email marketing is done better if you hand-write every email. That's true, but the hit rate is so low that you are better off generating 1 million hyper-personalized emails and firing them off into the ether.

Comment by mcmcmc 1 day ago

As someone who did the former for a couple years, “better off” is subjective and dependent on your business model, particularly for B2B. It’s a trade off like anything else. You may get more leads, but they may convert at a lower rate. Sending at that scale also increases your risk of email deliverability problems. Trashing your domain has more impacts than you’d think. In smaller, targeted markets it even can damage your business reputation and hurt future sales if done poorly; word gets around.

Comment by cj 1 day ago

If you’re targeting a million people, I wouldn’t consider that a hyper targeted attack.

But I get your point.

Comment by freedomben 1 day ago

I disagree. Many humans are phishing in a different language than their native tongue, and LLMs are way better at sounding legit/professional than many of them. The best spear-phishing will still be humans, but AI definitely raises the bar.