Vercel April 2026 security incident
Posted by colesantiago 1 day ago
Comments
Comment by Vates 1 day ago
At what point do we start asking questions about the concentration of trust in the web ecosystem?
It's funny that at the engineering level we are continuously grilled in interviews about the single responsibility principle, meanwhile the industry's business model is to undermine the entirety of web standards and consolidate the web stack into a CLI.
Comment by isodev 1 day ago
Comment by intrasight 21 hours ago
Comment by moralestapia 21 hours ago
Comment by jongjong 21 hours ago
Comment by brianmcnulty 19 hours ago
Comment by nnurmanov 1 day ago
No, but most breaches today come from compromised internal accounts that are then used to break everything.
Comment by Foobar8568 1 day ago
And that's how I passed for an annoying "PM". Half of program management complained that I was slowing things down, until 6m later the head of risk management told them to get lost.
Comment by ethbr1 23 hours ago
That's why it's important to org-chart engineer for security, if a company is really serious.
Comment by james_marks 22 hours ago
If you answer No and complain that it’s not taken seriously, it’s at least in part because you didn’t show the risk clearly.
Comment by mar_makarov 3 hours ago
Comment by anal_reactor 1 day ago
Comment by cogogo 1 day ago
Comment by piyh 1 day ago
Comment by zbycz 22 hours ago
It is horrendous that AWS doesn't allow any usage limits.
Comment by neya 1 day ago
Comment by igleria 1 day ago
Comment by lofaszvanitt 1 day ago
Comment by Neikius 1 day ago
Comment by vicchenai 1 day ago
Comment by alfiedotwtf 20 hours ago
Comment by agent-kay 1 day ago
Comment by nikcub 1 day ago
Comment by operatingthetan 1 day ago
Comment by Aurornis 1 day ago
It’s interesting how they all use LLMs to write their Reddit posts, too. Some of them could have drawn people in if they'd taken 5 minutes to type an announcement post in their own words, but instead they all have the same LLM-style announcement post. I wonder if they're conversing with the LLM and it told them to post it to Reddit for traction?
Comment by derefr 1 day ago
Comment by ern 1 day ago
Comment by thaumasiotes 1 day ago
What do you see as the distinction between "translating" and "paraphrasing"? All translations are necessarily paraphrased.
Comment by d1sxeyes 1 day ago
Similarly, “j’ai un chat dans la gorge” probably translates best as “I’ve got a frog in my throat”, even though it’s a completely different animal, it’s an obvious mapping.
Those are fairly simple because they have neat English translations, but what about for example “C’est pas tes oignons”, which literally means “these aren’t your onions”, but is really a way of telling someone it’s none of their business. You could translate it as “it’s none of your business”, or “keep your nose out” or “stay in your lane” or lots and lots of other versions, with varying levels of paraphrasing, which depend on context you can’t necessarily read purely from the words themselves.
Comment by thaumasiotes 22 hours ago
> Similarly, “j’ai un chat dans la gorge” probably translates best as “I’ve got a frog in my throat”, even though it’s a completely different animal, it’s an obvious mapping.
Those obvious mappings can sometimes be too seductive for the translator's good. One example is that people translating English-loanwords-in-a-foreign-language into English usually can't help but translate them as the original English word.
Another example is that, in China, there is a cultural concept of a 狐狸精, which you might translate as "fox spirit". (The "fox" part of the translation is straightforward, but 精 is a term for a supernatural phenomenon, and those are difficult to translate.) They can do all kinds of things, but one especially well-known behavior is that they may take the form of human women and seduce (actual) human men. This may or may not be harmful to the man.
Because of this concept, the word also has a sense in which it may be used to insult a (normal) woman, accusing her of using her sex appeal toward harmful ends.
Chinese people translating this into English almost always use the word "vixen", which is, to be fair, a word that may refer to a sexy human woman or to a female fox. But I really don't feel that they're equivalent, or even that they have much overlap. (Unlike the situation with English loanwords, I think native speakers of Chinese are much more likely to choose this translation than native speakers of English are.)
> what about for example “C’est pas tes oignons”, which literally means “these aren’t your onions”
The form closest in structure to that would probably be "none of your beeswax", which is just a minorly altered version of "none of your business". I assume the substitution of "beeswax" is humorous and based on phonetic similarity.
As you note, there are multiple dimensions relevant to translating this and several positions you could take along each. For this particular idea, I would say the two most important dimensions are playfulness and rudeness; it's a very common idea and the language is rich in options for both.
> translations often vary in terms of how faithful they are to the source vs how idiomatic they are in the target language. Take for example the French phrase “j’ai fait une nuit blanche”, which literally means “I did a white night”. Clearly that’s a bad translation. A more natural translation might be “I pulled an all-nighter”.
This isn't what I had in mind. Here are some idiomatic translations:
I pulled an all-nighter.
I was up all night.
I didn't get any sleep.
I never got to bed.
I've been up since [something appropriate to the context].
[Something appropriate to the context] kept me up all night.
I wouldn't call any of the first four "more paraphrased" than the others. (The last two might be, if they included extra information.) If these were reports of the English speech of some other person, one of them (or less) would be a quote, and the others would be paraphrases. But as a report of French speech, they're all paraphrases. The first shares a little more grammatical structure with the French, which doesn't really mean much.
For a fairly similar example from my personal life, someone said to me 这是我第一次听说, and my spontaneous translation of it was "I've never heard that before", despite the fact that there is technically a perfectly valid English expression "this is the first I've heard of that".
What's closer to the grammatical structure of the Chinese? That's hard for me to say. You could analyze 我 as the subject of 听说, and I lean toward that analysis, but my instincts for Mandarin are weak. You might see 我 as being more strongly attached to 第一次, meaning something more like "my first time (to hear ...)" than "I hear (for the first time) ...".
But for whatever it's worth, a word by word literal gloss would be "this is me first time hear".
Between languages with less historical interaction than English and French, it's quite possible that a syntax-preserving translation of some sentence just doesn't exist.
Comment by politelemon 1 day ago
Comment by cyral 1 day ago
Comment by aquariusDue 1 day ago
Comment by fantasizr 1 day ago
Comment by jongjong 1 day ago
I actually don't doubt the competence of the Vercel team, and that's the point. Imagine this happening to a top company that has its pick of the best engineers on a global scale.
My experience with modern startups is that they're essentially all vulnerable to hacks. They just don't have the time to actually verify their infra.
Also, almost all apps are over-engineered. It's impossibly difficult to secure an app with hundreds of thousands of lines of code and 20 or so engineers working on the backend code in parallel.
Some people are like "Why didn't they encrypt all this?" That's a naive way to think about it. The platform has to decrypt the tokens at some point in order to use them. The best we can do is store the tokens and roll them over frequently.
If you make the authentication system too complex, with too many layers of defense, you create a situation where users will struggle to access their own accounts... And you only get marginal security benefits anyway. Some might argue the complexity creates other kinds of vulnerabilities.
Comment by fantasizr 21 hours ago
Comment by MrDarcy 1 day ago
Comment by 00deadbeef 1 day ago
Comment by gbgarbeb 1 day ago
Comment by seattle_spring 1 day ago
Comment by dzonga 1 day ago
see [0]: Rails security Audit Report
Comment by bdcravens 1 day ago
Comment by boringg 1 day ago
Comment by guelo 1 day ago
Comment by telotortium 1 day ago
Comment by ern 1 day ago
Claude Code can produce exactly what I want, quickly.
The difference is that I don't really share my projects. People who share them probably haven't realized that code has become cheap, and no one really needs/wants to see them since they can just roll their own.
Comment by lionkor 1 day ago
Comment by michaelbuckbee 22 hours ago
Comment by echelon 1 day ago
Protection money from Vercel.
"Pay us 10% of revenue or we switch to generating Netlify code."
Comment by JLO64 1 day ago
Comment by slopinthebag 1 day ago
Comment by serhalp 1 day ago
Comment by slopinthebag 1 day ago
Comment by brazukadev 13 hours ago
Comment by neilv 1 day ago
So I told it something like, "don't use anything node at all", and it immediately rewrote it as a Python backend, volunteering that it was minimizing dependencies in how it did that.
[1] only vibe coding as an exercise for a throwaway artifact; I'm not endorsing vibe coding
Comment by BigTTYGothGF 1 day ago
You don't have to live like this.
Comment by neilv 1 day ago
Comment by t0mas88 1 day ago
I've heard others had similar results with .NET/C#
Comment by TeMPOraL 1 day ago
Comment by desecratedbody 1 day ago
Comment by jazzypants 21 hours ago
Comment by siva7 1 day ago
Comment by echelon 1 day ago
Switch to vibe coding Rust backends and freeze your supply chain.
Super strong types. Immaculate error handling. Clear and easy to read code. Rock solid performance. Minimal dependencies.
Vibe code Rust for web work. You don't even need to know Rust. You'll osmose it over a few months using it. It's not hard at all. The "Rust is hard" memes are bullshit, and the "difficult to refactor" was (1) never true and (2) not even applicable with tools like Claude Code.
Edit: people hate this (-3), but it's where the alpha is. Don't blindly dismiss this. Serializing business logic to Rust is a smart move. The language is very clean, easy to read, handles errors in a first class fashion, and fast. If the code compiles, then 50% of your error classes are already dealt with.
Python, Typescript, and Go are less satisfactory on one or more of these dimensions. If you generate code, generate Rust.
Comment by neilv 1 day ago
Comment by jazzypants 21 hours ago
Comment by slopinthebag 1 day ago
But you're also correct in that Rust can actually be written in a more high-level way, especially for the web, where you have very little shared state and the state that is shared can just be wrapped in Arc<> and put in the web framework's context. It's actually dead easy to spin up web services in Rust, and there's a great set of ORMs if that's your vibe too. Rust is expressive enough to make schema-as-code work well.
On dependencies: if you're concerned about the possibility of future supply chain attacks (because Rust doesn't have a history like Node's), you can vendor your deps and bypass future problems. `cargo vendor` and you're done. Node has no such ergonomic path to vendoring, which imo is a better solution than anything else besides maybe Go (another great option for web services!). Saying "don't use deps" doesn't work for any language other than something like Go (and you can run `go mod vendor` as well).
But yeah, in today's economy, where compute and especially memory are becoming more constrained thanks to AI, I really like the peace of mind of knowing my unoptimised high-level Rust web services run with minimal memory and compute requirements, and further optimisation doesn't require a rewrite in a different language.
Idk mate, I used to be a big Rust hater, but once I gave the language a serious try I found it more pleasant to write than both TypeScript and Go. And it's very amenable to AI, if that's your vibe(-coding), since the static guarantees of the type system make it easier for AI to generate correct code, and the diagnostic messages allow it to reroute its course during the session.
Comment by OptionOfT 1 day ago
Comment by Imustaskforhelp 1 day ago
I once made a multi-person Pomodoro app in Go by vibe coding with Gemini 3.1 Pro (on the day it first launched). I asked it to use only one outside dependency, Gorilla WebSocket, with everything else from the standard library, and then I deployed it to Hugging Face Spaces for free.
I definitely recommend Go as a language if you wish to vibe code. Some people recommend Rust, but Go compiles fast, cross-compiles easily, produces portable binaries, and is really awesome with its standard library.
(Anecdotally, I also feel there's some chance the models are being diluted: this app has become my benchmark test, and other models have performed somewhat worse on it, or at least not the same, to be honest. I've been using Hacker News less frequently, but I was already seeing suspicions like these about Claude and other models on the front page iirc. I don't know enough about Claude Opus 4.7, I just read simon's comment on it, so it would be cool if someone could give me a gist of what has been happening for the past few days.)
Comment by nightski 1 day ago
Comment by gommm 1 day ago
> When a request leaves minor details unspecified, the person typically wants Claude to make a reasonable attempt now, not to be interviewed first. Claude only asks upfront when the request is genuinely unanswerable without the missing information (e.g., it references an attachment that isn’t there).
> When a tool is available that could resolve the ambiguity or supply the missing information — searching, looking up the person’s location, checking a calendar, discovering available capabilities — Claude calls the tool to try and solve the ambiguity before asking the person. Acting with tools is preferred over asking the person to do the lookup themselves.
> Once Claude starts on a task, Claude sees it through to a complete answer rather than stopping partway. [...]
In my experience before this change, Claude would stop, give me a few options, and 70% of the time I would give it an unlisted option that was better. It would genuinely identify parts of the specs that were ambiguous and needed to be better defined. With the new change, Claude plows ahead making a stupid decision, and the result is much worse for it.
Comment by dennisy 1 day ago
However it is less clear on how to do this, people mostly take the easiest path.
Comment by operatingthetan 1 day ago
Comment by alex7o 1 day ago
Comment by rpcope1 1 day ago
Comment by fragmede 1 day ago
Comment by gommm 1 day ago
Comment by egeozcan 1 day ago
> b. (Recommended) Do something that works now, you can always make it better later
Comment by duped 1 day ago
Comment by dennisy 1 day ago
Comment by duped 1 day ago
Comment by liveoneggs 1 day ago
Comment by hansmayer 1 day ago
Comment by duped 1 day ago
Comment by pastel8739 1 day ago
Comment by lionkor 1 day ago
Comment by neal_jones 1 day ago
The internet does that but it feels different with this
Comment by themafia 1 day ago
That's a funny way of saying "race to the bottom."
> The internet does that but it feels different with this
How does "the internet do that?" What force on the internet naturally brings about mediocrity? Or have we confused rapacious and monopolistic corporations with the internet at large?
Comment by walthamstow 1 day ago
Comment by slashdave 1 day ago
Stack exchange. Google.
Comment by mentalgear 1 day ago
Comment by neither_color 23 hours ago
Comment by ethbr1 23 hours ago
Comment by deaux 1 day ago
Comment by betocmn 1 day ago
Comment by lmm 1 day ago
Comment by leduyquang753 1 day ago
Comment by elric 1 day ago
Comment by habinero 1 day ago
It is true that "more diversity in code" probably means less turnkey spray-and-pray compromises, sure. Probably.
It also means that the models themselves become targets. If your models start building the same generated code with the same vulnerability, how're you gonna patch that?
Comment by kay_o 1 day ago
This situation is pretty funny to me. Some of my friends who aren't technical tried vibe coding, showed me what they built, and asked for feedback.
I noticed they were using Supabase by default, and pointed out that their database was completely open with no RLS.
So I told them not to use Supabase in that way, and they asked the AI (various different LLMs) to fix it. One example prompt I saw was: "please remove Supabase because of the insecure data access and make a proper secure way".
Keep in mind, these people don't have a technical background and don't know what Supabase or Node or Python is. They let the LLM install Docker, install Node, etc., just hitting approve on "Do you want to continue? bash(brew install ..)"
What's interesting is that this happened multiple times with different AI models. Instead of fixing the problem the way a developer normally would, like moving the database logic to the server or creating proper API endpoints, it tried to recreate an emulation of Supabase, specifically PostgREST, in a much worse and less secure way.
The result was an API endpoint that looked like: /api/query?q=SELECT * FROM table WHERE x
In one example, GLM later bolted on a huge "security" regular expression that blocked "admin", "updateadmin", "^delete*" lol
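For what it's worth, the gap between that generated endpoint and the normal fix is small. Here's a minimal sketch (Python with sqlite3; the table and the regex are made up for illustration) of why a keyword blocklist over raw SQL can't work, versus a fixed server-side query with bound parameters:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "alice", 1), (2, "bob", 0)])

# The generated anti-pattern: clients send raw SQL, and a regex blocklist
# tries to catch "dangerous" keywords before executing the query verbatim.
BLOCKLIST = re.compile(r"\b(delete|drop|admin)\b", re.IGNORECASE)

def unsafe_query(q: str):
    if BLOCKLIST.search(q):
        raise ValueError("blocked")
    return conn.execute(q).fetchall()

# Sails straight past the blocklist: "admin" inside "is_admin" never sits
# on a word boundary, and any SELECT can read any table anyway.
leak = unsafe_query("SELECT name FROM users WHERE is_admin = 1")

# The normal fix: a fixed query on the server; client input is only ever
# bound as a parameter, never spliced into the SQL text.
def get_user(name: str):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()
```

An injection attempt like `get_user("' OR 1=1 --")` simply returns no rows, because the whole string is treated as a value rather than as SQL.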
Comment by sans_souse 1 day ago
This entire process is something anyone can test and reproduce; I was definitely steered towards both vercel and supabase by gemini. It isn't model specific.
Comment by habinero 1 day ago
Ahhhhhhhgh. If I ever make that cybersecurity house of horrors, that's going in it
Comment by slashdave 1 day ago
Comment by jongjong 1 day ago
I know what it's like being on the opposite side of this as I maintain an open source project which I started almost 15 years ago and has over 6k GitHub stars. It's been thoroughly tested and battle-tested over long periods of time at scale with a variety of projects; but even if I try to use exact sentences from the website documentation in my AI prompt (e.g. Claude), my project will not surface! I have to mention my project directly by name and then it starts praising it and its architecture saying that it meets all the specific requirements I had mentioned earlier. Then I ask the AI why it didn't mention my project before if it's such a good fit. Then it hints at number of mentions in its training data.
It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.
I feel like some companies have been paying people to upvote/like certain answers in AI-responses with the intent that those upvotes/likes would lead to inclusion in the training set for the next cutting-edge model.
It's a hard problem to solve. I hope Anthropic finds a solution because they have a great product and it would be a shame for it to devolve into a free advertising tool for select few tech platforms. Their users (myself included) pay them good money and so they have no reason to pander to vested interests other than their own and that of their customers.
Comment by lelanthran 1 day ago
That's literally what "weight" means - not all dependencies have the same %-multiplier to getting mentioned. Some have a larger multiplier and some have a smaller (or none) multiplier. That multiplier is literally a weight.
Comment by mvkel 1 day ago
That lack of diversity also makes patches more universal, and the surface area more limited.
Comment by btown 1 day ago
Comment by stefan_ 1 day ago
Comment by andersmurphy 1 day ago
Comment by egeozcan 1 day ago
Comment by andersmurphy 1 day ago
These libraries/frameworks are not insecure because of bad design and dependency bloat. No! It's because a mythical LLM is so powerful that it's impossible to defend against! There was nothing that could be done.
Comment by antonvs 1 day ago
Comment by Something1234 1 day ago
Comment by egeozcan 1 day ago
I really like it. Recommended.
Comment by wonnage 1 day ago
Comment by nettlin 1 day ago
> Indicators of compromise (IOCs)
> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.
> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.
> OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
https://vercel.com/kb/bulletin/vercel-april-2026-security-in...
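If you're checking your own Workspace for this IOC, a small script over exported token-audit events is enough. A minimal sketch in Python; the field names here (`actor_email`, `event_name`, `client_id`) are illustrative, not the exact Admin SDK schema, so adjust them to whatever your export actually contains:

```python
# The IOC from Vercel's bulletin: the compromised OAuth app's client ID.
IOC_CLIENT_ID = ("110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
                 ".apps.googleusercontent.com")

def find_ioc_grants(events):
    """Return (user, event) pairs for audit entries referencing the IOC.

    `events` is an iterable of dicts shaped like Workspace token-audit
    entries; the key names are placeholders for this sketch.
    """
    return [(e.get("actor_email"), e.get("event_name"))
            for e in events
            if e.get("client_id") == IOC_CLIENT_ID]

# Toy data standing in for a real audit-log export.
sample = [
    {"actor_email": "dev@example.com", "event_name": "authorize",
     "client_id": IOC_CLIENT_ID},
    {"actor_email": "ops@example.com", "event_name": "authorize",
     "client_id": "some-other-app.apps.googleusercontent.com"},
]
hits = find_ioc_grants(sample)
```

Any hit means that user granted the compromised app access and should have the grant revoked and credentials rotated.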
Comment by ryanscio 1 day ago
> A Vercel employee got compromised via the breach of an AI platform customer called http://Context.ai that he was using.
> Through a series of maneuvers that escalated from our colleague’s compromised Vercel Google Workspace account, the attacker got further access to Vercel environments.
> We do have a capability however to designate environment variables as “non-sensitive”. Unfortunately, the attacker got further access through their enumeration.
> We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel.
Still no email blast from Vercel alerting users, which is concerning.
Comment by _pdp_ 1 day ago
Blame it on AI ... trust me... it would have never happened if it wasn't for AI.
Comment by gherkinnn 1 day ago
Reads like the script of a hacker scene in CSI. "Quick, their mainframe is adapting faster than I can hack it. They must have a backdoor using AI gifs. Bleep bleep".
Comment by cowsup 1 day ago
On the one hand, I get that it's a Sunday, and the CEO can't just write a mass email without approval from legal or other comms teams.
But on the other hand... It's Sunday. Unless you're tuned-in to social media over the weekend, your main provider could be undergoing a meltdown while you are completely unaware. Many higher-up folks check company email over the weekend, but if they're traveling or relaxing, social media might be the furthest thing from their mind. It really bites that this is the only way to get critical information.
Comment by gk1 1 day ago
This is not how things work. In a crisis like this there is a war room with all stakeholders present. Doesn’t matter if it’s Sunday or 3am or Christmas.
And for this company specifically, Guillermo is not one to defer to comms or legal.
Comment by brobdingnagians 1 day ago
Comment by huflungdung 1 day ago
Comment by loloquwowndueo 1 day ago
They can be brought in to do their job on a Sunday for an event of this relevance. They can always take next Friday off or something.
Comment by eclipticplane 1 day ago
Comment by loloquwowndueo 1 day ago
Comment by eclipticplane 1 day ago
Comment by lelanthran 1 day ago
For most secrets they are under your control so, sure, go ahead and rotate them, allowing the old version to continue being used in parallel with the new version for 30 minutes or so.
For other secrets, rotation involves getting a new secret from some upstream provider and having some services (users of that secret) fail while the secret they have in cache expires.
For example, if your secret is a Stripe key, generating a new key should invalidate the old one (not too sure, I don't use Stripe), at which point the services with the cached secret will fail until the expiry.
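The overlap window described above is easy to sketch: keep the previous secret verifiable for a grace period after rotation so downstream caches don't hard-fail. A toy Python version (timestamps are passed explicitly for clarity; real code would also use a constant-time comparison like `hmac.compare_digest`):

```python
import time

class RotatingSecret:
    """Accept the previous secret for a grace window after rotation,
    so services holding a cached copy keep working while they refresh."""

    def __init__(self, secret: str, overlap_seconds: float = 1800):
        self.current = secret
        self.previous = None
        self.previous_expiry = 0.0
        self.overlap = overlap_seconds

    def rotate(self, new_secret: str, now: float = None):
        now = time.time() if now is None else now
        # The old secret stays valid until now + overlap.
        self.previous = self.current
        self.previous_expiry = now + self.overlap
        self.current = new_secret

    def verify(self, presented: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        if presented == self.current:
            return True
        return (self.previous is not None
                and presented == self.previous
                and now < self.previous_expiry)

s = RotatingSecret("old-token", overlap_seconds=1800)
s.rotate("new-token", now=1000.0)
s.verify("old-token", now=1500.0)   # inside the 30-minute grace window
s.verify("old-token", now=3000.0)   # window elapsed, old token rejected
```

For upstream-issued secrets (the Stripe-style case) you can't implement this yourself; it only works if the provider supports issuing a second key before revoking the first.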
Comment by ItsClo688 1 day ago
If the attacker is moving with "surprising velocity," every hour of delay on an email blast is another hour the attacker has to use those potentially stolen secrets against downstream infrastructure. Using Twitter/X as a primary disclosure channel for a "sophisticated" breach is amateur hour. If legal is the bottleneck for a mass email during an active compromise, then your incident response plan is fundamentally broken.
Comment by steve1977 1 day ago
Wouldn't the CEO be... you know... the chief executive?
Comment by hvb2 1 day ago
Top leaders excel because they assemble a team around them that they trust. You can't do everything yourself; you need to delegate. And having people in those positions also means you shouldn't act alone, or those people will not stick around.
Comment by steve1977 1 day ago
Now I will agree that there are many executives like the ones you describe. But they are not top leaders.
Comment by scott_w 1 day ago
Comment by steve1977 1 day ago
And yeah, I would expect a CEO to have enough legal knowledge to handle such a situation (customer communication) on his own.
But I also have to mention that I'm not in the US. Not every country has the US litigation system, where you can basically destroy a company because you as the customer were too dumb not to spill hot coffee over yourself.
Comment by arvyy 1 day ago
presuming you're referring to the hot coffee lawsuit, maybe read details of the story. McDonalds wasn't at all blameless, and the plaintiff had reasonable demands
Comment by scott_w 1 day ago
Should the CEO also bang out some dev estimates for the roadmap because, hey, they should be competent enough to do something like that. Why not submit the accounts for the year? How hard can it be, just reading a few lines off their Sage or Quickbooks accounts?
Comment by scott_w 1 day ago
Comment by Orygin 22 hours ago
What is the use of a CEO if not to have enough depth of knowledge about the different aspects of running a business?
Like what? Poor little CEO that doesn't understand anything about the world and how to run a company. Seems like helplessness is expected at every stage.
Comment by scott_w 22 hours ago
Bit of a difference between “having depth of knowledge in their business” and “can speak off-the-cuff with the necessary accuracy to remain in compliance with every contract and legal jurisdiction their organisation is engaged in, without consulting the numerous domain experts they employ for just this purpose,” isn’t there.
Also, such a situation that requires the CEO’s direct attention has already gone FAR beyond your standard incidents where you can throw out a pre written statement. Do you want your organisation just cuffing it from the top down? Are you Elon Musk in disguise?
Comment by refulgentis 1 day ago
Comment by nnurmanov 1 day ago
Comment by gnabgib 1 day ago
Comment by wombatpm 1 day ago
Comment by ptx 1 day ago
Hmm? Who is the customer in this relationship? Is Vercel using a service provided by Context.ai which is hosted on Vercel?
Comment by pier25 20 hours ago
Comment by UltraSane 1 day ago
Comment by loloquwowndueo 1 day ago
Comment by progbits 1 day ago
Comment by tom1337 1 day ago
Comment by loloquwowndueo 1 day ago
Comment by cebert 1 day ago
Comment by pottertheotter 1 day ago
Comment by sroussey 1 day ago
Comment by SaltyBackendGuy 1 day ago
Comment by brookst 1 day ago
Better to report 100% known things quickly. People can figure it out with near zero effort, and it reduces one tiny bit of potential liability in the ops shitstorm they’re going through.
Comment by mcdow 1 day ago
Comment by newdee 1 day ago
Comment by slopinthebag 1 day ago
This feels like a natural consequence of the direction web development has been going for the last decade, where it's normalised to wire up many third party solutions together rather than building from more stable foundations. So many moving parts, so many potential points of failure, and as this incident has shown, you are only as secure as your weakest link. Putting your business in the hands of a third party AI tool (which is surely vibe-coded) carries risks.
Is this the direction we want to continue in? Is it really necessary? How much more complex do things need to be before we course-correct?
Comment by lijok 1 day ago
We need a different hosting model.
Comment by pianopatrick 1 day ago
Instead of "programs that do one thing and do it well", "write programs which are designed to be used together", and "write programs to handle text streams", I might go with a foundational philosophy like "write programs that do not trust the user or the admin", because in applications connected to the internet, both groups often make mistakes or are malicious. Also something like "write programs that are strict about which inputs they accept", because a lot of input is malicious.
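"Strict on which inputs they accept" is basically allowlist parsing at the boundary. A minimal Python sketch (the username shape is an arbitrary example): reject everything that doesn't match the narrow expected form, rather than trying to sanitize hostile input after the fact.

```python
import re

# Allowlist: lowercase letter, then 2-31 lowercase letters/digits/underscores.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def parse_username(raw: str) -> str:
    """Reject-by-default parsing: accept only the exact shape we expect
    and refuse everything else with an error."""
    if not isinstance(raw, str):
        raise TypeError("username must be a string")
    candidate = raw.strip()
    if not USERNAME_RE.fullmatch(candidate):
        raise ValueError(f"invalid username: {candidate!r}")
    return candidate
```

So `parse_username(" alice_01 ")` normalizes to `"alice_01"`, while anything resembling an injection payload raises instead of being passed along half-cleaned. The contrast with a denylist (enumerate the bad things, let everything else through) is exactly the failure mode seen in the vibe-coded regex blocklists elsewhere in this thread.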
Comment by mpyne 1 day ago
It was also a different model on ownership and vetting of those focused tools. It might have been a model of having the single source tree of an old UNIX or BSD, where everything was managed as a coherent whole from grep to cc all the way to X11. Or it might have been the Linux distribution model of having dedicated packagers do the vetting to piecemeal packages into more of a bazaar, even going so far as to rip scripting language bundles into their component pieces as for Python and Perl.
But in both of those models you were put farther away from the third-party authors bringing software into the open-source (and proprietary) supply chains.
This led to a host of issues with getting new software to users and with a fractal explosion of different versions of software dependencies to potentially have to work around, which is one reason we saw the explosion of NPM and Cargo and the like. Especially once Docker made it easy to go straight from stitching an app together with NPM on your local dev seat to getting it deployed to prod.
But the issue isn't with focused tooling as much as it is with hewing more closely to the upstream who could potentially be subverted in a supply chain attack.
After all, it's not as if people never tried to do this with Linux distros (or even the Linux kernel itself -- see for instance https://linux.slashdot.org/story/03/11/06/058249/linux-kerne... ). But the inherent delay and indirection in that model helped make it less of a serious risk.
But even if you only use 1 NPM package instead of 100, if it's a big enough package you can assume it's going to be a large target for attacks.
Comment by lelanthran 1 day ago
GP said it's about taking the Unix philosophy to extremes, you say something different.
Anything taken to extremes is bad; the key word there is "extremes". There is nothing wrong with the Unix philosophy, as "do one thing and do it well" never meant "thousands of dependencies over which you have no control, pulled in without review or thought".
Comment by uecker 1 day ago
Comment by steve1977 1 day ago
Comment by esseph 1 day ago
There really isn't an option here, IMO.
1. Somebody does it
2. You do it
Much happier doing it myself tbh.
Comment by fragmede 1 day ago
Comment by 0xbadcafebee 1 day ago
Imagine if cars were developed like websites, with your brakes depending on a live connection to a 3rd party plugin on a website. Insanity, right? But not for web businesses people depend on for privacy, security, finances, transportation, healthcare, etc.
When the company's brakes go out today, we all just shrug, watch the car crash, then pick up the pieces and continue like it's normal. I have yet to hear a single CEO issue an ultimatum that the OWASP Top 10 (just an example) will be prevented by X date. Because they don't really care. They'll only lose a few customers and everyone else will shrug and keep using them. If we vote with our dollars, we've voted to let it continue.
Comment by slopinthebag 1 day ago
Comment by bdangubic 1 day ago
Comment by arcfour 1 day ago
And my own kernel. Can't trust some shit written by a Finnish dude 30 years ago.
And my own UEFI firmware. Definitely can't trust some shit written by my hardware vendor ever.
Comment by slopinthebag 1 day ago
Comment by eddythompson80 1 day ago
The AI maximalists would argue that the only way is through more AI. Vibe code the app, then ask an LLM to security review it, then vibe code the security fixes, then ask the LLM to review the fixes and app again, rinse and repeat in an endless loop. Same with regressions, performance, features, etc. stick the LLM in endless loops for every vertical you care about.
Pointing to failed experiments like the browser or compiler ones somehow doesn't seem to deter AI maximalists. They would simply claim they needed better models/skills/harness/tools/etc. The goalpost is always one foot away.
Comment by uecker 1 day ago
Comment by rzzzt 1 day ago
Comment by arcfour 1 day ago
You can write good and bad code with and without AI, on a managed service, self-hosted, or something in between.
And the comment I was replying to said something about not trusting something written in Akron, OH 2 years ago, which makes no sense and is barely an argument, and I was mostly pointing out how silly that comment sounds.
Comment by eddythompson80 1 day ago
There is no “I wrote this code with some AI assistance” when you’re sending a 2k line change PR 8 minutes after I gave you permission on the repo. That’s the type of shit I’m dealing with, and management is ecstatic at the pace and progress, and the person just looks at you and says “anything in particular that’s wrong or needs changing? I’m just asking for a review and feedback”
Comment by slopinthebag 1 day ago
Regarding the unix philosophy argument, comparing it to AI tools just doesn't make any sense. If you look at what the philosophy is, it's obvious that it doesn't just boil down to "use many small tools" or "use many dependencies"; it's so different that it's not even wrong [0].
In their Unix paper of 1974, Ritchie and Thompson quote the following design considerations:
- Make it easy to write, test, and run programs.
- Interactive use instead of batch processing.
- Economy and elegance of design due to size constraints ("salvation through suffering").
- Self-supporting system: all Unix software is maintained under Unix.
In what way does that correspond to "use dependencies" or "use AI tools"? This was then formalised later to
- Write programs that do one thing and do it well.
- Write programs to work together.
- Write programs to handle text streams, because that is a universal interface.
This has absolutely nothing in common with pulling in thousands of dependencies or using hundreds of third party services.
Then there is the argument that "AI is just a higher level compiler". That is akin to me saying that "AI is just a higher level musical instrument" except it's not, because it functions completely differently to musical instruments and people operate them in a completely different way. The argument seems to be that since both of them produce music, in the same way both a compiler and LLM generate "code", they are equivalent. The overarching argument is that only outputs matter, except when they don't because the LLM produces flawed outputs, so really it's just that the outputs are equivalent in the abstract, if you ignore the concrete real-world reality. Using that same argument, Spotify is a musical instrument because it outputs music, and hey look, my guitar also outputs music!
Comment by brookst 1 day ago
Comment by arcfour 1 day ago
Comment by steve1977 1 day ago
Who is Apple?
Comment by DASD 1 day ago
Comment by ivansenic 1 day ago
1) Vercel rolled out sensitive secrets on February 1, 2024. Why weren't all existing env vars transitioned to the sensitive type? Why was there any assumption that a secret added as an env var before that date was still OK to be left as "non-sensitive"?
2) How was the Google Workspace account actually compromised? If context.ai was the originating issue, what actually led to the takeover? Were too many access privileges given to the Google Workspace token context.ai had, or was there an actual workstation takeover?
3) And finally, why the heck did a compromised Google Workspace account lead to someone having access to a bunch of customer projects? Where is the connection? I don't get this.
Comment by tetrakai 23 hours ago
1. One or more Vercel employees - likely engineers - grant OAuth access to context.ai. They presumably did this for office-suite style features, but the OAuth request included a GCP grant for some reason, maybe laziness on context.ai's part or planned future features? Either way, Google's OAuth flow has little differentiation between "office suite" scopes and "cloud platform" scopes, so this may not have been particularly obvious to those at Vercel
2. context.ai's AWS account was compromised (unspecified how), and the Google OAuth tokens they had for customer accounts, including those for at least one Vercel employee, were taken
3. Those OAuth token(s) were used to authenticate to the GCP APIs as those Vercel employees, then allowing access to Vercel's DBs, and therefore access to customer data and secrets
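The office-suite vs. cloud-platform scope distinction in step 1 can be checked programmatically. A minimal sketch, assuming a payload shaped like Google's `tokeninfo` response (where `scope` is a space-delimited string); the sample values are illustrative, not from the incident:

```python
# Flag OAuth grants that mix cloud-platform scopes in with office-suite ones.
# The payload shape mirrors Google's tokeninfo endpoint response; the sample
# values below are illustrative.

RISKY_SCOPES = {
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/cloud-platform.read-only",
}

def risky_grants(tokeninfo: dict) -> set:
    """Return any cloud-platform scopes present in a tokeninfo payload."""
    granted = set(tokeninfo.get("scope", "").split())
    return granted & RISKY_SCOPES

sample = {
    "aud": "example-app.apps.googleusercontent.com",
    "scope": "openid email https://www.googleapis.com/auth/cloud-platform",
}
print(risky_grants(sample))  # the cloud-platform scope stands out
```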
Comment by ethbr1 23 hours ago
Context.ai employee searches for Roblox exploits on web
-> Context.ai support access breached by malware
-> Vercel privileged employee account who uses Context.ai breached
-> Vercel customer secrets breached
Tl;dr - insufficient endpoint protection and activity detection at Context.ai (big surprise!) + insufficient privileged account isolation at Vercel
Comment by pier25 20 hours ago
Comment by toddmorey 1 day ago
Something happened, we won't say what, but it was severe enough to notify law enforcement. What floors me is the only actionable advice is to "review environment variables". What should a customer even do with that advice? Make sure the variables are still there? How would you know if any of them were exposed or leaked?
The advice should be to IMMEDIATELY rotate all passwords, access tokens, and any sensitive information shared with Vercel. And then begin to audit access logs, customer data, etc, for unusual activity.
The only reason to dramatically overpay for the hosting resources they provide is because you expect them to expertly manage security and stability.
I know there is a huge fog of uncertainty in the early stages of an incident, but it spooks me how intentionally vague they seem to be here about what happened and who has been impacted.
Comment by birdsongs 1 day ago
Comment by shimman 1 day ago
Oh and the owner likes to proudly remind people about his work on Google AMP, a product that has done major damage to the open web.
This is who they are: a bunch of incompetent engineers that play with pension funds + gulf money.
Comment by throwanem 1 hour ago
Comment by 1970-01-01 1 day ago
Comment by salomonk_mur 1 day ago
Comment by btown 1 day ago
> Environment variables marked as "sensitive" in Vercel are stored in a manner that prevents them from being read, and we currently do not have evidence that those values were accessed. However, if any of your environment variables contain secrets (API keys, tokens, database credentials, signing keys) that were not marked as sensitive, those values should be treated as potentially exposed and rotated as a priority.
https://vercel.com/kb/bulletin/vercel-april-2026-security-in... as of 4:22p ET
Comment by aziaziazi 1 day ago
https://vercel.com/docs/environment-variables/sensitive-envi...
Comment by throw03172019 1 day ago
Comment by aziaziazi 22 hours ago
Comment by loloquwowndueo 1 day ago
So they are harder to introspect and review once set.
It’s probably good practice to put non-secret-material in non-sensitive variables.
(Pure speculation, I’ve never used Vercel)
Comment by _heimdall 1 day ago
There are cases where I want env variables to be considered non-secure and fine to be read later, I have one in a current project that defines the email address used as the From address for automated emails for example.
In my opinion the lack of security should be opt-in rather than opt-out though. Meaning it should be considered secure by default with an option to make it readable.
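The opt-in readability described above could look something like this. A hypothetical API sketch, not Vercel's actual interface:

```python
# "Secret by default": values are write-only unless explicitly created as
# readable. Hypothetical API sketch, not Vercel's actual interface.

class EnvStore:
    def __init__(self):
        self._vars = {}

    def set(self, name: str, value: str, readable: bool = False) -> None:
        self._vars[name] = (value, readable)

    def get(self, name: str) -> str:
        value, readable = self._vars[name]
        if not readable:
            raise PermissionError(f"{name} is sensitive and cannot be read back")
        return value

store = EnvStore()
store.set("EMAIL_FROM", "noreply@example.com", readable=True)  # opt-in readable
store.set("DB_PASSWORD", "hunter2")                            # sensitive by default
print(store.get("EMAIL_FROM"))
```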
Comment by jtchang 1 day ago
Comment by ctmnt 1 day ago
Comment by btown 18 hours ago
But that's just a bare-minimum defense-in-depth. The fact that an attacker was able to access the insecure variables, and likely the names of secure variables, is still horrifying.
Comment by ctmnt 16 hours ago
It’s not like I had a ton of trust in them before, but now they’ve lost almost all credibility.
Comment by gherkinnn 1 day ago
Comment by tcp_handshaker 1 day ago
Comment by esseph 1 day ago
Google in particular has been staggeringly good, and don't sleep on IBM when they Actually Care.
Comment by dd_xplore 1 day ago
Comment by gustavus 1 day ago
The Oracle that published an announcement that said "we didn't get hacked" when the hackers had private customer info?
The Oracle that does not allow you to do any security testing on their software unless you use one of their approved vendors?
The Oracle that one of my customers uses where they have to turn off the HR portal for 2 weeks before annual performance evaluations because there is no way to prevent people from seeing things?
The only reason Oracle isn't having nightmarish security problems published every other week is because they threaten to sue anyone that does find an issue.
Oracle is a joke in every conceivable way and I despise them on a personal level.
Comment by warmedcookie 1 day ago
Comment by 0xmattf 1 day ago
This and because it's so convenient to click some buttons and have your application running. I've stopped being lazy, though. Moved everything from Render to linode. I was paying render $50+/month. Now I'm paying $3-5.
I would never use one of those hosting providers again.
Comment by nightski 1 day ago
Comment by 0xmattf 1 day ago
The point is, I used to just throw everything up on a PaaS. Heroku/Render, etc. and pay way more than I needed to, even if I had 0 users, lol.
Comment by lelanthran 1 day ago
I ran a LoB webapp for multiple companies on a similar setup. Turns out 1GB of RAM is insufficient to run even the most trivial Java webapps, like Jenkins, but is more than sufficient for even non-trivial things using Go + PostgreSQL.
Your stack may be slow, not the machine.
Comment by Orygin 22 hours ago
Comment by adhamsalama 1 day ago
Comment by eatery1234 1 day ago
Comment by normie3000 1 day ago
Comment by skeeter2020 1 day ago
Comment by cleaning 1 day ago
Comment by arch-choot 1 day ago
From what I can figure out, Vercel charges "$0.60 per million invocations" [1], which would cost me $180 per day.
[0] https://news.ycombinator.com/item?id=47611454 [1] https://vercel.com/docs/functions/usage-and-pricing#invocati...
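Taking the quoted $0.60 per million invocations at face value, a quick back-of-envelope check of that $180/day figure:

```python
# What request volume makes $0.60 per million invocations add up to $180/day?
price_per_million = 0.60
daily_cost = 180.0

invocations_per_day = daily_cost / price_per_million * 1_000_000
requests_per_second = invocations_per_day / 86_400  # seconds per day

print(f"{invocations_per_day:,.0f} invocations/day "
      f"≈ {requests_per_second:,.0f} rps")
```

That works out to 300M invocations/day, roughly 3,500 requests per second sustained.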
Comment by mmastrac 21 hours ago
I suspect I could do 3000+ rps with some tuning and a more modern CPU or hetzner VPS, but there's some fun cachet from running on an old Pi while there's still headroom.
Comment by 0xmattf 1 day ago
Does Vercel do the same?
Comment by somewhatgoated 1 day ago
Comment by 0xmattf 22 hours ago
Comment by anurag 1 day ago
Comment by 0xmattf 22 hours ago
But that is news to me. Interesting. Although for static sites, I always use Netlify or even GitHub pages.
Comment by cleaning 1 day ago
Comment by 00deadbeef 1 day ago
Comment by esseph 1 day ago
Comment by p_stuart82 1 day ago
Comment by rybosome 1 day ago
The only possibility for that not being a reasonable starting point is if they think the malicious actors still have access and will just exfiltrate rotated secrets as well. Otherwise this is deflection in an attempt to salvage credibility.
Comment by lo1tuma 1 day ago
Comment by elmo2you 1 day ago
While a different kind of incident (in hindsight), the other week Webflow had a serious operational incident.
Sites across the globe going down (no clue if all or just a part of them). They posted plenty of messages, I think for about 12 hours, but mostly with the same content/message: "working on fixing this with an upstream provider" (paraphrased). No meaningful info about what was the actual problem or impact.
Only the next day did somebody write about what happened. Essentially a database running out of storage space. How that became a single point of failure, for at least plenty of customers: no clue. Sounds like bad architecture to me though. But what personally rubbed me the wrong way most of all was the insistence that their "dashboard" hadn't indicated anything wrong with their database deployment, as it had allegedly misrepresented the used/allocated storage. I don't know who this upstream service provider of Webflow is, but I know plenty about server maintenance.
Either that upstream provider didn't provide a crucial metric (on-disk storage use) on their "dashboard", or Webflow was throwing this provider under the bus for what may have been their own ignorant/incompetent database server management. I guess it all depends on to what extent this database was a managed service or something Webflow had more direct control over. Either way, with any clue about the provider or service missing from their post-mortem, customers can only guess as to who was to blame for the outage.
I have a feeling that we probably aren't the only customer they lost over this. Which in our case would probably not have happened, if they had communicated things in a different way. For context: I personally would never need nor recommend something like Webflow, but I do understand why it might be the right fit for people in a different position. That is, as long as it doesn't break down like it did. I still can't quite wrap my head around that apparent single point of failure for a company the size of Webflow though.
/anecdote
Comment by _jab 1 day ago
I’m no security engineer, but this is flatly unacceptable, right? This feels like Vercel is covering its own ass in favor of helping its customers understand the impact of this incident.
Comment by hyperadvanced 1 day ago
Comment by nettlin 1 day ago
> Indicators of compromise (IOCs)
> Our investigation has revealed that the incident originated from a third-party AI tool whose Google Workspace OAuth app was the subject of a broader compromise, potentially affecting hundreds of its users across many organizations.
> We are publishing the following IOC to support the wider community in the investigation and vetting of potential malicious activity in their environments. We recommend that Google Workspace Administrators and Google Account owners check for usage of this app immediately.
> OAuth App: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
https://vercel.com/kb/bulletin/vercel-april-2026-security-in...
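If you want to sweep token grants for this IOC, a minimal sketch: the record shape loosely follows the Admin SDK Directory API `tokens.list` response (which exposes a `clientId` field). Only the IOC string comes from the bulletin; the sample records are made up:

```python
# Sweep OAuth token grants for the published IOC client ID. Record fields
# loosely follow the Admin SDK Directory API tokens.list response; the
# sample records are illustrative, only the IOC string is from the bulletin.

IOC_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
    ".apps.googleusercontent.com"
)

def flag_ioc(tokens: list) -> list:
    """Return the token records granted to the compromised OAuth app."""
    return [t for t in tokens if t.get("clientId") == IOC_CLIENT_ID]

tokens = [
    {"clientId": "unrelated-app.apps.googleusercontent.com", "userKey": "alice@example.com"},
    {"clientId": IOC_CLIENT_ID, "userKey": "bob@example.com"},
]
print([t["userKey"] for t in flag_ioc(tokens)])  # ['bob@example.com']
```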
Comment by dev360 1 day ago
Comment by jtreminio 1 day ago
Clicking the Vercel logo at the top left of the page hard crashes my Chrome app. Like, immediate crash.
What an interesting bug.
Comment by embedding-shape 1 day ago
I find it funny that we're all reading a story about how Vercel is likely compromised somehow, someone managed to reproduce a crash on their webpage, and now we're all giving it a try. Surely this could never backfire :)
Comment by nozzlegear 1 day ago
Comment by LtWorf 1 day ago
Comment by bel8 1 day ago
Chrome Version 147.0.7727.101 (Official Build) (64-bit). Windows 11 Pro.
Video: https://imgur.com/a/pq6P4si
I use uBlock Origin Lite. Maybe it blocks some crash causing script? edit: still no crash when I disabled UBO.
Comment by eclipticplane 1 day ago
Comment by devld 1 day ago
Comment by Malipeddi 1 day ago
Comment by farnulfo 1 day ago
Comment by burnte 1 day ago
Comment by plexicle 1 day ago
No crash.
Now I don't want to click that "Finish update" button.
Comment by 152334H 1 day ago
Comment by itaintmagic 1 day ago
Comment by rapfaria 1 day ago
Comment by eddythompson80 1 day ago
Comment by newdee 1 day ago
This was an interesting tidbit too. If true, this means that Vercel’s IT/Infosec maybe didn’t bother enabling the allowlist and request/review features for OAuth apps in their Google Workspace.
On top of that, they almost certainly didn’t enable the scope limits for unchecked OAuth apps (e.g limiting it to sign-on/basic profile scopes).
Comment by Maxious 1 day ago
Comment by eddythompson80 1 day ago
Comment by pier25 22 hours ago
Comment by MattIPv4 1 day ago
https://x.com/theo/status/2045862972342313374
> I have reason to believe this is credible.
https://x.com/theo/status/2045870216555499636
> Env vars marked as sensitive are safe. Ones NOT marked as sensitive should be rolled out of precaution
https://x.com/theo/status/2045871215705747965
> Everything I know about this hack suggests it could happen to any host
https://x.com/DiffeKey/status/2045813085408051670
> Vercel has reportedly been breached by ShinyHunters.
Comment by otterley 1 day ago
Comment by gordonhart 1 day ago
Comment by Aurornis 1 day ago
Comment by MikeNotThePope 1 day ago
Comment by reactordev 1 day ago
Comment by nothinkjustai 1 day ago
Comment by djeastm 1 day ago
Comment by TiredOfLife 1 day ago
Comment by tom1337 1 day ago
if it's not marked as sensitive (because it is not sensitive) there is no reason to roll them. if you must roll a non-sensitive env var it should've been sensitive in the first place, no?
Comment by jackconsidine 1 day ago
I can imagine reasons why an env variable would be sensitive but need to be re-read at some point. But overwhelmingly it makes sense for the default to be set once and never accessed again (i.e. Fly env values, GCP Secret Manager, etc.)
Comment by swingboy 1 day ago
Comment by toddmorey 1 day ago
I feel for the team; security incidents suck. I know they are working hard, I hope they start to communicate more openly and transparently.
Comment by loloquwowndueo 1 day ago
Comment by OsrsNeedsf2P 1 day ago
Comment by gib444 1 day ago
Comment by jofzar 1 day ago
Comment by bossyTeacher 1 day ago
Comment by nike-17 1 day ago
Comment by saadn92 1 day ago
Comment by waldopat 14 hours ago
Many folks here likely have some stack that looks like: Google Workspace, GitHub, Vercel/Railway/Render/etc. where env vars or secrets are hosted. These are all loosely coupled but transitively trusted.
So compromising any one of them becomes a threat vector. In other words, if System A trusts System B, and System B trusts System C, then System A trusts System C. This is also why OpenClaw is frightening from a security perspective.
Also, this is a good reminder to run audits. Run `npm audit` on a typical Next.js project and you’ll probably see DoS vulnerabilities, ReDoS issues, Prototype pollution, code injection paths, handlebars etc. I'm sure you'll find something unexpected if you don't have routine code hygiene checks.
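On the audit point, in CI it's often easier to consume the JSON report than the human-readable one. A sketch, assuming the `metadata.vulnerabilities` shape of `npm audit --json`; the sample counts are illustrative:

```python
# Gate CI on `npm audit --json` output. The metadata.vulnerabilities shape
# matches npm's JSON report; the sample counts below are illustrative.
import json

SEVERITIES = ["info", "low", "moderate", "high", "critical"]

def audit_fails(report_json: str, fail_level: str = "high") -> bool:
    """True if the report contains findings at or above fail_level."""
    counts = json.loads(report_json)["metadata"]["vulnerabilities"]
    threshold = SEVERITIES.index(fail_level)
    return any(counts.get(lvl, 0) > 0 for lvl in SEVERITIES[threshold:])

sample = json.dumps({"metadata": {"vulnerabilities": {
    "info": 0, "low": 2, "moderate": 1, "high": 1, "critical": 0}}})
print(audit_fails(sample))  # True: one high-severity finding
```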
Comment by jsomau 1 day ago
Comment by tomaskafka 1 day ago
Comment by Izmaki 1 day ago
Comment by eviks 1 day ago
Comment by BrianneLee011 20 hours ago
Comment by pier25 20 hours ago
Comment by ctmnt 19 hours ago
Comment by ctmnt 19 hours ago
Comment by brazukadev 12 hours ago
Comment by kyle787 1 day ago
> Last month, we identified and stopped a security incident involving unauthorized access to our AWS environment.
> Today, based on information provided by Vercel and some additional internal investigation, we learned that, during the incident last month, the unauthorized actor also likely compromised OAuth tokens for some of our consumer users.
Comment by ctmnt 21 hours ago
> At this time, we do not have reason to believe that your Vercel credentials or personal data have been compromised.
Which is not very reassuring without actual information, since presumably they would have said the same thing on Saturday, if asked.
Comment by jtokoph 1 day ago
Comment by rrmdp 1 day ago
Comment by landl0rd 1 day ago
Comment by adithyasrin 1 day ago
Comment by arabsson 1 day ago
Comment by zuzululu 1 day ago
Comment by deaux 1 day ago
Both have been changing as people realize it's rarely the right tool for the job, and as LLMs also become more intelligent and better at suggesting other, better options depending on what is asked for (especially Claude Opus).
Comment by apsurd 1 day ago
nextjs is also powerful due to AI. But the value is a robust interactive front-end, easily iterated, with maybe SSR backing, nothing specific to nextjs (it's routing semantics + React).
So much complexity has gone into SSR. I hate 5MB client runtime just to read text as much as anyone, but not if the tradeoff is isomorphic env with magic file first-line incantations.
Comment by consumer451 1 day ago
Recent Claude models do well with it, especially after adding the official skill.
I have only recently started using it, so would love to hear about anyone else's experience.
Comment by autoexec 1 day ago
I guess they should have put some of that marketing money into hiring someone to manage the security of their systems. It's pretty telling that they had to hire an "incident response provider" just to figure out what happened and clean up after the hack. If you treat security like something you don't have to worry about until after you've been hacked you're probably going to get hacked.
Comment by habinero 1 day ago
Plenty to criticize them for, but that's totally standard and not something to ding them for. Probably something their cyber insurance has in their contract.
Forensics is its own set of skills, different from appsec and general blue team duties. You really want to make sure no backdoors got left in.
Comment by gitgud 1 day ago
This is why most open source landing pages used nextjs, and if most FOSS landing pages use it, then most LLM’s have been trained on it, which means LLM’s are more familiar with that framework and choose it
There must be a term for this kind of LLM driven adoption flywheel…
Comment by pier25 1 day ago
My impression is Next started becoming popular mostly as a reaction against create-react-app.
Comment by mrits 1 day ago
Comment by huflungdung 1 day ago
Comment by senko 1 day ago
Everything runs fine locally until you try to deploy it, and bam, you need a 4 GB RAM machine to run the thing.
So you host it on Vercel for free cause it's easy!
Then you want to check for more than 30 seconds of analytics, and it's pay time.
Comment by systemvoltage 1 day ago
But the argument is if you’re using Vercel for production, you’re paying 5-10x what you’d pay for a VM, with 4gb.
So then what’s the rationale? You can’t be a hobbyist but also “it’s pay time” for production?
Comment by prinny_ 1 day ago
Comment by rwyinuse 1 day ago
Comment by ajdegol 1 day ago
Comment by zoul 1 day ago
I’m still planning to move elsewhere though, the vendor lock-in is not worth it and I’d like to keep our infra in the EU.
Comment by tucnak 1 day ago
Comment by fontain 1 day ago
Comment by Onavo 1 day ago
Comment by kentonv 1 day ago
That's for the free plan.
Limits are documented here:
https://developers.cloudflare.com/workers/platform/limits/#w...
Comment by Onavo 1 day ago
Good work on workers though, maybe the next generation of sandstorm will be built on CloudFlare in a decade or so after all the bugs have been hammered out.
Comment by dandaka 1 day ago
Comment by rs_rs_rs_rs_rs 1 day ago
Comment by gherkinnn 1 day ago
Comment by apsurd 1 day ago
Comment by kandros 1 day ago
Knowing how to operate a basic server is perceived as hard and dangerous by many, especially the generation that didn’t have a chance to play with Linux for fun when growing up
Comment by drewnick 1 day ago
I am always feeling like I'm doing something wrong running bare metal based on modern advice, but it's low latency, simple, and reliable.
Probably because I've been using linux since Slackware in the 90s so it's second nature. And now with the CLI-based coding tools, I have a co-sysadmin to help me keep things tidy and secure. It's great and I highly recommend more people try it.
Comment by kingleopold 1 day ago
Comment by arealaccount 1 day ago
They regularly try to get me to join an enterprise plan but no service cutoff threats yet.
Comment by hephaes7us 1 day ago
That said, I understand people are paying for basically not having to think about infrastructure, and agree that that's theoretically worth money, if they could do it well.
Comment by dev360 1 day ago
Comment by hdkfov 1 day ago
Comment by victorbjorklund 1 day ago
Comment by glerk 1 day ago
Comment by Bridged7756 1 day ago
Comment by Bridged7756 1 day ago
Comment by glerk 1 day ago
They have a free tier plan for non-commercial usage and a very very good UX for just deploying your website.
Many companies start using Vercel for the convenience and, as they grow, they continue paying for it because migrating to a cheaper provider is inconvenient.
Comment by arkits 1 day ago
Comment by sidcool 1 day ago
Comment by kstrauser 1 day ago
Comment by locallost 1 day ago
Comment by dboreham 1 day ago
Comment by gjsman-1000 1 day ago
Meaning since 2015, you’ve got an 8.2% chance of having someone walk out with that box. Hopefully there’s nothing precious on it.
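For the curious, that 8.2% is consistent with a small annual rate compounded over the ~11 years since 2015. The ~0.77%/year below is an assumption reverse-engineered to match the quoted figure, not a sourced statistic:

```python
# Cumulative risk from a small annual probability compounded over years.
# The 0.77%/year rate is an assumption chosen to reproduce the quoted 8.2%
# since 2015; the comment's actual source rate isn't given.

def cumulative_risk(annual_rate: float, years: int) -> float:
    """Probability of at least one event over `years` independent years."""
    return 1 - (1 - annual_rate) ** years

print(f"{cumulative_risk(0.0077, 11):.1%}")  # ≈ 8.2%
```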
Comment by jimberlage 1 day ago
Comment by 0123456789ABCDE 1 day ago
Comment by FreePalestine1 1 day ago
Comment by burnte 1 day ago
Comment by zuzululu 1 day ago
Comment by loloquwowndueo 1 day ago
Comment by operatingthetan 1 day ago
Thieves probably look for small stuff like jewelry, cash, laptops, not some big old server.
Comment by zbentley 1 day ago
Comment by 0123456789ABCDE 1 day ago
Comment by operatingthetan 1 day ago
The chance of being burglarized is not the same as the chance that when you are hit, they decide to take your webserver. Think it through.
Comment by strimoza 23 hours ago
Comment by adithyasrin 1 day ago
Comment by usr1106 1 day ago
Comment by philip1209 1 day ago
Comment by leetrout 1 day ago
Comment by gneray 1 day ago
Comment by rubiquity 1 day ago
Comment by dankwizard 1 day ago
Comment by threecheese 1 day ago
Hey, I’m with you - I think social media needs to die specifically for this reason. I’m reminded of the term “snake oil” - it’s like the dawn of newspapers again.
Comment by TiredOfLife 1 day ago
Comment by hoppyhoppy2 1 day ago
Comment by oxag3n 1 day ago
So they use a third party for incident management? They are de-risking by spending more, which is a lose-lose for the customers.
Comment by staticassertion 1 day ago
Comment by eieiyo 1 day ago
Comment by ofabioroma 1 day ago
Comment by gistscience 1 day ago
Comment by james-clef 1 day ago
Comment by jngiam1 1 day ago
Comment by ebbi 1 day ago
Comment by OsamaJaber 1 day ago
Comment by _puk 1 day ago
Comment by 0xy 1 day ago
Next.js is the new PHP, but worse, since unlike PHP you don't really know what's server side and what's client side anymore. It's all just commingled and handled magically.
https://aws.amazon.com/security/security-bulletins/rss/aws-2...
Comment by embedding-shape 1 day ago
Wasn't unheard of back in the day, that you leaked things via PHP templates, like serializing and adding the whole user object including private details in a Twig template or whatever, it just happened the other way around kind of. This was before "fat frontend, thin backend" was the prevalent architecture, many built their "frontends" from templates with just sprinkles of JavaScript back then.
Comment by sbarre 1 day ago
But there are more people trying to secure this framework and the underlying tools than there would be on some obscure framework or something the average company built themselves.
Also "pay a real provider", what does that mean? Are you again implying that the average company should be responsible for _more_ of their own security in their hosting stack, not less?
Most companies have _zero_ security engineers.. Using a vertically-integrated hosting company like Vercel (or other similar companies, perhaps with different tech stacks - this opinion has nothing to do with Next or Node) is very likely their best and most secure option based on what they are able to invest in that area.
Comment by bakugo 1 day ago
PHP was so simple and easy to understand that anyone with a text editor and some cheap shared hosting could pick it up, but also low level enough that almost nothing was magically done for you. The result was many inexperienced developers making really basic mistakes while implementing essential features that we now take for granted.
Frameworks like Next.js take the complete opposite approach, they are insanely complex but hide that complexity behind layers and layers of magic, actively discouraging developers from looking behind the curtain, and the result is that even experienced developers end up shooting themselves in the foot by using the magical incantations wrong.
Comment by qudat 1 day ago
What’s worse is vercel corrupted the react devs and convinced them that RSC was a good idea. It’s not like react was strictly in good hands at Facebook but at least the team there were good shepherds and trying to foster the ecosystem.
Comment by 63stack 1 day ago
Comment by zrn900 1 day ago
Comment by jheitzeb 1 day ago
Comment by nothinkjustai 1 day ago
Comment by fragmede 1 day ago
7:57 AM Monday, April 20, 2026 Coordinated Universal Time (UTC)
Comment by sergiotapia 1 day ago
Has anyone made the move to self hosting on their own servers again?
Comment by jimmydoe 1 day ago
Comment by raw_anon_1111 1 day ago
I see Vercel is hosted on AWS? Are they hosting everyone on a single AWS account with no tenant isolation? Something this dumb could never happen on a real AWS account. Yes, I know the internal controls that AWS has (former employee).
Anyone who is hosting a real business on Vercel should have known better.
I have used v0 to build a few admin sites. But I downloaded the artifacts, put in a Docker container and hosted everything in Lambda myself where I controlled the tenant isolation via separate AWS accounts, secrets in Secret Manager and tightly scoped IAM roles, etc.
Comment by eddythompson80 1 day ago
Comment by raw_anon_1111 1 day ago
It doesn’t make sense that a random employee mistakenly using a third-party app can compromise all of its users; it’s a poor security architecture.
It’s about as insecure as having one Apache server serving multiple customers’ accounts. No one who is concerned about security should ever use Vercel.
Comment by eddythompson80 1 day ago
You really have no clue what you’re talking about, do you? Were you a sales guy at AWS or something?
Comment by icedchai 17 hours ago
However, to say that serving multiple customers with Apache is "insecure" is inaccurate. There are ways to run virtual hosts under different user IDs, providing isolation using more traditional Unix techniques.
Comment by raw_anon_1111 12 hours ago
Absolutely no serious company would run their web software on a shared Apache server with other tenants.
How did that shared hosting work out for Vercel?
Comment by icedchai 11 hours ago
I've read about the Vercel incident. Given the timeline (22 months?!), it sounds like they had other issues well beyond shared hosting.
Comment by scarface_74 10 hours ago
Comment by icedchai 10 hours ago
Comment by otterley 1 day ago
Comment by eddythompson80 1 day ago
Comment by raw_anon_1111 1 day ago
Are you really defending Vercel as a hosting platform that anyone should take seriously?
Comment by eddythompson80 1 day ago
Comment by raw_anon_1111 1 day ago
Oh and I never download random npm packages to my computer. I build and run everything locally within Docker containers
It has absolutely nothing to do with “the modern state of web development”, it’s a piss poor security posture.
Again, I know how the big boys do this…
Comment by rvz 1 day ago
Comment by allthetime 1 day ago
I’m not exactly surprised, but it seems like the unserious, ill-informed and lazy are taking over. There is absolutely zero reason why a large, essential public service should be overspending and running on an unnecessary managed service like vercel… yet, here we are.
Comment by jamesfisher 1 day ago
Comment by tamimio 1 day ago
Comment by beyondscaletech 2 hours ago
Comment by michaelksaleme 21 hours ago
Comment by ItsClo688 1 day ago
Comment by willamhou 1 day ago
Comment by victor9000 1 day ago
Comment by nryoo 1 day ago
Comment by senaevren 21 hours ago
Comment by renan_warmling 1 day ago
Comment by mrzhangbo 1 day ago
Comment by agent-kay 1 day ago
Comment by Yash16 1 day ago
Comment by jccx70 1 day ago
Comment by ArcherL 1 day ago
Comment by mrzhangbo 1 day ago
Comment by monirmamoun 1 day ago
Comment by jeromegv 1 day ago
Comment by sreekanth850 1 day ago
Comment by steve1977 1 day ago
Comment by LunaSea 1 day ago
Comment by scrollaway 1 day ago
Comment by Bridged7756 1 day ago
Comment by ksajadi 1 day ago
Comment by nikcub 1 day ago
Comment by yogigan 1 day ago
Comment by maxboone 1 day ago
It's not a new attack vector in the sense of an app being granted too many scopes (beyond the usual "get personal details").
I am curious how this external OAuth app managed to move laterally through their systems.
Comment by efilife 1 day ago
Comment by steve1977 1 day ago
Comment by tgv 1 day ago
Comment by steve1977 1 day ago
Comment by highphive 1 day ago
Comment by Maxious 1 day ago
Comment by steve1977 1 day ago
Comment by owebmaster 1 day ago
Comment by Nathanba 1 day ago
Comment by maxboone 1 day ago
Comment by jongjong 1 day ago
Thankfully I patched this issue just before it became a viable exploit. The two platforms I was supporting at the time had different username conventions (Google used email addresses with an @ symbol, GitHub used plain usernames), which naturally prevented username hijacking. I discovered the issue while upgrading my platform to support universal OAuth; it would have been a major flaw had I not caught it. This sounds similar to the Vercel issue.
Anyway, my fix was to append a unique hash, derived from the username and platform combination, to the end of the username on my platform.
Comment by maxboone 1 day ago
But this has been a problem in the past, where people would hijack the email address, create a new Google account with it, and then sign in with Google.
Similarly, when someone deletes their account with a provider, someone else can re-register it and your hash will end up the same. Subject identifiers are supposed to be unique according to the spec.
Comment by jongjong 1 day ago
Now, I realize that this would require a large-scale conspiracy by the company/platform to execute, but I don't want to trust one platform with access to accounts coming from a different platform. I don't want any possible edge cases; I wanted to fully isolate them. If one platform were compromised, that would be bad news for a subset of users, but not all users.
If the maker of an application wants to trust some obscure platform as their OAuth provider, they're welcome to. In fact, I allow people running their own Keycloak instances as providers to do their own OAuth, so it's actually a realistic scenario.
This is why I used the hash approach; I have full control over the username on my platform.
[EDIT] I forgot to mention that I incorporate the issuer's sub, in addition to the provider-side username, when producing the hashed username on my platform. The key point I wanted to get across here is: don't trust one provider with accounts created via a different provider.
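A minimal sketch of the scheme described above (function and field names are hypothetical; the actual implementation isn't shown in this thread). The local username is namespaced by the OAuth issuer and subject, so the same provider-side name arriving via two different providers can never collide:

```python
import hashlib

def local_username(issuer: str, sub: str, display_name: str) -> str:
    """Derive a platform-local username namespaced by OAuth issuer + subject.

    Accounts from different providers can never collide or impersonate
    one another, even if they share a provider-side username.
    (Illustrative sketch; the suffix length and format are arbitrary.)
    """
    # Hash the (issuer, sub) pair with a separator that can't appear in URLs,
    # then keep a short prefix as a stable disambiguating suffix.
    digest = hashlib.sha256(f"{issuer}\x00{sub}".encode()).hexdigest()[:12]
    return f"{display_name}-{digest}"
```

The same (issuer, sub) pair always maps to the same local account; the same sub under a different issuer maps elsewhere.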
Comment by whoamii 1 day ago
Comment by jongjong 1 day ago
To make it universal, I had to keep complexity minimal and focus on the most supported protocol which is plain OAuth2.
Comment by hansmayer 1 day ago
Comment by neom 1 day ago
He also suggests in another post that Linear and GitHub could also be pwned?
Either way, hugops to all the SRE/DevOps out there, seems like it's going to be a busy Sunday for many.
Comment by phillipcarter 1 day ago
Comment by hvb2 1 day ago
Comment by phillipcarter 1 day ago
Comment by embedding-shape 1 day ago
Comment by gruez 1 day ago
> Here’s what I’ve managed to get from my sources:
>3. The method of compromise was likely used to hit multiple companies other than Vercel.
https://x.com/theo/status/2045870216555499636
To be fair, journalists often do this too, e.g. "[company] was breached, people within the company claim".
Comment by eddythompson80 1 day ago
Comment by TiredOfLife 1 day ago
Comment by troupo 1 day ago
Comment by eddythompson80 1 day ago
Comment by Barbing 1 day ago
Comment by brazukadev 1 day ago
Comment by troupo 1 day ago
Theo has long been a Vercel supporter and was sponsored by them several times. In this case it could be a combination of him being genuinely interested in Vercel (a rare thing) and hopes for future sponsorships.
Comment by brazukadev 21 hours ago
Comment by recursivegirth 1 day ago
Theo's content boils down to the same boring formula:
1. Whatever buzzword headline is trending at the time.
2. An immediate sponsored ad that is supposed to make you sympathize with Theo because he "vets" his sponsors.
3. A "that totally happened" story that he somehow always involves himself in personally.
4. An ad for his t3.chat, how it's the greatest thing in the world, and how he should be paid more for his infinite wisdom.
5. A rag on Claude or OpenAI (whichever is leading at the time).
6. 5-10 minutes of paraphrasing an article without critical thought or analysis of the video topic.
I used to enjoy his content when he was still in his Ping era, but it's clear he's drunk the YT marketer Kool-Aid. I've moved on; his content gets recommended now and again, but I can't entertain his nonsense anymore.
Comment by rubslopes 1 day ago
However, since the LLM Cambrian explosion, he has become very clickbaity and his content has become shallow. I don't watch his videos anymore.
Comment by sgarland 1 day ago
Comment by well_ackshually 1 day ago
When you set the bar that low, sure.
He's about as knowledgeable as the junior you hired last week, except that he speaks from a position of authority and gets retweeted by the entire JS slop sphere. He's LinkedIn slop for Gen Z.
Comment by neom 1 day ago
Comment by steve_adams_86 1 day ago
I don't think he's a bad guy or that he's trying to be misleading. I suspect he wants his content to actually carry value, but he produces too much for that to be possible. Primarily he's a performer, not a technologist.
Comment by arabsson 1 day ago
Comment by threetonesun 1 day ago
Comment by enra 19 hours ago
Linear has not been breached, customer data remains secure, and Linear is not hosted on Vercel.
Comment by neom 19 hours ago
Comment by techpression 1 day ago
Comment by rvz 1 day ago
Comment by pxc 1 day ago
Comment by nozzlegear 1 day ago
> @ErdalToprak: "And use your own vps or k3s cluster there’s no reason in 2026 to delegate your infra to a middle man except if you’re at AWS level needs"
> @theo: "This is still a stupid take"
lol, okay. Thanks for the insight, Theo, whoever you are.
Comment by uxhacker 1 day ago
Comment by raw_anon_1111 1 day ago
Comment by nozzlegear 1 day ago
Comment by nozzlegear 1 day ago
Comment by mikert89 1 day ago
Comment by sph 1 day ago
What's your agenda here?
Comment by nothinkjustai 1 day ago
Comment by mikert89 1 day ago
Comment by hansmayer 1 day ago
Comment by nozzlegear 1 day ago
Comment by bossyTeacher 1 day ago
Comment by Bridged7756 1 day ago
Also, LLMs will only be used to attack; no one will be smart enough to integrate them into CI flows, because everyone is that dumb. No security tools will pop up.
Comment by goalieca 1 day ago
Comment by tcp_handshaker 1 day ago
Let that be the end of Microsoft. I was forced to use their shitty products for years by corporate inertia and their free Teams and Azure licenses: the first-dose-is-free curse.
Comment by lijok 1 day ago
Comment by mikert89 1 day ago
Comment by cj 1 day ago
AI agents have the benefit of working at scale, so they're probably "better" used for mass targeting.
Comment by mikert89 1 day ago
Comment by mcmcmc 1 day ago
Comment by cj 1 day ago
But I get your point.
Comment by freedomben 1 day ago