GitHub's fake star economy

Posted by Liriel 1 day ago

772 points | 365 comments

Comments

Comment by whatisthiseven 22 hours ago

I don't think I have ever used stars in making a decision to use a library and I don't understand why anyone would.

Here are the things I look at in order:

* last commit date. Newer is better

* age. Old is best if still updating. New is not great, but tolerable if commits aren't rapid

* issues. Not the count, mind you, just looking at them. How are they handled, what kind of issues are lingering open.

* some of the code. No one is evaluating all of the code of libraries they use. You can certainly check some!

What do stars tell me? They are an indirect variable caused by the things above (driving real engagement and third-party interest), or otherwise fraud. The only way to tell is to look at the things I listed anyway.

I always treated stars like a bookmark ("I'll come back to this project") and never thought of them as a quality metric. Years ago, when this problem first surfaced, I was surprised (though in retrospect I shouldn't have been) that they had become a substitute for quality.

I hope the FTC comes down hard on this.

Edit:

* commit history: just browse the history to see what's there. What kind of changes are made and at what cadence.
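The date-based checks in the list above can be partly automated from repo metadata. A rough sketch, assuming the field names from GitHub's REST API (`GET /repos/{owner}/{repo}`); the thresholds you'd apply on top are your own call:

```python
from datetime import datetime, timezone

def freshness_signals(repo, now=None):
    """Pull the 'last commit date' and 'age' checks out of GitHub repo
    metadata (the JSON object from GET /repos/{owner}/{repo})."""
    now = now or datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    pushed = datetime.strptime(repo["pushed_at"], fmt).replace(tzinfo=timezone.utc)
    created = datetime.strptime(repo["created_at"], fmt).replace(tzinfo=timezone.utc)
    return {
        "days_since_push": (now - pushed).days,      # newer is better
        "age_years": (now - created).days / 365.25,  # old but still updating is best
        "open_issues": repo["open_issues_count"],    # read them; don't just count
    }
```

The issue-handling and code-quality checks still need a human read; the API only hands you counts and dates.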

Comment by bsuvc 21 hours ago

> I don't think I have ever used stars in making a decision to use a library and I don't understand why anyone would

I do it all the time, whenever there are competing libraries to choose among.

It's a heuristic that saves me time.

If one library has 1,000 stars and the other has 15, I'm going to default to the 1,000 stars.

I also look at download count and release frequency. Basically I don't want to use some obscure dependency for something critical.

Comment by swiftcoder 20 hours ago

> If one library has 1,000 stars and the other has 15, I'm going to default to the 1,000 stars.

There are clearly inflection points where stars become useful, with "nobody has ever used this package" and "Meta/Alphabet pays to develop/maintain this package" on the two extremes.

I'm less sure what the signal says in-between those extremes. We have 2 packages, one has 5,000 stars, the other has 10,000 stars - what does this actually tell me, apart from how many times each has gone viral on HN?

Comment by bigiain 10 hours ago

At the 10,000-star level, my worry is going to be how tempting a target it is for a supply chain attack. (Most likely via one of its dependencies.)

Comment by bonesss 11 hours ago

If the goals relate to maintenance and viability, we’re looking for a minimum threshold of implementation. Amazon and Microsoft have a lot of stars, both have ‘more than enough’ to care.

If the goals are marketing or targeting or mass-market appeal or hiring pools then those stars say something else.

Comment by nradov 10 hours ago

And that's fine if you're just writing a toy program for personal use. But it's deeply problematic if you have to rely on that library for anything important. This type of lazy approach to the software bill-of-materials has gotten a lot of organizations into trouble with exploitable security flaws.

Comment by godelski 3 hours ago

> It's a heuristic that saves me time.

Sounds like it is wasting your time.

Just because you make a decision more quickly doesn't mean you saved any time. It is good to save time, but not at the expense of quality. You spend more buying cheap boots, and they don't even keep your feet dry.

Comment by matt_kantor 20 hours ago

> If one library has 1,000 stars and the other has 15, I'm going to default to the 1,000 stars.

Will you continue to do this after reading TFA?

Comment by hnben 1 hour ago

<10 stars is a strong signal that a repo is not relevant to anyone except maybe the maintainer. This fact does not change even if other repos have bought 10,000 stars.

Comment by system2 14 hours ago

More stars = more followers = more people interested and contributing. Even with fakes, more people will join the project because they were duped. Still, it is going to get more attention.

Comment by abustamam 15 hours ago

I will. I have other heuristics too, like there are plenty of starred libraries that are just hard to use or don't actually fit my use case. But if I choose the 1000 star one and it works easily, then cool. If it doesn't, I'll try the 15 star one. If it works, cool. If not, then I'll probably end up vibe coding my own thing.

Comment by manquer 14 hours ago

Why not? Buying stars is also a positive signal of commitment.

i.e. if the maintainer is serious enough to buy stars, isn't he in theory also likely to spend time/money maintaining/improving the project?

Presumably he wouldn't want just fake users but also real users, which is a stronger signal than a purely hobby project vibe-coded on a whim over a weekend and abandoned.

Comment by kylecazar 12 hours ago

It's a positive signal for fraud and willingness to deceive.

Comment by justcool393 11 hours ago

> i.e. if the maintainer is serious enough to buy stars, isn't he in theory also likely to spend time/money maintaining/improving the project?

I mean, if maintainers clearly spend much more time and effort on fraud than on actually improving the project, why should I believe they would improve it at all, let alone trust their judgement on other things, such as technical choices?

Comment by LtWorf 12 hours ago

The problem is that you will mis-evaluate unknown projects that aim to be (or already are) VC funded.

Comment by whatisthiseven 20 hours ago

> It's a heuristic that saves me time.

A bad one.

I listed many other useful heuristics. Do you not find value in them? Do you find stars more valuable than them?

Take a moment to consider that stars may only be useful for packages created prior to ~2015, before they became such a strong vanity metric, and which are already very well established. This is preconditioning you to think "stars can still sometimes be useful, because I took a look at Facebook's React GH and it has a quarter million stars".

Sure, it's useful for that. But you aren't going to evaluate if the "React" package is safe. You already trivially know it is.

You'll be evaluating packages like "left-pad". Or any number of packages involved in the latest round of supply chain attacks.

For that matter, VCs are the ones stars are being targeted at, along with potential employers (something this article doesn't cover, but which some potential hires do hope to leverage on their resumes).

If you are a VC, or an employer, it is a negative metric. If you are a dev evaluating packages, it is a vacuous metric that either tells you what you already know, or would be better answered looking at literally anything else within that repo.

The article also called out how download count can be faked trivially. I admit I have relied upon this in the past by mistake. Release frequency I do use as one metric.

When I care about making decisions for a system that will ingest 50k-250k TPS or need to respond in sub-second timings (systems I have worked on multiple times), you can bet "looking at stars" is a useless metric.

For personal projects, it is equally useless.

I care about how many tutorials are online. And today, I care more about whether there were enough textual artifacts for the LLMs to usefully build it into their memory and to search on. I care whether the docs are good, so I spend fewer tokens burning through the codebase for APIs. I care whether they resolve issues in a timely manner. I care whether they have meaningful releases and not just garbage nothings every week.

I didn't mean for this to sound like a rant. But seriously, I just can't imagine any scenario where "I look at stars" is a useful metric. You want to add it to the list? Sure, that is fine. But it should not be a deciding factor. I have chosen libraries with fewer stars because they had better metrics on the things I cared about, and it was the correct choice (I ended up needing to evaluate both anyhow, but I had my preference from the start).

Choosing the wrong package will waste so much more of your time. Spend the five minutes evaluating the stuff that is important to your project.

Comment by strbean 16 hours ago

Having stars isn't a positive metric; it's more that not having stars is a disqualifier, unless I want to use someone's brand-new toy.

My first scan of a GitHub repository is typically: check age of latest commit, check star count, check age of project. All of these things can be gamed, but this weeds out the majority of the noise when looking for a package to serve my needs. If the use case is serious, proper due diligence follows.

Comment by oreally 18 hours ago

This behavior is similar to something from the time I played a very popular MMORPG: when people selected others for their groups, their criteria deferred to the candidate's analyzed gameplay records (their 'logs') on a website, which boiled down to a number showing their damage dealt and the color of its text.

There was nothing about going into the logs to see whether they could handle the game's mechanical challenges or minimize the damage they took. It made for a worse environment, yet the players couldn't stop themselves from using such criteria.

In short, humans are lazy and default to numbers and colors when given the chance. When others question them on it, they have an easy default answer, being part of the herd of zebras, to get out of trouble.

Comment by deltaplan 4 hours ago

[dead]

Comment by doctorpangloss 12 hours ago

You're arguing with people who are fundamentally unempathetic. It's a lot of words spent vamping about stars instead of contributing to stuff everyone actually uses, which hardly anyone does. That should tell you everything you need to know about the value of having open source users: most of them only care that something is free as in beer.

Comment by bsuvc 9 hours ago

Are you suggesting that because I consider stars, I must be a freeloader?

Comment by creesch 14 hours ago

> * last commit date. Newer is better

To be honest, these days I have more faith in an application or library with a moderate development pace, where the last commit wasn't 2 seconds ago, co-authored by Claude (in the most blatant examples).

The same is true for the number of commits, the type of commits, release cadence, and the number of fixes and hotfixes in releases. I don't feel like being a glorified alpha tester, so I look for maturity in a project.

Which more often than not means that, yes, there needs to be activity. But it is also fine if the last commit was two days ago and there is a clear sign of the same pattern over a longer period, combined with a stable release cycle, sane versioning, and clear changelogs that aren't just a list of the last 10 commit messages.

On your point about stars, I think they used to be a valid metric in a similar category, namely the community behind the software. But it has been a while since that was true. Ever since I saw those star-tracking graphs pop up on repos, I knew there was no sense in paying attention to them anymore.

Comment by lukasgelbmann 21 hours ago

I use stars to try and protect myself from dependency confusion attacks.

For example, let’s say I want to run some piece of software that I’ve heard about, and let’s say I trust that the software isn’t malware because of its reputation.

Most of the time, I’d be installing the software from somewhere that’s not GitHub. A lot of package managers will let anyone upload malware with a name that’s very similar to the software I’m looking for, designed to fool people like me. I need to defend against that. If I can find a GitHub repo that has a ton of stars, I can generally assume that it’s the software I’m looking for, and not a fake imitator, and I can therefore trust the installation instructions in its readme.

Except this is also not 100% safe, because as mentioned in TFA, stars can be bought.
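One way to harden this check without leaning on stars is to compare the repo you found against what the registry itself declares. A sketch for Python packages, assuming the `info` object from PyPI's JSON API (`GET pypi.org/pypi/<name>/json`); a typosquatter can lie in this field too, so treat it as one more signal, not proof:

```python
def repo_matches(pypi_info, expected_repo):
    """Check whether a PyPI package's declared source repo matches the
    GitHub repo you actually meant to install (guards against lookalikes)."""
    urls = pypi_info.get("project_urls") or {}
    candidates = [pypi_info.get("home_page", ""), *urls.values()]
    expected = expected_repo.rstrip("/").lower()
    return any((u or "").rstrip("/").lower() == expected for u in candidates)
```

A mismatch is a reason to stop and look closer, not an automatic verdict.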

Comment by whatisthiseven 20 hours ago

Sure, I suppose that is one solution, but given that buying stars has been around for at least 5 years, and I have been aware of people faking stars for longer than that, I am not sure why you would rely on stars as a primary metric.

There are many other far more useful metrics to look at first, and to focus on first, and to think about. Every time you think about stars, you'll forget the other stuff, or discount it in favor of stars.

Forget stars. They now no longer mean anything. Even if they did before, they don't anymore.

Comment by ziml77 18 hours ago

Interesting that 5 years ago is exactly when this page showed up according to the Wayback Machine: https://docs.github.com/en/get-started/exploring-projects-on...

In it they explicitly call it out as a ranking metric

> Many of GitHub's repository rankings depend on the number of stars a repository has. In addition, Explore GitHub shows popular repositories based on the number of stars they have.

Yet another case of metric -> target -> useless metric

Comment by MrSandingMan 21 hours ago

What does "TFA" mean here please?

Comment by alternatetwo 21 hours ago

I think it's "The fucking article".

Comment by inanutshellus 20 hours ago

Yes and to be clear, one uses "TFA" to imply annoyance that TFA hasn't been read.

e.g. "TFA covers this already."

Comment by lukasgelbmann 20 hours ago

That’s not something I wanted to imply. It can also stand for "the fine article". Is there a better shorthand for "the article linked at top of the page" / "the original article"?

Comment by ssl-3 11 hours ago

TFA works fine either way. It's OK that it is subject to interpretation.

Comment by inanutshellus 18 hours ago

Nope, one simply says "the article".

Comment by tom_ 21 hours ago

The article. Pick whatever adjective you like beginning with F!

Comment by AgentMatt 19 hours ago

The featured article.

Comment by bsuvc 21 hours ago

The fucking article.

Comment by notamario 1 hour ago

With this approach, which I wholly agree with, many stars means approximately nothing, but few stars signals risk.

It’s a bloom filter of sorts for finding the right library.
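The asymmetry here can be stated in a few lines; a sketch of my own (the floor of 10 is an arbitrary illustration, echoing the "<10 stars" threshold mentioned elsewhere in the thread):

```python
def screen_by_stars(stars, floor=10):
    """Bloom-filter-style screen: a 'reject' is meaningful, an 'unknown'
    is not. A repo is never accepted on stars alone, since stars can be
    bought; passing the floor just means the real evaluation starts."""
    return "reject" if stars < floor else "unknown"
```

Like a Bloom filter's "definitely not present" answer, only the negative result carries real information.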

Comment by p2detar 20 hours ago

Totally agree with you. I think GitHub "stars" are a relic of the past. They should be renamed "Bookmarks" and exist as a tool for users to mark interesting repositories. By no means should a repository keep a count of how many people bookmarked it; it makes no practical sense. Active maintainers and commit dates are much better metrics.

Comment by dredds 17 hours ago

> Active maintainers and commit dates are much better metrics.

But in an age of bots/agents, that's just kicking the can down the road, making it easier to fake regular activity of practically zero importance. Even worse for the ecosystem than paid like counts.

Comment by onlyrealcuzzo 10 hours ago

As someone else pointed out... When commits are this cheap, if that's the metric to be gamed, it will be gamed.

You just create 5 GitHub accounts and spread your Claude Code commits across 5 separate accounts to make it look like there are 5 active contributors.

If anything, we're better off if a fake star economy remains the main thing most people are trying to game, because the signal can then still be that (at least so far) it seems pretty easy to tell how many REAL active contributors there are.

Though, I should note, 2 heads are not always better than 1.

I'm more interested in a repository that has commits only from two geniuses than a repository that has 100s of morons contributing to it.

Comment by psychoslave 21 hours ago

You call these baubles, well, it is with baubles that men are led... Do you think that you would be able to make men fight by reasoning? Never. That is only good for the scholar in his study. The soldier needs glory, distinctions, and rewards.

https://en.wikiquote.org/wiki/Napoleon

Comment by grayhatter 18 hours ago

> some of the code. No one is evaluating all of the code of libraries they use. You can certainly check some!

I do.

I don't review the whole repo, but every single time I update dep versions, I always look at the full diff between the two. It doesn't take that long.

Comment by netdevphoenix 22 hours ago

> I don't think I have ever used stars in making a decision to use a library and I don't understand why anyone would.

You might not have, but the makers of dependencies that you use might, so it's still problematic.

Comment by whatisthiseven 21 hours ago

True, but that is beyond my control. I am not evaluating every package within a dependency tree unless something happens, out of practicality.

I have limited time on this Earth and at my employer. My job is not critical to life. I am comfortable with this level of pragmatism.

Comment by netdevphoenix 26 minutes ago

The same response applies to the dependencies that you choose.

Comment by nozzlegear 16 hours ago

> I always treated stars like a bookmark "I'll come back to this project" and never thought of it as a quality metric. Years ago when this problem first surfaced I was surprised (but should not have been in retrospect) they had become a substitute for quality.

Same here. I've starred over 1500 projects on Github over the years, and only because I wanted to save them for later use or as a reference for something I was working on. These days I'll occasionally use the star metric as a signal to avoid certain projects as overhyped (especially if the project has a stars-per-day meter).

Comment by kevinsync 21 hours ago

I usually use stars as bookmarks to maybe come back to some repo I thought looked interesting a year later. Terrible metric to invest based on!

Comment by rpdillon 19 hours ago

Agree! My longstanding metric uses just two values:

* Most recent commit

* Total number of commits

This might have to die in the era of AI, but it's served me well for a long time. Rather than how many people are paying attention, it tries to measure the effort put in.

Comment by creesch 14 hours ago

> This might have to die in the era of AI,

Sadly that is probably true.

At the very least I'd add release cadence and the quality of releases. Mature, good software will have hotfixes and patch releases every now and then, but not in every release, and certainly not 50% of the changes. In the same sense, I will often look at the effort put into changelogs. If they took the trouble to put things into categories, write about possible breaking changes, etc., it is a possible indicator of some level of quality. At the very least, I will have a lot more faith in software with good changelogs than in something that is just a list of the last N commit messages.

Comment by eek2121 11 hours ago

Honestly, anyone using GitHub as a basis for hiring is approaching hiring with flawed thinking. GitHub isn't the only host for git, and git isn't the only standard for version control. Further, GitHub has been pushing companies AWAY from the platform thanks to high costs and other nonsense. I've seen more than one company run either a local git server or something like a local GitLab instance. Using GitHub as a metric just ensures that you eliminate anyone not using GitHub, which includes many amazing open source devs, for example.

Comment by Brian_K_White 21 hours ago

But to someone else, it is a meaningful metric that you bookmarked something. It doesn't matter that the star isn't you saying you liked something. It's already telling enough merely that you wanted to bookmark it.

It's only not meaningful because of how other people can game it and fabricate it, but everything you just said, if it was only people like you, that would be a very meaningful number.

It doesn't even matter why you bookmarked it, and it doesn't matter that, whatever the reason was, it doesn't prove the project as a whole is good or useful. Maybe you bookmarked it because you hate it and want to keep track of it for reference in your TED talk about all the worst stuff you hate; but by the numbers, adding up everyone's bookmarks, the more likely case is that you found something interesting.

It doesn't even matter what was interesting or why. The entire project could be worthless and the thing you're bookmarking nothing more than some markdown trick in the readme. That's fine; that counts. Or it's all terrible, not a single thing of value, and the only reason to bookmark it is that it's the only thing that turned up in a search. Even that counts, because it still shows they tried to work on something no one else even tried to work on.

It's like, it doesn't matter how little a given star means, it still does mean something, and the aggregation does actually mean something, except for the fact of fakes.

Comment by whatisthiseven 20 hours ago

> it still does mean something

Yes...which is why I said it is an indirect variable, as caused by the other things I pointed out above. Age, quality, code, utility, whether issues are addressed, interest, etc. Or fraud. Pretty cut and dry.

FWIW, I almost never star repos. Even ones I use or like. I don't see the utility for myself.

Aim for a more concise post and don't couch your statements in doubt next time if you want a productive conversation, because I don't know what you are trying to say.

Comment by toss1 11 hours ago

>>The FTC's 2024 rule banning fake social influence metrics carries penalties of $53,088 per violation - and the SEC has already charged startup founders for inflating traction metrics during fundraising

Six million fake stars is just what this small crew found, likely in a matter of hours.

A fine of $53,088 times six million is $318.528 billion.
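That multiplication is easy to check:

```python
fine_per_violation = 53_088   # FTC penalty cited above, in dollars
fake_stars = 6_000_000        # fake stars the researchers found
total = fine_per_violation * fake_stars
print(f"${total:,}")          # $318,528,000,000, i.e. about $318.5 billion
```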

Just going hard after a small portion of that should both put an end to it and make a slight dent in the deficit.

This kind of fraud is rampant because everyone concludes the way to win is not to make a real advance, but to simply game the system. Seems they are not wrong because the lack of enforcement makes the rules meaningless.

Comment by coalstartprob 10 hours ago

any repo with stalebot = lots of CLOSED unresolved issues

Comment by q3k 19 hours ago

I have also never in my career consciously looked at the GH star counter on a repo, let alone used it to make decisions.

Instead I look at (in addition to the above):

1. Who is the author? Is it just some person chasing Internet clout by making tons of 'cool' libraries across different domains? Or are they someone senior working in an industry sector from whose expertise the project might actually benefit?

2. Is the author working alone? Are there regular contributors? Is there an established governance structure? Is the project going to survive one person getting bored / burning out / signing an NDA / dying?

3. Is the project style over substance? Did it introduce logos, discord channels, mascots too early? Is it trying too hard to become The New Hot Thing?

4. What are the project's dependencies? Is its dependency set conservative or is it going to cause supply chain problems down the line?

5. What's the project's development cadence? Is it shipping features and breaking APIs too fast? Has it ever done a patch release or backported fixes, or does it always live at the bleeding edge?

6. NEW ARRIVAL 2026! Is the project actually carefully crafted and well designed, or is it just LLM slop? Am I about to discover that even though it's a bunch of code it doesn't actually work?

7. If the project is security critical (handles auth, public facing protocol parsing, etc.): do a deeper dive into the code.

Comment by gobdovan 22 hours ago

These kinds of articles make you feel like there are specific, actionable problems that just need an adjustment and then they disappear. However, the system is much worse than you'd expect. Studies like this are extremely valuable, but they don't address the systematic problems affecting all signaling channels: most signals themselves have been manufactured into a product.

Build a SaaS and you'll have "journalists" asking if they can include you in their new "Top [your category] Apps in [current year]", you just have to pay $5k for first place, $3k for second, and so on (with a promotional discount for first place, since it's your first interaction).

You'll get "promoters" offering to grow your social media following, which is one reason companies may not even realize that some of their own top accounts and GitHub stars are mostly bots.

You'll get "talent scouts" claiming they can find you experts exactly in your niche, but in practice they just scrape and spam profiles with matching keywords on platforms like LinkedIn once you show interest, while simultaneously telling candidates that they work with companies that want them.

And in hiring, you'll see candidates sitting in interview farms quite clearly located in East Asia, connecting through Washington D.C. IPs, presenting themselves with generic European names and synthetic camera backgrounds, who somehow ace every question and already list experience with every technology you mention in the job post in their CVs (not hyperbole, I've seen exactly this happen).

If a metric or signal matters, there is already an ecosystem built to fake it, and faking it starts to be operational and just another part of doing business.

Comment by xorcist 21 hours ago

Well put!

Have an upvote. The first one is free.

Comment by DoctorOW 17 hours ago

> If a metric or signal matters, there is already an ecosystem built to fake it, and faking it starts to be operational and just another part of doing business.

https://www.xkcd.com/2899/

Comment by vachina 22 hours ago

It all boils down to making more money.

Comment by gobdovan 22 hours ago

Yeah, but it's not a great way to do it.

Short term, you pay the cost of fake signaling, which is simply deadweight loss. People spend resources to inflate signals instead of improving the actual thing.

Medium term, I suppose you could see how it increases consumption: users would probably try something with 100k stars instead of 2, GitHub wants to seem more used than it really is, and the repo owner also benefits.

Long term, the correspondence between how important a (distorted) system is perceived to be (GitHub, OSS, IT in general) and how important it really is collapses quite abruptly and unnecessarily, and you end up with a lemon market [0] where signals stop being reliable at all.

[0] https://en.wikipedia.org/wiki/The_Market_for_Lemons

Comment by mankins 21 hours ago

The spoilage by money is half right, but I think the more interesting part is where the money ends up and how that influences the system.

I'm increasingly convinced the issue isn't feedback itself, but centralized, global, aggregated feedback that becomes game-able without stronger identity signals.

Right now the incentives are tied (correctly or not) to these global metrics, so you get a market for faking them, with money flowing to whoever is best at juicing that signal.

If instead the signal was based on actual usage and attributions by actual developers, the incentives shift. With localized insight (think "Yeah, I like Golang") it becomes both harder to fake and harder to get at the metric rollup.

Useful reputation on the web is actually much more localized and personal. I gladly receive updates on, and would support, the repos I've starred. If I could choose where to put my dollars (not as an investor), it would likely include the list of repos I've personally curated.

This suggests a different direction: instead of asking "how many stars does this have?", ask "who is actually depending on this, and in what context?", or better, retroactively compare your top-n repos to mine and we'll get a metric seen through our own lenses. If you want to include everyone in that aggregation you'll end up where we are now, but if instead you choose the list, well, the stars could align as a good metric once more.
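One way to make the "through our lenses" idea concrete (a sketch of my own, not anything GitHub offers): score the overlap between two users' curated repo lists, e.g. with Jaccard similarity.

```python
def shared_taste(mine, yours):
    """Jaccard overlap between two users' curated repo lists: a localized
    signal that is harder to inflate than a global star count."""
    mine, yours = set(mine), set(yours)
    union = mine | yours
    return len(mine & yours) / len(union) if union else 0.0
```

A bought star only moves this number for people who already share the buyer's list, which is exactly the gaming resistance the localized approach is after.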

The interesting part is that the web already contains most of that information, we just don't treat identity as a part of the signal (yet? universally?).

Comment by aquariusDue 20 hours ago

Tangential but I have more than 5k repos starred (according to my GitHub profile) organized into lists but the way I discover interesting stuff on GitHub these days is through people I follow. Follow interesting people, find interesting stuff. Sometimes it's that easy.

What's more, it became obvious to me two or so years ago that GitHub is slowly but surely going the way of LinkedIn. Lots of professionals are on there just because it's expected of them; some interact occasionally with the "social media" aspect of it, and fewer still really thrive on that part. Time will tell how this pans out, but just look at how many developer and Linux influencers became huge on YouTube and other places this last year. Most of them barely had more than 10k subscribers 3 years ago, and now people look to them for their next tech stack and hot framework/tool/library/distro and so on.

Comment by philipallstar 22 hours ago

Of course - money is a good proxy for value in these instances. Not perfect, but good.

Comment by motakuk 22 hours ago

In the end it's a company's choice: you either buy BS metrics or you don't.

We've recently decided to complicate life for AI bots in our repo (https://archestra.ai/blog/only-responsible-ai), hoping they will just choose the AI startups that are easier to engage with.

Comment by whattheheckheck 21 hours ago

Yeah imagine how nature feels with all of the fake eyes and other fake predator signals like bright colors. Evolution finds a way

Comment by gobdovan 20 hours ago

A way for what? Evolution does not have a will and does not work for anybody in particular, and it's statistically likely to work against you: almost all species that ever existed went extinct, and almost no individual from just 15 generations ago has left DNA in present-day people. [0] [1]

[0] http://www.stat.yale.edu/~jtc5/papers/Ancestors.pdf [1] https://pubmed.ncbi.nlm.nih.gov/11542058/

Comment by donatj 22 hours ago

I run a tiny site that basically gave a point-at-able definition to an existing adhoc standard. As part of the effort I have a list of software and libraries following the standard on the homepage. Initially I would accept just about anything but as the list grew I started wanting to set a sort of notability baseline.

Specifically someone submitted a library that was only several days old, clearly entirely AI generated, and not particularly well built.

I noted my concerns with listing said library in my reply declining to do so, among them that it had "zero stars". The author was very aggressive, and in his rant of a reply asked how many stars he needed. I declined to answer; that's not how this works. Stars are a consideration, not the be-all and end-all.

You need real world users and more importantly real notability. Not stars. The stars are irrelevant.

This conversation happened on GitHub, and since then I have had other developers wander into that conversation and demand I set a star-count threshold for my "vague notability requirement". I'm not going to; it's intentionally vague. When a metric becomes a target, it ceases to be a good metric, as they say.

I don't want the page to get overly long, and if I just listed everything with X star count I'd certainly list some sort of malware.

I am under no obligation to list your library. Stop being rude.

Comment by utopiah 22 hours ago

> When a metric becomes a target it ceases to be a good metric as they say.

https://en.wikipedia.org/wiki/Goodhart's_law

Comment by kindkang2024 21 hours ago

Nice to know the name for this — Goodhart's Law. And I think the core reason is that the cost to fake these metrics is far less than what they claim to represent. Stars, reviews, ratings, trading volumes — all cheap to manufacture, and only getting cheaper with AI.

I've been thinking about this a lot. These metrics are all just marketing signals to draw people's attention, trying to make some kind of deals. So the fix should be: make the cost of the signal match what it claims to represent. I'm obsessed with something called DUKI /djuːki/ (Decentralized Universal Kindness Income, a form of UBI) — the idea is that instead of stars or reviews, trust comes from deals pledging real money to the world for all as the deal happens. You can't fake that cheaply.

So the metric becomes the money itself — if you fake X amount, it costs you X, and the world will thank you by paying attention...

Imagine if GitHub let you back a star with real money — the more you put in, the more credible the star. And that money goes out as UBI for everyone. For attention makers, star anything you want, as much as you want. For attention takers, just follow the money to filter through all the noise that's so easy to manipulate...

Comment by utopiah 17 hours ago

> make the cost of the signal match what it claims to represent.

Well, that's the WHOLE problem of trust. There is so much work on blockchains (proof of work, proof of stake, etc.) precisely to protect against attacks, e.g. https://en.wikipedia.org/wiki/Sybil_attack

If you do find a way, it would apply to a lot more than "just" GitHub stars for VCs.

Comment by kindkang2024 8 hours ago

> Well that's the WHOLE problem of trust.

Great point. It all comes down to trust.

Some are masters at setting attention traps. They manipulate all these cheap metrics that normal people naturally pay attention to, confusing potential deal parties, serving self-interest while increasing risk and causing harm to the other side.

> It would apply to a lot more than "just" GitHub star for VCs

Yes. It would apply to a lot more than "just" GitHub stars for VCs — even more so if the 'interactions' are naturally deal-based.

Imagine a metric for proof of work named DUKI-ALM. If you give away $100 to the world, you gain 100 DUKI-ALM — absolutely equal to the cost.

Think of it as tips contributed jointly by the taker and maker of a deal, paid out to the world. The DUKI-ALM signal is the sum of all tips. They tipped $10? The metric value is 10.

Products that have the ability to make more deals and generate more surplus will naturally contribute more — and gain more signal of trust. Sybil attacks are prevented by design, since what's the point of attacking if you still need to tip the world 100 USDT to gain 100?

I'd love to hear if you see a hole in this — the cost of the signal matches exactly what it claims to represent.
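The mechanics described above are simple enough to sketch. A toy ledger, purely for illustration (DUKI-ALM is the commenter's proposal; the `DukiLedger` class and all names here are my own invention, not any real system):

```python
from collections import defaultdict

class DukiLedger:
    """Toy model of a tip-backed trust metric."""

    def __init__(self):
        # Signal per product is just the running sum of dollars pledged.
        self.signal = defaultdict(float)

    def record_deal(self, product, maker_tip, taker_tip):
        # Both sides of the deal tip; the pledge itself is the metric,
        # so faking X units of signal costs exactly X dollars.
        self.signal[product] += maker_tip + taker_tip

ledger = DukiLedger()
ledger.record_deal("libfoo", maker_tip=6.0, taker_tip=4.0)    # $10 tipped -> 10 signal
ledger.record_deal("libfoo", maker_tip=90.0, taker_tip=10.0)  # $100 tipped -> 110 total
```

The point of the sketch is only that the signal and the cost are the same number, which is what distinguishes this from stars.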

Comment by mauvehaus 21 hours ago

Can anyone explain why on earth VCs are making actual investment decisions based on imaginary internet points? This would be like an NFL team drafting a quarterback based on how many Instagram followers they have rather than a relevant metric like pass completion, or god forbid, doing some work and actually scouting candidates. Maybe the Cleveland Browns would do that[0], but it's not a way to mount a serious Super Bowl campaign[1].

Are VCs just that lazy about making investment decisions? Is this yet another side effect of ZIRP[2] and too much money chasing a return? Is nobody looking too hard in the hope of catching the next rocket to the moon?

From the outside, investing based on GitHub stars seems insane. Like, this can't be a serious way of investing money. If you told me you were going to invest my money based on GitHub stars, I'd laugh, and then we'd have an awkward silence while I realize there isn't a punchline coming.

[0] I'm from Cleveland. I get to pick on them.

[1] https://en.wikipedia.org/wiki/List_of_Cleveland_Browns_seaso... I think their record speaks for itself.

[2] https://en.wikipedia.org/wiki/Zero_interest-rate_policy

Comment by arjie 19 hours ago

Because the entire point is to be early to something here. If you wait for profitability, the guy is already funded. So you have to use proxies, and the proxies will be imperfect, but you don't have to be perfect. You just need some degree of performance. Stars are (were) an early indicator of community interest and predictably became goodharted when this became known. But I don't think anyone has seriously used stars for VC targeting since 2022, so this is sort of old hat.

It's a bit like the old article about evaluating software companies on whether they have version control or not. Everyone has version control now.

Comment by Aurornis 20 hours ago

Social proof has always been a factor in investments. Not the only factor, but seeing signs of popularity has always been an input to investment decisions.

The entire game of startup investing is to identify breakout companies early. Social proof (when valid, not faked) of interest is one of the strongest signals of product market fit.

If a product has a lot of attention (users, headlines, stars, downloads, DAU) that’s a signal that it could also have a lot of customers some day. This is also why all of those metrics are targets for manipulation.

> This would be like an NFL team drafting a quarterback based on how many instagram followers they have

Major sports teams are about engaging fans. If a promising recruit had a huge social media presence, then that could be a contributing factor toward trying to recruit that player.

This is actually easier to understand if you look at the inverse: sometimes there are players with amazing stats but who have a cloud of controversy following them. Teams will skip over these problematic players despite their performance, because having popular and engaging players is important for teams, while having anti-popular players will drive away fans.

Comment by xnorswap 21 hours ago

I don't follow American Football so I don't know how coaching contracts work for you guys, but how does someone go 1W 15L one season, survive as head coach to go 0W 16L the next season, and still start the next season after that as head coach?

Over here the fans would be singing "You're getting sacked in the morning" halfway through that first season.

I guess not having relegation makes things slightly less ruthless for you.

Comment by mkovach 21 hours ago

The prevailing narrative here is that the team was actively looking to lose to acquire draft picks. Hue Jackson was extremely good at losing, so he stayed.

The owner of the Cleveland Browns uses the team to generate more revenue. For NFL teams, performance has little to do with their value or ability to generate additional revenue.

There is no strong financial incentive to win in the NFL, aside from the owner's ego. The Browns' owner's ego is driven by money, and the result shows on the field.

Comment by datsci_est_2015 19 hours ago

> For NFL teams, performance has little to do with their value or ability to generate additional revenue.

Like an allegory for performative capitalism in America. Profit and quality completely decoupled in the wake of market capture (rent seeking).

Comment by triceratops 14 hours ago

> The prevailing narrative here is that the team was actively looking to lose to acquire draft picks

But if they don't care about winning, why bother getting good draft picks?

Comment by xnorswap 1 hour ago

From doing more research about this, it seems they don't want to use the good draft picks, but to sell them on to teams that do want to win.

The draft pick is itself a commodity that can be traded*, so by losing they get a premium commodity that they can sell on, and by selling their picks they ensure that they continue to lose and keep acquiring the valuable commodity.

They even lobbied to tweak rules around selling draft picks: https://www.reuters.com/sports/browns-ask-nfl-allow-draft-pi...

* This seems completely absurd to me, but perhaps there would just be backroom deals otherwise, and having it sanctioned brings it into the light?

Comment by rhplus 20 hours ago

Yeah, exactly. The NFL is a closed system franchise. The same 32 teams play every season whether they win or lose. No team risks relegation to a lower revenue league. Every team gets a roughly equal share of the franchise revenue regardless of performance.

Comment by bombcar 21 hours ago

Owners don’t care about winning, but about profitability. And you can make a lot of money with a failing football team (selling/trading draft picks, etc) and your fans get used to losing …

Comment by xnorswap 21 hours ago

Right, I forgot you guys have "The Draft", so failing is an advantage, doubly so if you can sell your draft picks, because then you can keep losing by having sold away the mechanism for getting you competitive again.

I am so glad the proposed "European super league" was killed off so hard, so that we don't get a franchise model, it produces so many adverse incentives.

Comment by bombcar 20 hours ago

The thing I like about EU football is that if your team sucks arse through a garden hose too long, your entire team gets demoted to a lower league.

That would put a fire under some asses!

Comment by rbonvall 14 hours ago

Note that it's not a European thing. It's how most football leagues around the world work.

Comment by triceratops 14 hours ago

Drafts and no relegation.

Comment by lordgrenville 21 hours ago

More to the point, in the US losing teams get rewarded in the form of draft picks, which sometimes creates perverse incentives. This doesn't exist in European football. (Disclaimer: I know almost nothing about American sports.)

Comment by bombcar 20 hours ago

Draft picks + salary caps and the various workarounds involved there make it more of a financier's dream than a competitive sport.

Comment by The_Blade 21 hours ago

just wait until you get to the subject of tanking in the NBA

is there a tech equivalent? like you do a crappy job with your series A on purpose which helps you get a better series B. although there is the notion of a big round of layoffs to secure further investment

Comment by mauvehaus 21 hours ago

In truth, I don't follow sports much, but I'm really not sure either.

I do find the European football (soccer) model of promotion and relegation much more interesting, both for culling perennially hopeless teams from top-tier competition, and for having a place to play for people who aren't absolute superstars.

Comment by The_Blade 21 hours ago

Browns fan in. We're owned by a criminal (truck stop-related fraud) who was convinced by a homeless person to draft Johnny Manziel. trust me, we want to put him (and Paul Dolan) into graveyard orbit. but it's not like Vercel where you can just go use AWS or Cloudflare or whatever; and it's not like switching makes you weak, you stand by your team through the hard times!

plus, what is an NFL fan going to do, stop watching football? hahahahahaha

Comment by mkovach 21 hours ago

Hey, you can say that the Dolans should/could spend more, but I don't think you really want an owner who has solidified the team in Cleveland, has the fourth-best record in baseball over the past 10 years, and has seven recent playoff appearances in the graveyard.

The Haslams? Yeah, they should really sell the team, but I figure in about 10-15 years, they'll move it out of Cleveland.

Comment by BigTTYGothGF 19 hours ago

> plus, what is an NFL fan going to do, stop watching football? hahahahahaha

Former Seahawks fan here, it's easier than you think. (It wasn't their record, I stuck with them through the 90s after all, it was realizing what CTE meant for the players).

Comment by gopher_space 15 hours ago

As far as I could tell everyone got a Sounders jersey in the mail as part of the Sonics move.

Comment by bena 17 hours ago

If Jackson signed a three-year contract, then the Browns would be paying him for three years regardless. Even if they fired him. Then he could go work for another team and get paid by both teams.

Regardless, a coach is given some leeway their first season. They were coming off a 3-13 season, so 1-15 isn't that much of a drop. Jackson could make the case that he needed another season to build his ideal roster.

Then after going 0-16, they were on track to get Mayfield. He could have made the case that if he can't win with Mayfield, then maybe he just can't win.

Then he didn't win with Mayfield.

Comment by riazrizvi 14 hours ago

VCs are middlemen. It's not their money. As long as they can find a narrative to raise money, make a commission, and do damage control on their reputation after the fallout, their side pieces will never want for anything.

Comment by sidewndr46 20 hours ago

Hiring a QB based off Instagram followers isn't even that unrealistic. If you can put together a team to win the Super Bowl, sure, do it! If not, just get together a team that people enjoy following and watching. Much of that would be putting athletes on the field that people engage with.

Comment by bigfishrunning 19 hours ago

I agree! Maybe the Browns have never played in a Super Bowl, but I would imagine they are still making a profit. Different goal.

Comment by neom 20 hours ago

As far as I recall it started in 2014 or so. Yes, metrics could be (and were) gamed, but there was still a belief in VC that OSS projects could turn into Red Hats. First I heard of it was when a VC told me they were "looking for the next Docker" and mentioned something about RancherOS and how quickly its stars/follows were growing. In VC you tend to have conviction builders and conviction buyers. I suppose what happened was some conviction builders used growth of a project on GitHub as part of a leading indicator (valid or otherwise), and conviction buyers picked up on that as a method.

Comment by ninjahawk1 16 hours ago

$1M-$10M is basically nothing for the big companies investing, they can literally mark it off as costs, and if one of them so happens to actually succeed they make it all back plus some.

It’s not that they’re necessarily careless, it’s just that the bigger the net the more fish you catch. And when you own both all the fishing boats and all the nets…might as well cast wide.

Comment by bombcar 21 hours ago

This has happened in multiple industries a number of times - publishers discover that people with large Twitter followings sell a decent number of books, so they start selecting new authors who only have large Twitter followings, and discover it was correlation and not causation.

And once it gets out that it’s a selection criteria it gets gamed to hell and back.

Comment by mauvehaus 21 hours ago

I appreciate this, but the stakes have to be a lot lower bringing a book to market than making a $1,000,000 to $10,000,000 seed investment. I'd sort of expect that when you're dealing with sums of money that size there would be some grown-ups in the room.

Comment by consp 20 hours ago

You just stated Goodhart's law in effect.*

[*] https://en.wikipedia.org/wiki/Goodhart%27s_law

Comment by ngruhn 20 hours ago

Also known as Goodhart's law.

Comment by 4er_transform 19 hours ago

No VC makes an investment off the star count. It’s a signal to identify opportunities in the noise.

Once surfaced, there’s other signals to filter if an initial conversation is even worth it.

Assuming everyone else is just stupid and it’s all luck is a good way to hold yourself back from your potential.

Comment by jcalx 19 hours ago

> This would be like an NFL team drafting a quarterback based on how many instagram followers they have rather than a relevant metric like pass completion, or god forbid, doing some work and actually scouting candidates. Maybe the Cleveland Browns would do that

Not quite the same, but the New York Jets (one of the few NFL teams that can match the dysfunction of the Browns — they have the longest active playoff drought in big 4 North American sports) passed on a few successful players because the owner, Woody Johnson, reportedly didn't like their Madden (video game) ratings [0]:

> A few weeks later, Douglas and his Broncos counterpart, George Paton, were deep in negotiations for a trade that would have sent Jeudy to the Jets and given future Hall of Fame quarterback Aaron Rodgers another potential playmaker. The Broncos felt a deal was near. Then, abruptly, it all fell apart. In Denver’s executive offices, they couldn’t believe the reason why.

> Douglas told the Broncos that Johnson didn’t want to make the trade because the owner felt Jeudy’s player rating in “Madden NFL,” the popular video game, wasn’t high enough, according to multiple league sources. The Broncos ultimately traded the receiver to the Cleveland Browns. Last Sunday, Jeudy crossed the 1,000-yard receiving mark for the first time in his career.

...

> Johnson’s reference to Jeudy’s “Madden” rating was, to some in the Jets’ organization, a sign of Brick and Jack’s influence. Another example came when Johnson pushed back on signing free-agent guard John Simpson due to a lackluster “awareness” rating in Madden. The Jets signed Simpson anyway, and he has had a solid season: Pro Football Focus currently has him graded as the eighth-best guard in the NFL.

[0] https://www.nytimes.com/athletic/6005172/2024/12/19/woody-jo...

Comment by gk-- 21 hours ago

> This would be like an NFL team drafting a quarterback based on how many instagram followers they have rather than a relevant metric

sounds like how the ufc does it

Comment by jazzypants 19 hours ago

It's strange that I don't even get defensive about people picking on the Browns anymore. Weirdly, them giving a serial rapist over $200,000,000 was actually good for my mental health long-term. After 30 years of tying myself in knots trying to explain away their idiocy, I don't have to be weighed down by their terrible decisions anymore.

Comment by jt2190 15 hours ago

> Union Labs is the most consequential case. It was ranked #1 on Runa Capital's ROSS Index for Q2 2025 - a widely cited VC industry report identifying the "hottest open-source startups" - with 54.2x star growth and 74,300 stars. Our analysis found 32.7% zero-repo accounts, 52% zero-follower accounts, and a fork-to-star ratio of 0.052. The StarScout analysis flagged it with 47.4% suspected fake stars. An influential investment-sourcing report that VCs rely on was topped by a project with nearly half its stars suspected as artificial.
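The ratios quoted in that analysis are straightforward to compute from stargazer metadata. A hypothetical sketch (the `Stargazer` fields, the `suspicion_metrics` helper, and the thresholds implied are illustrative only, not StarScout's actual method):

```python
from dataclasses import dataclass

@dataclass
class Stargazer:
    """Minimal slice of account metadata for one starring user."""
    public_repos: int
    followers: int

def suspicion_metrics(stargazers, fork_count):
    # Fraction of starring accounts with no public repos, fraction with
    # no followers, and forks per star: throwaway bot accounts tend to
    # push the first two up and the third down.
    n = len(stargazers)
    return {
        "zero_repo_ratio": sum(1 for s in stargazers if s.public_repos == 0) / n,
        "zero_follower_ratio": sum(1 for s in stargazers if s.followers == 0) / n,
        "fork_to_star_ratio": fork_count / n,
    }

# Tiny worked example: 3 of 4 accounts have no repos, 2 of 4 no followers.
sample = [Stargazer(0, 0), Stargazer(0, 5), Stargazer(0, 0), Stargazer(12, 40)]
metrics = suspicion_metrics(sample, fork_count=1)
```

None of these ratios proves fraud on its own; the point is that they are cheap to compute and, as the figures above suggest, diverge sharply between organic and purchased star populations.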

Comment by boondongle 16 hours ago

Almost all industries have divested themselves from the thing they supposedly had expertise in, toward exploiting crowdsourced alternatives and then pocketing the extra money.

Record labels did this with SoundCloud, or by only picking up people who already had a following. Movies have done this repeatedly with adaptations (due respect to people who like their books remade faithfully; there's a reason the 70s/80s were that decade, and it basically stopped once comics/LotR arrived). A24 represents a disruption, not a normal studio. Books did this with webnovels, or by paying dirt cheap and making the author do the marketing.

What people tend to forget is they're just resource gatekeepers. They could just choose to invest in offices with cats because an office with cats popped massively one time and you can't say they're wrong, because there's no alternative funding you can get to A/B test with. In theory there are different firms - but they often went to the same schools, same peer group, same fraternity/sororities, and once they're in the wild they all know each other. It's not a different behavior if it's VC or if it's Nashville.

The real question is how long before either the government busts this up, or someone notices the lack of care with money and shops alternatives. In theory this is also partially why American firms face international risk: with no one obliged to respect their laziness, someone can break their model.

That said, I'm not saying they're not smart; just that there's often a tendency to delude themselves that the shortcuts they took represent a "good job" rather than "no one can really say we're doing it poorly".

Comment by ertgbnm 19 hours ago

Instagram follows are not a good way to hire football players, but they're probably a good way to hire Instagram influencers. The football analogy is a little unfair because VCs are investing in more than just a company's ability to "play football": they are investing in the brand, the marketing, and the vision. GitHub stars are at least an indication of a startup having a promising brand or some ability to market themselves.

Nevertheless, VCs are in fact pretty dumb sometimes, and it'd be stupid to invest solely based on stars.

Comment by pmichaud 15 hours ago

I think it's probably a little bit about Goodhart? At some point soon after stars were widely in use but prior to them being connected to any particular incentive I bet they were actually a great signal of... something. But then once someone started using the signal to give attention or dollars, the signal was compromised.

Comment by jt2190 15 hours ago

> … VC's are making actual investment decisions based on imaginary internet points?

The only claim here is that there's a report that tracks GitHub star growth that is presumably read by VCs:

>> Union Labs is the most consequential case. It was ranked #1 on Runa Capital's ROSS Index for Q2 2025 - a widely cited VC industry report identifying the "hottest open-source startups" - with 54.2x star growth and 74,300 stars. Our analysis found 32.7% zero-repo accounts, 52% zero-follower accounts, and a fork-to-star ratio of 0.052. The StarScout analysis flagged it with 47.4% suspected fake stars. An influential investment-sourcing report that VCs rely on was topped by a project with nearly half its stars suspected as artificial.

Again, the claim here is that the report is “influential”. Maybe?

Comment by meager_wikis 14 hours ago

And just like with VCs, if that one draft pick does eventually lead to a Super Bowl win, all the Johnny Manziel picks will be forgiven.

Comment by kibwen 20 hours ago

[flagged]

Comment by dang 13 hours ago

Maybe so, but please don't call names, fulminate, or post unsubstantive comments to Hacker News. You can make your substantive points without these, and we're trying for something different here: https://news.ycombinator.com/newsguidelines.html.

Perhaps you don't owe morons, VCs, or rich people better, but you owe this community better if you're participating in it.

Comment by jordanb 20 hours ago

Listening to All In is a real eye opening experience. Especially when they have guests on and they're exactly like the regulars.

Comment by bix6 20 hours ago

I have zero respect for All In. It’s a shame people pay those guys any mind.

Comment by indymike 14 hours ago

An echo in the sounding chamber? Say it isn't so...

Comment by next_xibalba 19 hours ago

I actually think this take is wrong... but the moment Travis Kalanick was a guest and claimed that he was on the verge of discovering new physics with the aid of ChatGPT was an eye opening moment.

Comment by HoldOnAMinute 16 hours ago

"new-to-me" physics

Comment by jordanb 16 hours ago

Not even that it was just a bunch of AI-psychosis nonsense:

https://futurism.com/former-ceo-uber-ai

Comment by nipponese 19 hours ago

this is compounded by young, newly rich tech workers (no kids, no mortgage, maybe not even a car) experimenting with being a VC because they've recently reached accredited investor status.

and it's not just ZIRP. every recent IPO or liquidity event creates literally 500 more of these guys.

Comment by adamkf 19 hours ago

> maybe not even a car

Hold up — one can be mature without any of those things, but cars are especially optional.

Comment by nipponese 18 hours ago

maturity has nothing to do with it. these are recurring expense liabilities with very very distant return horizon.

Comment by a1o 16 hours ago

Depending on which city you live, my feeling is owning a car is a lot less optional once you have kids, at least in their earlier years.

Comment by LargeWu 19 hours ago

They say Silicon Valley was more of a documentary than a comedy, and now we have one more way life imitates art: a growing army of Erlich Bachmans.

Comment by arealaccount 15 hours ago

Jian-Yangs

Comment by Esophagus4 14 hours ago

Big Heads

Comment by jasonmp85 18 hours ago

[dead]

Comment by necubi 17 hours ago

Those would be angels, not VCs. VCs manage outside money.

Comment by nipponese 16 hours ago

That's a really good distinction. I realize this is a little snide, but I imagine when they look in the mirror they still see themselves as Marc Andreessen.

Comment by bloppe 13 hours ago

VCs mostly invest other peoples' money, not their own. The "rich people" are often pension funds, endowments, and other pools of capital rather than individual morons.

VCs themselves probably suffer from chronic overestimation of their own intelligence, but there just aren't many good signals at the stage of companies they're looking at. No customers, no revenue; often just an idea and hopefully a prototype. GitHub stars are as good a signal as letters of intent, which is to say: a bad signal, but at least a signal. Other than that, they have to just evaluate what the founders are telling them (generally unrealistically optimistic at best) and whatever market research they can do (which is hard enough for the founders to do for their own product; imagine doing this for a dozen different companies every day).

Of course GitHub stars are a terrible signal, but the bar for signal quality is just really low.

Comment by paulorlando 13 hours ago

> the bar for signal quality is just really low

During the Covid era, when money was flowing more readily, I worked with a series A startup founder on improving their unit economics. They were spending a lot on customer acquisition, but after my analysis I realized that they were losing money on each customer. It didn't matter how long you ran the timeline; with churn they never broke even on new customers. When I recommended cutting marketing spend, they told me that they needed to show topline growth -- because that's what investors were looking for. And they knew from experience that the investors didn't dig into the numbers enough to realize they were growing themselves broke.

Comment by roarcher 14 hours ago

I used to be afflicted with the notion that wealth was correlated with a person's intelligence and work ethic. I was miraculously cured when I went to work for a startup that had to periodically impress VCs to raise capital.

Comment by MASNeo 13 hours ago

VCs have no money of their own, and few have enough to manage the money entrusted to them. The people who give them that money have it because they ask people to do more with less. In that pinch, one might understand how VCs look to proxies, and often fail to find great ones, but at least then it wasn't expensive. Ever tried to sell a VC anything except equity? It's an educating experience.

Comment by georgeecollins 19 hours ago

True, but the way I would frame it is that we are all morons in someone else's eyes. No one is as smart as they think they are. The mistake Americans make is thinking that rich people are so smart that everything they do is smart.

Comment by kibwen 18 hours ago

Yes, but while we're all born stupid, rich people are subject to forces that actually make them dumber than the average person. Normally people learn from failure because they experience tangible negative consequences as a result of failure. But money is a better insulator than a vacuum, and once you're sufficiently wealthy, failure no longer has any tangible negative effect on your quality of life. Lose ten billion dollars? Lose 90% of your net worth? You and your kin will still be living lives of ease and luxury for generations to come. They're destined to be morons because there's no pressure forcing them to learn from their mistakes.

Comment by georgeecollins 17 hours ago

I don't really see any evidence that the poor are smarter than the rich. You can say that a wealthy person is more insulated from their mistakes, but I think poor and rich react very similarly to major financial setbacks. Yeah, a wealthy person can do something dumb all the time and not feel it, but then a poor person may be doing proportionately as badly with things like credit card fees or sports gambling. If you think you are smarter because you are a "normal person", then I think you need to consider that everyone is dumber than they realize.

Comment by kibwen 16 hours ago

> I don't really see any evidence that the poor are smarter than the rich.

This is attributable to survivorship bias. Poor people who make bad decisions either die in a gutter or die in a cell. Rich people who make bad decisions get golden parachutes and fail upward ad infinitum.

> If you think you are smarter because you are a "normal person", then I think you need to consider that everyone is dumber than they realize.

Firstly, to paraphrase Socrates, I fully own that I'm a dumbass, secondly the events of the past decade made it starkly clear that, yes, everyone is dumber than they realize, and now I'm forced to realize it, too.

Comment by zerkten 19 hours ago

I don't think this is always true, but it's true a lot. I think there are better descriptions than "moronic" as well. People use "moronic" when someone is just as smart but has a different (and possibly better) direction; it's just that it defies the will of the other person.

These people go to the extreme and feel they have to outdo each other in an arms race to win whatever category it is today.

You can have extreme ambitions without being a moron. It's possible for someone to be empathetic, but also really driven. The problem is that they are locked in a downward spiral and they can't possibly be vulnerable. It's only when they run out of money, or some other extreme event occurs that they change tack. That's moronic, especially when the outcomes are predictable.

There is a lot to be said about SV culture and the people that surround these VCs. A lot of people love these environments and more than tolerate the environment these VC folks create. It's hardly a new phenomenon.

Comment by nradov 16 hours ago

We're all morons outside of some very narrow areas of expertise. By most criteria James Mattis would be considered a smart guy: he earned a Master's degree, commanded troops effectively in combat, and served as Secretary of Defense. And yet he fell for the Theranos fraud. You have to know your limitations, and too many people think that because they're good at one thing they must be geniuses who are good at everything. Engineers are especially prone to this delusion.

Comment by radiator 13 hours ago

> We're all morons outside of some very narrow areas of expertise

speak for yourself, I guess? Some people know things in many areas. But even if they are not experts outside of their areas of expertise, they may recognize their limitations in other areas and thus avoid making costly mistakes. This may even be the rule for adults, rather than the exception.

Comment by nradov 10 hours ago

Some people believe they know things in many areas. This is typically the Dunning-Kruger effect in action.

Comment by rvba 16 hours ago

What does a soldier / political appointee know about finance?

Comment by nradov 15 hours ago

Exactly. That's the point.

Comment by cmrdporcupine 20 hours ago

The answer isn't that they're morons. It's that they aren't people who "invest" in "good businesses" to make money; they are instead, on the whole, a class of individuals engaged in gambling on high-risk ventures that could have absolutely massive returns, and they don't care if 90% of them fail and 9% flounder, because the 1% that succeed bring in absolutely apeshit amounts of $$ when they are acquired by someone else.

Using things like github stars is clearly stupid, but not in the way you're suggesting. They're using the GH stars as a proxy metric for "someone else will come along and give money bags to this person later, so I should get in early so I can take that money eventually."

They're operating on a metric of success that is about influence and charisma and connectedness, not revenue or technical excellence.

Again, VCs don't care if you'll make a profitable business some day. They're just interested in if someone else will come along and pay out giant bags of cash for it later in a liquidity event. If they get even one of those successes, all the stupid GH star watching pays off.

Here's another way of framing it: any harms from the false positives around "He has a lot of GH stars" or "He went to Stanford" or "I know his father at the country club" are more than mitigated by the one exit in 1000 that makes a bunch of people filthy rich.

We shouldn't expect VCs to be something they're not. But we are missing something in between VCs on one side and self-financing and bootstrapping on the other.

Comment by palmotea 19 hours ago

> Again, VCs don't care if you'll make a profitable business some day. They're just interested in if someone else will come along and pay out giant bags of cash for it later in a liquidity event. If they get even one of those successes, all the stupid GH star watching pays off.

And if that's true, they should be slapped, hard. They're no longer performing a socially useful function, and have degraded towards pure financialization. Some middleman between fools and their money.

As much as I don't like Altman, VC should be pumping money into startups like Helios--companies pursuing cutting-edge technology that could totally fail (yes, that's an organic em-dash).

Comment by nradov 16 hours ago

You should start your own VC firm, solicit cash from LPs, and invest in companies like Helios. Go ahead, no one will stop you.

Comment by palmotea 4 hours ago

> You should start your own VC firm, solicit cash from LPs, and invest in companies like Helios. Go ahead, no one will stop you.

Why would I want to do that? I think they should be disciplined until they perform a more socially useful function. Competing with them is an entirely different thing that's unlikely to accomplish that goal.

Comment by wolvesechoes 3 hours ago

"Companies systematically operate in a way that negatively impacts the society" "Just start you own company bro"

This was, is, and always will be the most braindead response in such conversations. I bet you even use the word entrepreneur unironically.

Comment by cmrdporcupine 19 hours ago

I don't think there's ever been an argument that anybody in a free market capitalist economy has to perform a "socially useful function"?

I do think that ZiRP distorted things extremely badly. There's an entire generation in this software industry that lives around the business-culture expectations set during that time which as far as I could see basically amounted to "I build Uber but for X" (where X is some new business domain).

Perhaps after a bit of a painful interregnum things will be a bit different now that rates are higher and risk along with it.

Also anybody can throw a SaaS together in a few days now. Separating the wheat from the chaff in the next few years will be... interesting.

Comment by palmotea 19 hours ago

> I don't think there's ever been an argument that anybody in a free market capitalist economy has to perform a "socially useful function"?

That's an extremely strong statement, and may only be true in libertarian-land, where pure capitalism is a god to be worshiped and "good" has been redefined to be "whatever the unregulated free market does."

But in the real world, capitalism is a tool to perform socially useful functions (see the marketing about how it was better able to do that than Soviet central planning). When it fails, it should be patched by regulation (and often is) to push participants into socially useful actions, or at least discourage socially harmful ones.

Comment by cmrdporcupine 19 hours ago

How is it strong or controversial? It's the open ideology of the times.

I didn't say I agree with it.

Comment by palmotea 19 hours ago

> How is it strong or controversial? It's the open ideology of the times.

You said:

>>>> I don't think there's ever been an argument that anybody...

I just made such an argument, and the fact that I'm not alone can be inferred from the actions of the government in regulating capitalism. Also, if you read the newspaper, it's fairly frequent to see an op-ed decrying some particular market entity and advocating for something to stop what they're doing.

Also you'll note I wasn't arguing "everyone at all times needs to perform a socially useful function," but rather "we've identified a particularly important area where the social utility is too low, let's do something about that problem."

Comment by antonvs 15 hours ago

Helios? Altman? I think you mean Helion, the fusion company, not Helios (quantum computers). Either way, that’s a pretty hilarious example to use when you’re criticizing VC foolishness. Anyone doing due diligence wouldn’t go near those companies.

Spoiler alert: commercial fusion power may not be practical at all, but if it is, the timescales to delivery are measured in numbers closer to centuries than decades. Don’t fall for the hype generated to sucker those VCs you’re talking about.

Comment by palmotea 4 hours ago

> Helios? Altman? I think you mean Helion, the fusion company, not Helios (quantum computers). Either way, that’s a pretty hilarious example to use when you’re criticizing VC foolishness. Anyone doing due diligence wouldn’t go near those companies.

I actually don't care that much about the specifics, just the general direction: pumping money into developing technology that doesn't exist, instead of looking to flip yet another software startup where GitHub stars would matter.

Comment by biker142541 19 hours ago

“no longer”… lol

Comment by MASNeo 13 hours ago

Yes. Made my day!

Comment by groundzeros2015 17 hours ago

I know this isn’t quite your point. But for the portfolio approach to be plausible you have to play as if all of them will succeed, and only later sort out the failures.

If you mentally say “well 90% fail so I’ll just throw in this dog shit to see what happens” then you increase the failure rate.

Comment by groundzeros2015 13 hours ago

In other words, VCs should be taking a risk primarily by being very early. They should not be taking risks on low quality people or projects on the off chance that something good comes out.

Comment by cmrdporcupine 17 hours ago

Yeah I can see that.

Another thing I was thinking as I was re-reading this thread is that for some VCs the fact that you can game your GH star count might in fact read as a positive signal. It shows you're willing and able to play a kind of vicious PR game to get popularity.

Again, VCs aren't interested in pure technical excellence or geek "cred". They likely want you if you're the kind of person who can stand in front of a room and puff yourself up and make yourself look more important than you are, and frankly "acquiring" GH stars might just be part of that.

I think it's awful; I could never do that, and my values wouldn't let me buy stars or lie about my projects.

Hence I've been working in this industry for 30 years this year, and I'm still a wage labourer.

Comment by groundzeros2015 14 hours ago

I think this is accurate. It’s a kind of promotion and commitment acumen.

Comment by mghackerlady 20 hours ago

hey shhh the ycomb gods might hear us

Comment by rtaylorgarlock 18 hours ago

I got downvoted on another post mentioning VCs. Confirmed everything I already knew about the sheeple

Comment by twic 20 hours ago

VCs are, traditionally, people who made a lot of money in a lottery and think that makes them experts. It's virtually guaranteed they're idiots.

Comment by calebkaiser 15 hours ago

I've now worked on two open source infrastructure projects that raised money, and am friends with people involved in many more. I'd put a couple of asterisks next to the claims in this article:

- VCs definitely cared about our Stars, especially in early stages, but not as our primary metric. I suppose Stars might be the primary metric if they're truly off the charts, but usually they're just one of many social proof signals an investor might look at.

- Investors, especially at the earliest stages, are quite a varied bunch. Some were diligent about looking at who was leaving Stars on the repo (i.e. are these accounts fake/do they belong to potential future customers). Some less so. This is true for basically every metric (see: startups that grossly misreport ARR)

- Fake GitHub stars were a thing way before 2022. I'd have to look in more detail at the methodology here, but I'd question any analysis that finds that paying for GitHub Stars (or any social following kind of metric) is a strictly post-2022 thing. Any metric that can be construed as social proof will immediately have its own grifter economy. Investors know this and (mostly) do their diligence.

Finally, showing numbers is hard for an early stage open source startup. At later stages, you should be able to show an actual business with typical metrics, but at the seed stage you often just have a repo and a website. Your goal is just to get a lot of people using your software. You can add telemetry to track that, but that's a thorny decision. GitHub Stars aren't a terrible proxy for popularity, provided that you audit the quality of the following. A project with a lot of organic stars and forks is, at the very least, a project that a lot of people are familiar with.

I'm not saying that GitHub Stars aren't wildly overvalued or gamed, but contextualized properly, they're a reasonable metric to consider, particularly at earlier stages. Most investors aren't just throwing millions at random repositories with 20k Stars from obviously spam accounts.

Comment by strangescript 20 hours ago

Same reason why, when two companies have the same idea, one goes viral and one doesn't. Public opinion matters even if it's illogical at times.

Comment by comprev 14 hours ago

These days many DJs are booked on their social media following not their skills or musical knowledge.

The market is completely flooded and promoters cannot practically sift through the sheer volume of mixes published online -- so they go by Internet points instead.

Comment by ramesh31 19 hours ago

>Can anyone explain why on earth VC's are making actual investment decisions based on imaginary internet points?

Github stars used to really mean something. Having 1k+ was considered a stable, mature library being used in prod by thousands of people. At 10k+ you were a top level open source project. Now they've been gamed by the dead internet just like everything else, and it's depressing as hell.

Comment by tombert 14 hours ago

I've had recruiters reach out for jobs because of my fairly high Hacker News karma. This isn't speculation on my end; they actually told me that.

I agree it's idiotic; I'm quite confident that it wouldn't be that hard to cheat this system, and even if there were absolutely no way to cheat it, it's not like Hacker News points translate to smartness; my most upvoted posts have basically nothing to do with software engineering.

Comment by mkovach 21 hours ago

> This would be like an NFL team drafting a quarterback based on how many instagram followers

I believe that is how they made the final decision on Watson over Mayfield. Oh, wait, I don't think anything can explain that decision.

Also from Cleveland.

Go Guardians! Go Cavs!

Comment by jmyeet 13 hours ago

I agree with you but the funny thing about this comment is that fake Internet points is how decisions are made in many areas, including casting for TV shows and movies.

If you want to make it as an actor today, you need a social media following [1]. It is directly relevant to you getting cast. It also helps you connect with other actors, with producers and directors, etc.

Thing is, this isn't new. before social media, your influence was measured in "tear sheets" [2], basically any published story that you're in. This could be something as simple as going to Cannes or Sundance or even just to the hottest club.

Sports also uses a points system (kind of), but it's meant to reflect ability. Take the NFL, for example. Going from high school to college and college to the NFL, you will have stats relevant to whatever position(s) you play. For a QB it's things like interceptions, passing yards, rushing yards, completion percentage, etc. You then have the NFL Combine [3]. This is an intensive camp where certain metrics are taken, like how much you can lift, 40-yard dash time, etc.

All of this tries to make it a science, or at least quantitative. But what I find funny is that despite all this work, it can still fail spectacularly. Like, being the #1 draft pick for the NFL is kind of a curse [4].

And then there's Tom Brady. For people unfamiliar with American sportsball, Tom Brady is arguably the greatest quarterback in the game's history, having 7 Superbowl rings. Thing is, he was a 6th round draft pick in the 2000 NFL draft. For anyone not familiar with what that means, 6th (and especially 7th) round draft picks are like the bottom of the barrel. You're not expected to take a starting position. You may not even play unless 1-2 people get injured. Nobody expects you to be a great.

[1]: https://www.backstage.com/magazine/article/social-media-acto...

[2]: https://avenueagency.wordpress.com/tag/tear-sheet

[3]: https://www.nfl.com/combine

[4]: https://www.si.com/more-sports/2011/01/13/sportscasting-exce...

Comment by zby 18 hours ago

How do you check if a github repo is 'good'?

Comment by next_xibalba 19 hours ago

> Can anyone explain why on earth VC's are making actual investment decisions based on imaginary internet points?

It's purely incentives. Heavy competition for early signal identification has pushed them to crappier and crappier indicators.

Comment by copperroof 18 hours ago

Vcs are some of the dumbest people I have ever interacted with. Full stop no exceptions. They are luck, confirmation bias and survivor bias weaponized.

Comment by AndrewKemendo 21 hours ago

When I took over MLAgents at the end of 2021, before Unity fully shit themselves after the insane Weta acquisition, the main metric they were using to promote the project internally was github stars.

Yes actually

Needless to say they didn’t like when I said this was a worthless metric and we needed to be using something like “working policies” or “time saved training”

Comment by jordemort 21 hours ago

should have bought a few thousand stars on the internet and then taken the promotion to director

Comment by AndrewKemendo 20 hours ago

I was already a director and didn’t need the promotion

I just wanted to build a good product but unfortunately good products are not relevant

Comment by sidewndr46 20 hours ago

For those of us that don't understand what was so insane about the Weta acquisition?

Comment by AndrewKemendo 20 hours ago

It never made sense. Weta tools don’t work with Unity at all

There were no complementary workflows or infrastructure or anything.

It was explicitly a move to try to counter epic’s positioning and internally it was very obviously a JR versus Tim pissing contest (and JR was the only one in the contest because Tim didn’t give a fuck about Unity)

Comment by esseph 20 hours ago

> Can anyone explain why on earth VC's are making actual investment decisions based on imaginary internet points?

I have personally seen several company CEOs (that were billionaires!) do this in different ways. Sometimes hiring people because of it.

Comment by ernst_klim 23 hours ago

I think people expect the star system to be a cheap proxy for "this is a reliable piece of software with good quality and a lot of eyes on it".

I think as a proxy it fails completely: astroturfing aside, stars don't guarantee popularity (and I bet the correlation is very weak; a lot of very fundamental system libraries have a small number of stars). Stars also don't guarantee quality.

And given that you can read the code, stars seem to be a completely pointless proxy. I'm teaching myself to skip the stars and skim through the code and evaluate the quality of both architecture and implementation. And I found that quite a few times I prefer a less-"starry" alternative after looking directly at the repo content.

Comment by onion2k 23 hours ago

given that you can read the code, stars seem to be a completely pointless proxy

Imagine you're choosing between 3 different alternatives, and each is 100,000 LOC. Is 'reading the code' really an option? You need a proxy.

Stars isn't a good one because it's an untrusted source. Something like a referral would be much better, but in a space where your network doesn't have much knowledge a proxy like stars is the only option.

Comment by ernst_klim 22 hours ago

> Is 'reading the code' really an option? You need a proxy.

100k is small, but you're right, it can be millions. I usually skim through the code tho, and it's not that hard. I don't need to fully read and understand the code.

What I look at is: high-level architecture (is there any, is it modular or one big lump of code, how modular it is, what kind of modules and components it has and how they interact), code quality (structuring, naming, aesthetics), bus factor (how many people contribute and understand the code base).

Comment by nitwit005 15 hours ago

It's too much work, so you decide to trust the opinion of someone else who probably also hasn't done the work.

Comment by readthedangcode 22 hours ago

Ask Claude to help. Read the dang code. You'll be more confident in your decision and better positioned to handle any issues you encounter.

Comment by hgoel 21 hours ago

I don't think I have ever even considered using star count as a factor for picking from alternatives.

Looking at the commit history, closed vs open issues and pull requests provides a much more useful signal if you can't decide from the code.

Comment by lukan 22 hours ago

The issues page used to be good for this as well. What kind of problems people are having.

(Sometimes still is, but the agents garbage does not help)

Comment by SkyBelow 17 hours ago

Could it be that stars were a good proxy, but as people realized such, they started being gamed, resulting in them becoming a bad proxy? Goodhart's Law would seem to always be in play for any proxy, because once it is recognized as a good proxy, bad actors will begin gaming it. A proxy that can't be gamed would be ideal, but I feel that is akin to wishing for the Philosopher's Stone.

Comment by cortesoft 19 hours ago

The VCs looking to invest would naturally care more about popularity than quality, because popularity would be how you make sales.

Comment by dafi70 1 day ago

Honest question: how can VCs consider the 'star' system reliable? Users who add stars often stop following the project, so poorly maintained projects can have many stars but are effectively outdated. A better system, but certainly not the best, would be to look at how much "life" issues have, opening, closing (not automatic), and response times. My project has 200 stars, and I struggle like crazy to update regularly without simple version bumps.

Comment by 3form 1 day ago

The stars have fallen to the classic problem of becoming a goal and stopping being a good metric. This can apply to your measure just as well: issues can also be gamed to be opened, closed and responded to quickly, especially now with LLMs.

Comment by sunrunner 1 day ago

Was it ever a good metric? A star from another account costs nothing and conveys nothing about the sincerity, knowledge, importance or cultural weight of the star giver. As a signal it's as weak as 'hitting that like button'.

If the number of stars is in the thousands, tens of thousands, or hundreds of thousands, that might correlate with a serious project. But that should be visible through real, costly activity such as issues, PRs, and discussion.

Comment by noosphr 23 hours ago

There was a time when the total number of hyperlinks to a site was an amazing metric of its quality.

Comment by embedding-shape 22 hours ago

Yeah, the window between when Google appeared and when SEO became a concept people chased was a very brief moment of time.

Comment by kang 23 hours ago

at that time having a website took work, while having a github account can be cheaply used to sybil attack/signal marketing

Comment by noosphr 12 hours ago

Forums, news groups and mailing lists counted towards pagerank in the early days.

Comment by 3form 1 day ago

There isn't just "good metric" in vacuum - it was a good metric of exactly the popularity that you mentioned. But stars becoming an object of desire is what killed it for that purpose. Perhaps now they are a "good metric" of combined interest and investment in the project, but what they're measuring is just not useful anymore.

Comment by sunrunner 23 hours ago

Yeah, I'd agree with this. I always thought of a star indicating only that a person (or account, generally) had an active interest in another project, either through being directly related or just from curiosity. Which can sort of work as a proxy for interesting, important or active, but not accurately.

Comment by daemonologist 17 hours ago

I remember talking to some of the folks running UIUC's hackathon (probably ten years ago) and they'd built a sort of page-rank for Github - hand-identifying the most prominent and reputable projects/individuals and then using follows and stars to transfer that reputation. I don't know how well it worked in practice or if it was ever published, but it might be more effective than pure star count.

(This was for admissions iirc - they had limited slots and a portion of them were allocated to people with a strong github rank.)

Comment by einpoklum 23 hours ago

A repository with zero stars has essentially no users. A repository with single-digit stars has a few users, but possibly most or all are personal acquaintances of the author, or members of the project.

It is the meaning of having dozens or hundreds of stars that is undermined by the practice described at the linked post.

Comment by amonith 23 hours ago

I especially love issues automatically "closed due to inactivity" just to keep the number of issues down :V

Comment by alaudet 22 hours ago

Sometimes people open issues without proper information. The problem can't be replicated and nobody else is jumping in to say it affects them. You may suspect it's something else, maybe their environment, but if they don't engage, what else can you do? Tell them you are closing it, specify what kind of info you need, and note that it can be reopened if they ever get around to providing it.

Comment by jkrejcha 11 hours ago

"Unable to reproduce" is a fair enough explicit close reason. This is more about those "stale" bots that just close issues because there hasn't been any response for X days. The annoyance with the practice usually stems from the fact that many of its victims are reports that went stale for lack of maintainer response.

This sort of bot punishes users for making even valid reports that aren't fixed immediately or are missed by the maintainers for whatever reason, including transitory ones.

Constantly bumping threads/issues is generally considered rude, which is why issue reporters generally don't do it; plus the reporter generally isn't solely focused on that particular issue.

Comment by werdnapk 22 hours ago

And sometimes the maintainer simply doesn't respond to a perfectly acceptable issue due to either the maintainer abandoning the project, not enough maintainers or simple neglect.

Comment by test1235 1 day ago

"When a measure becomes a target, it ceases to be a good measure"

https://en.wikipedia.org/wiki/Goodhart%27s_law

Comment by Aurornis 14 hours ago

> Honest question: how can VCs consider the 'star' system reliable?

They don't.

I've helped with due diligence on a couple projects. VCs know that metrics can be gamed because they see it all the time. Stars, followers, views, clicks, likes. A portion of entrepreneurs have been gaming every metric since before you and I learned how to program. It has always been this way and always will.

Most of the VC-related comments have interpreted this article to mean that VCs are so dumb that they haven't realized that stars can be faked, but in reality VCs spend so much time sorting through fake metrics that they understand this probably better than most here.

If you've ever gone through due diligence for an acquisition or big investment round it's amazing how much work you have to do in order to prove that your metrics are real. When things got crazy after COVID there was a short time when VCs were trying to move so fast that they skipped this, but it resulted in some high profile fraud cases.

During normal times, you will get grilled on metrics. They might see stars as a signal for rising stars, but they're not throwing money at projects based on star count like many commenters assume. They will do a deeper dive before investing and they will call it off if things aren't adding up. The amount of diligence scales with the investment, so someone getting a $10K check can get away with a lot of fraud but that $2mm funding round isn't going to cross the finish line based on star count.

Comment by HighlandSpring 1 day ago

I wonder if there's a more graph oriented score that could work well here - something pagerank ish so that a repo scores better if it has issues reported by users who themselves have a good score. So it's at least a little resilient to crude manipulation attempts

Comment by 3form 1 day ago

It would be more resilient indeed, I think. Definitely needs a way to figure out which users should have a good score, though - otherwise it's just shifting the problem somewhat. Perhaps it could be done with a reputation type of approach, where the initial reputation would be driven by a pool of "trusted" open source contributors from some major projects.
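A toy version of that seeding idea, just to illustrate the mechanics (all names and numbers here are made up, and this is only a sketch of one possible propagation rule, not anyone's real system): a handful of trusted accounts seed reputation, stars let repos accumulate their stargazers' reputation, and a damped share of each repo's score flows back to the accounts that starred it.

```python
# Hypothetical sketch: reputation-weighted star scores seeded by a pool of
# trusted accounts. Repos starred only by zero-reputation throwaway accounts
# score nothing, however many stars they have.

def reputation_scores(stars, seed_users, iterations=20, damping=0.85):
    """stars: dict mapping repo name -> set of usernames who starred it."""
    users = {u for gazers in stars.values() for u in gazers}
    rep = {u: (1.0 if u in seed_users else 0.0) for u in users}
    score = {}
    for _ in range(iterations):
        # A repo's score is the total reputation of its stargazers.
        score = {repo: sum(rep[u] for u in gazers)
                 for repo, gazers in stars.items()}
        # A user's reputation is their seed value plus a damped share of
        # the scores of the repos they starred (split among stargazers).
        new_rep = {}
        for u in users:
            flow = sum(score[r] / len(gazers)
                       for r, gazers in stars.items() if u in gazers)
            new_rep[u] = (1.0 if u in seed_users else 0.0) + damping * flow
        rep = new_rep
    return score

# Made-up example: trusted accounts star one repo, fresh bots star another.
stars = {
    "real-project": {"alice", "bob", "carol"},
    "farmed-project": {"bot1", "bot2", "bot3", "bot4"},
}
scores = reputation_scores(stars, seed_users={"alice", "bob"})
```

With damping below 1 the totals converge rather than blow up; the farmed repo stays at zero because none of its stargazers ever receive any reputation to pass along.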

That said, I believe the core problem is that GitHub belongs to Microsoft, and so it will keep moving towards operating like a social network - i.e. engagement matters. It will still take good will to get rid of Social Network Disease at scale.

Comment by az226 23 hours ago

Reputation doesn’t equal good taste in judging other projects.

There are much better ways of finding those who have good taste.

Comment by mech422 14 hours ago

Sounds like 'advogato' for github... Social trust metrics

https://web.archive.org/web/20170715120119/http://advogato.o...

Comment by az226 23 hours ago

GitHub has all kinds of private internal metrics that could be used to show a much higher-signal quality score. A score that is impervious to manipulation, and extremely well correlated with actual quality, popularity and value, not noise.

Two projects could look exactly the same from visible metrics, while one is a complete shell and the other a great project.

But they choose not to publish it.

And those same private signals can spot the signal-rich stargazers more effectively than PageRank.

Comment by JimDabell 22 hours ago

You are looking for different things to VCs. You are looking for markers that show software quality over the long-term. They are looking for markers that show rapidly gaining momentum over the short-term. These are often in opposition to one another.

Comment by mapmeld 22 hours ago

Making the conversation about VCs expecting thousands of stars is thinking too big. It's probably more often someone paying $20 to make one of their projects look good: for their CV, for vanity, thinking this will get them the push they need to get clicks on reddit, or get noticed over some other open source project. If someone is offering a 10k-star project an investment over an 8k-star one without looking at the project or revenue potential, I can only think they are clueless, or picking a student project to fund each summer.

The fake accounts often star my old repos to look like real users. They are usually very sketchy if you think for a minute: for example, starring 5,000 projects in a month with no other GitHub activity. One time I found a GitHub Sponsor ring, which must be a money laundering / stolen credit card thing?

Comment by hobofan 23 hours ago

Unless something has changed in the last ~3 years, I think the article vastly overstates the credibility stars have with VCs.

Even 10 years ago most VCs we spoke to had wised up and discarded Github stars as a vanity metric.

Comment by evilsocket 21 hours ago

Agreed that sophisticated funds don't, but the ecosystem hasn't caught up. StarHub/GitStar pricing pages still sell to "seed-stage founders pre-fundraise".

Comment by az226 23 hours ago

Much more important is who starred it, and whether they are selective about giving out stars or bookmark everything. Forks are a closer signal to usage than stargazing.

Comment by whilenot-dev 21 hours ago

Indeed. GitHub should set up a monthly quota of stars to give out and correlate it with account age: either make up something like a "trusted-age-factor" that multiplies any given star, or scale the available quota by that factor (and let users star repos repeatedly).

GitHub should also introduce a way to bookmark a repo, additional to the existing options of sponsor/watch/fork/star-ing it.
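To make the "trusted-age-factor" concrete, here is a back-of-the-envelope sketch (the linear ramp and the one-year saturation point are my own made-up assumptions, not anything GitHub does):

```python
from datetime import datetime, timedelta

def weighted_star_count(stargazers, now):
    """stargazers: iterable of (username, account_created) datetime pairs.

    Each star is weighted by the age of the account giving it, so a
    thousand day-old bot accounts count for less than a handful of
    long-lived ones.
    """
    total = 0.0
    for _user, created in stargazers:
        age_days = (now - created).days
        # Linear ramp to an arbitrary one-year saturation: a brand-new
        # account contributes ~0, a year-old account a full star.
        total += min(max(age_days, 0) / 365, 1.0)
    return total

# Made-up example: five veteran accounts vs. a thousand day-old bots.
now = datetime(2024, 1, 1)
veterans = [(f"user{i}", now - timedelta(days=1000)) for i in range(5)]
bots = [(f"bot{i}", now - timedelta(days=1)) for i in range(1000)]
```

Under this weighting the thousand fresh bots together are worth less than the five veterans, which is the whole point of the quota idea.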

Comment by foresterre 1 day ago

With the advent of AI, though, these "life" events are probably even simpler to fake than stars, and unlike the faking of stars, not against the ToS.

Comment by askl 1 day ago

Stars are a simple metric even someone like a VC investor can understand. Your "better system" sounds far too complicated and time consuming.

Comment by faangguyindia 1 day ago

Because VCs don't care about anything being legitimate. If it can fool VCs it can also fool market participants, and then VCs can profit off of it.

One VC told me you'll get more funding and upvotes if you don't put "india" in your username.

Comment by ethegwo 23 hours ago

Many VCs are really doing only one thing: using some magical quantitative metrics to assess whether a project is reliable without having the domain know-how. Numbers are always better than no numbers.

Comment by dukeyukey 23 hours ago

Honestly I don't know if that's true. Picking up on vibes might be better than something like GitHub stats.

Comment by ethegwo 23 hours ago

When a partner decides to recommend a startup to the investment committee, he needs some explicit reasons to convince the committee, not some kind of implicit vibe

Comment by dukeyukey 22 hours ago

But given the amount of astroturfing and star-buying out there, relying on star counts may well select for deceptive founders.

Comment by ethegwo 21 hours ago

Yes, I think VCs have already switched to other metrics that are less easy to fake, such as downloads per month or customer interviews (or, more directly, ARR, even for really early stage startups). I just wanted to explain the background reason for it.

Comment by Se_ba 1 day ago

This is a good idea, but from my experience most VCs (I'm not talking about the big leagues) aren't technical; they tend to repeat buzzwords, so they don't really understand how the star system works.

Comment by csomar 23 hours ago

Because VCs love quantifiable metrics regardless of how reliable they actually are. They raise money from outside investors and are under pressure to deploy it. The metrics give them something concrete to justify their thesis and move on with their life.

Comment by logicallee 1 day ago

>Honest question: how can VCs consider the 'star' system reliable?

Founders need the ability to get traction, so if a VC gets a pitch and the project's repo has 0 stars, that's a strong signal that this specific team is just not able to put themselves out there, or that what they're making doesn't resonate with anyone.

When I mentioned that a small feature I shared on Reddit got 3k views, investors' ears perked right up, and I bet you're thinking "I wonder what that is, I'd like to see that!" People like to see things that are popular.

By the way, congrats on 200 stars on your project, I think that is definitely a solid indicator of interest and quality, and I doubt investors would ignore it.

Comment by scotty79 23 hours ago

> how can VCs consider the 'star' system reliable

I think VCs just know that there are no reliable systems, so they go with whatever's used.

Comment by art_mach 20 hours ago

Yeah, I was pondering this a few months ago when I checked out Pathway, an ETL solution. I'd never heard of it, but I saw some news claiming they had created a better model than the transformer. So, stats:

- link: https://github.com/pathwaycom/pathway

- watch: 115, forks: 1.6k, stars: 63.5k

- issues: 32, PRs: 3

And compare that to another ETL tool, Apache Airflow, used by me and many machine learning folks:

- link https://github.com/apache/airflow

- watch: 777, forks: 16.9k (!!!!!), stars: (only!) 45.1k

- issues: 1,200 (!!!), PRs: 501 (!!!)
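The contrast above can be made concrete as a fork-to-star ratio, the rough engagement heuristic discussed elsewhere in the thread. A minimal sketch, using the snapshot numbers from this comment:

```python
def fork_star_ratio(forks: int, stars: int) -> float:
    """Forks per star; organically used repos tend to attract more forks per star."""
    return forks / stars

# Counts as quoted in the comment above (approximate snapshot values).
pathway = fork_star_ratio(1_600, 63_500)   # far fewer forks per star
airflow = fork_star_ratio(16_900, 45_100)  # far more forks per star

# Airflow attracts roughly 15x more forks per star than Pathway.
print(f"pathway: {pathway:.3f}, airflow: {airflow:.3f}")
```

This is only a heuristic, not proof of manipulation; as a later commenter notes, most users never fork the software they merely install.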

Comment by arbot360 14 hours ago

Clearly this means new SaaS companies need to start buying forks, issues and PRs for their repos now too.

Comment by panabee 19 hours ago

VCs are soccer stars, but founders play basketball.

It’s easy to dunk on VCs, but the herd effect is rational after considering the typical VC’s background, the intense competition for good deals, and the job requirements — to prudently deploy capital.

Who wants to pitch their boss on investing $1-10M in a product no one uses, built by a team of anons?

This is not to defend the process, but merely explain it. It’s not so different from customer marketing. To win a VC, first understand the VC.

Once hired, VCs cannot easily get fired yet they exert immense strategic control.

Nonetheless, many founders interview summer interns harder than VCs.

Heuristic: after removing capital, would you hire the VC to be your boss?

Great VCs are worth the equity and will turbocharge startups. When you find one, don't haggle. Get a fair deal, and get right back to coding.

Bad VCs will destroy companies the same way soccer stars would destroy basketball teams if made the head coach.

Comment by lkm0 1 day ago

We're this close to rediscovering pagerank

Comment by TheTaytay 22 hours ago

I was literally just looking at GitHub dataset availability and musing on this. A star from karpathy is worth a lot more than a star from open_claw_dood who just created his account 5 min ago.

In general, I’ve been dissatisfied with GitHub’s code search. It would be nice to see innovation here.

Comment by ricardo81 22 hours ago

It'd ideally be more of a peoplerank though. I think Google discovered this problem themselves when Pagerank became a well known thing.

You'd want to discard a lot of the noise in the bottom 20% of linking power. You want to focus more on the 'trust' factor.

Comment by NooneAtAll3 18 hours ago

I remember long ago watching a Tom Scott (iirc) video about him buying Facebook impressions once

he instantly got like 40k likes, but there was a catch

the algorithm saw the post getting a lot of likes from Iran/Pakistan, so it went on recommending the post to those countries, got no response, and stopped recommending said post altogether

in a sense, it became a self-regulating system, where fake impressions extinguish their very reason to be bought

Comment by mentalgear 22 hours ago

> VC funding pipeline that treats GitHub popularity as proof of traction

Why am I not surprised big Capital corrupts everything. Also, Goodhart's law applies again: "When a measure becomes a target, it ceases to be a good measure".

HN folks: What reliable, diverse signals do you use to quickly evaluate a repo's quality? For me it's: maintenance status, age, elegance of the API, and maybe commit history.

PS: From the article:

> instead tracks unique monthly contributor activity - anyone who created an issue, comment, PR, or commit. Fewer than 5% of top 10,000 projects ever exceeded 250 monthly contributors; only 2% sustained it across six months.

> [...] recommends five metrics that correlate with real adoption: package downloads, issue quality (production edge cases from real users), contributor retention (time to second PR), community discussion depth, and usage telemetry.

Comment by consp 20 hours ago

Overly verbose and "glittery" readme.md files are a good indicator of bad projects, or at least of projects that need more scrutiny before being used. Too often they mark pre-rugpull or look-at-me repos where better solutions are one click away.

Finding curse words in hidden comments in the commit history is, for me, a good indication of a human working on a passion project, though YMMV.

And there are always exceptions to the exception of the exceptions.

Comment by duzer65657 21 hours ago

I tend to look at the other people involved, like contributors: not just the volume, but the actual people and their other activity. If the original author is still around and active, that tends to be a good sign IME.

Comment by readthedangcode 22 hours ago

I usually just read the dang code.

Comment by gslin 23 hours ago

* https://dagster.io/blog/fake-stars (2023) - Tracking the Fake GitHub Star Black Market with Dagster, dbt and BigQuery

* https://arxiv.org/abs/2412.13459 (2024/2025) - Six Million (Suspected) Fake Stars in GitHub: A Growing Spiral of Popularity Contests, Spams, and Malware

Comment by aledevv 23 hours ago

> VCs explicitly use stars as sourcing signals

In my opinion, nothing could be more wrong. GitHub's ratings are easily manipulated and don't necessarily measure the quality of the project itself, but rather its popularity. The problem is that popularity is rarely directly proportional to the quality of the project itself.

I'm building a product, and I'm seeing how important distribution and communication are, rather than the development itself.

Unfortunately, a project's popularity is often directly proportional to the communication "built" around it and inversely proportional to its actual quality. This isn't always the case, but it often is.

Moreover, adopting effective and objective project evaluation tools is quite expensive for VCs.

Comment by ozgrakkurt 23 hours ago

The vast majority of mid-level experienced people take stars very seriously, and they won't use anything under 100 stars.

I'm not supporting this view, but it is what it is, unfortunately.

VCs that invest based on stars either know something, I guess, or they are just bad investors.

IMO choosing projects based on star count is terrible engineering practice.

Comment by tylergetsay 22 hours ago

I've seen the same devs refuse to use a library because the last commit was 3 months ago, despite the library being extremely popular, battle tested, and existing for 10 years.

Comment by aledevv 23 hours ago

Also, and above all, because it can be easily manipulated, as the research explained in the article actually demonstrates.

Comment by criddell 21 hours ago

> measure not necessarily the quality of the project itself, but rather its Popularity

Surely a project's popularity is often related to its utility. A useful and popular project seems like exactly the kind of thing a VC might be interested in.

Comment by williamdclt 23 hours ago

Well, pretty sure that VCs are more interested in popularity than in quality so maybe it's not such a bad metric for them.

Comment by aledevv 23 hours ago

Yes, you're right, but popularity becomes fleeting without real quality behind the projects.

Hype helps raise funds, of course, and sells, of course.

But it doesn't necessarily lead to long-term sustainability of investments.

Comment by mlpotato 21 hours ago

I wonder if it makes sense for GitHub to use graph-theoretic measures like PageRank instead of raw stars. In simple terms, a repo is considered important if it is starred or forked by GitHub users who maintain other important repos.

It’s more expensive to compute, but the resulting scores would be more trustworthy unless I’m missing something.
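A toy sketch of the idea, assuming a made-up graph where a star is an edge from a user to a repo and maintainership is an edge from the repo back to its maintainer, so reputation earned by one repo flows into the weight of its maintainer's stars elsewhere. All names, the damping factor, and the iteration count are illustrative:

```python
def pagerank(edges, damping=0.85, iters=50):
    """Plain power-iteration PageRank over a directed edge list."""
    nodes = sorted({n for e in edges for n in e})
    out = {n: [] for n in nodes}
    for src, dst in edges:
        out[src].append(dst)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iters):
        new = {node: (1 - damping) / n for node in nodes}
        for node, targets in out.items():
            if targets:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: spread its rank uniformly
                for t in nodes:
                    new[t] += damping * rank[node] / n
        rank = new
    return rank

# maintainer_a maintains repo_big, which real users star; throwaway bot
# accounts all star repo_spam but have no standing of their own.
edges = [
    ("user1", "repo_big"), ("user2", "repo_big"), ("maintainer_a", "repo_big"),
    ("repo_big", "maintainer_a"),
    ("bot1", "repo_spam"), ("bot2", "repo_spam"), ("bot3", "repo_spam"),
]
ranks = pagerank(edges)
# repo_big outranks repo_spam even though both have three stars, because a
# star from a maintainer of a well-starred repo carries more weight.
```

Whether GitHub could run this at platform scale is another question, but on a toy graph it captures exactly the "a star from karpathy is worth more" intuition from earlier in the thread.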

Comment by mankins 21 hours ago

That sounds closer to achieving a good outcome. Of course I think anything that includes the set of all users as columns will be game-able. You need to either choose the set yourself from "trusted peers" or "foaf" degrees, or maybe better use retroactive signals rather than purely like-driven approaches.

Comment by tsylba 22 hours ago

Personally I use stars in two ways: 1) It's interesting and I want to keep track of it for possible future use and 2) It's a fantastic idea and kudos to you even if I'll never use it.

As a side note, it's kind of disheartening that every time there is a metric related to popularity, there will be some among us who try to game it for profit, basically to manipulate our natural biases.

It's also always a bit sad how the parasocial nature of the modern web makes us interface like machines via simple widgets, becoming mechanical robots ourselves, rationalizing I/O via simple metrics and kind of forgetting that the map is never the territory.

Comment by apples_oranges 1 day ago

I look at the stars when choosing dependencies, it's a first filter for sure. Good reminder that everything gets gamed given the incentives.

Comment by msdz 1 day ago

> I look at the stars when choosing dependencies, it's a first filter for sure.

Unfortunately I still look at them, too, out of habit: The project or repo's star count _was_ a first filter in the past, and we must keep in mind it no longer is.

> Good reminder that everything gets gamed given the incentives.

Also known as Goodhart's law [1]: "When a measure becomes a target, it ceases to be a good measure".

Essentially, VCs screwed this one up for the rest of us, I think?

[1] https://en.wikipedia.org/wiki/Goodhart%27s_law

Comment by yuppiepuppie 23 hours ago

> The project or repo's star count _was_ a first filter in the past, and we must keep in mind it no longer is.

I'd suggest the first question to ask is whether the project is an AI project or not. If it is, don't pay attention to the stars; if it's not, use the stars as a first filter. That's the way I analyse projects on GitHub now.

Comment by GrinningFool 22 hours ago

> The project or repo's star count _was_ a first filter in the past

I agree that it has been a first filter, but should it ever have been? A star only says that someone had a passing interest in a project. Not significantly different from a 'like' on a social media post.

Comment by moffkalast 1 day ago

Average case of "once a measure becomes a target".

Comment by Lapel2742 1 day ago

I do not look at the stars. I look at the list of contributors, their activities and the bug reports / issues.

Comment by est 1 day ago

> I look at the list of contributors

Specifically, if those avatars are cute anime girls.

Comment by tomaytotomato 23 hours ago

> Specifically if those avatars are cute anime girls.

I know you are half joking/not joking, but this is definitely a golden signal.

Comment by GaryBluto 23 hours ago

Positive or negative to you? Whenever I see more than one anime-adjacent profile picture I duck out.

Comment by pezgrande 22 hours ago

Positive ofc, most of them are top-tier Rust devs.

Comment by mrweasel 23 hours ago

Yeah, I didn't think anyone would place any actual value on the stars. It almost doesn't need to be a feature, because what is it supposed to do exactly?

Comment by elashri 23 hours ago

I usually use stars as a bookmark list to visit later (which I rarely do). I should probably stop doing that and use my self-hosted Karakeep instance for GitHub projects as well.

Comment by QuantumNomad_ 23 hours ago

Never heard of it before.

https://github.com/karakeep-app/karakeep

Sounds useful.

I’ll star it and check it out later ;)

Comment by anant-singhal 1 day ago

I've seen this happen first-hand with mid-to-large open source projects that sometimes "sponsor" hackathons, literally setting a task to "star the repo" to be eligible.

It’s supposed to get people to actually try your product. If they like it, they star it. Simple.

At that point, forcing the action just inflates numbers and strips them of any meaning.

Gaming stars to present them as a positive signal for the product is just SHIT.

Comment by pascal-maker 20 hours ago

Very simply, you need to see VCs as branding companies that give brand deals to people with many followers. VCs think that if you have a lot of stars, you must have hit product-market fit, or something close, because many developers have started using your open-source tool. This isn't necessarily the case. Every weekend project from Andrej Karpathy gets loads of stars because he is the most famous person on GitHub. What I've noticed is that the repos with the most stars usually come from big companies open-sourcing their tools, or from people building free versions of paid software.

Comment by Silamoth 19 hours ago

> Andrej Karpathy…is the most famous person on GitHub

Is he really? I’ve only heard of him because HN is obsessed with his “AI” takes. Is he really that popular outside of this bubble?

Comment by pascal-maker 19 hours ago

If you asked a typical person outside of San Francisco or Silicon Valley, nine times out of ten they wouldn't have a clue who he is. However, as a co-founder of OpenAI and the former Director of AI at Tesla, he is widely known and respected in the tech world, especially for coining the term 'vibecoding.' He comes in second on the GitHub Users Global Ranking, right behind mister Linux: https://wangchujiang.com/github-rank/

Comment by halamadrid 14 hours ago

Buying stars explicitly is one mechanism. Another one is running Hackathons in India or lower cost countries with a prize, which is qualified by "Starring" said repo.

Easily 1-3k stars per hackathon from student participants, for a cost of $1-5k. And some free marketing comes with it too, since participants may post on LinkedIn or other social media if they win something.

Comment by socketcluster 23 hours ago

My project https://github.com/socketCluster/socketcluster has been accumulating stars slowly but steadily over about 13 years. Now it has over 6k stars, but that doesn't seem to mean much nowadays as a metric. It sucks having put in the effort, seeing it get lost in a sea of scams, and seeing people doubt my project's authenticity.

It does feel like everything is a scam nowadays though. All the numbers seem fake; whether it's number of users, number of likes, number of stars, amount of money, number of re-tweets, number of shares issued, market cap... Maybe it's time we focus on qualitative metrics instead?

Comment by glouwbug 10 hours ago

That’s okay. I’m there with you too, with about the same cumulative count.

I measure my own projects by the enjoyment I got out of them. No sense in chasing validation from others when one’s only reliable metric will forever be what’s in one’s own control.

Comment by mercurialsolo 21 hours ago

15 mins into this - Built this to identify the fraudsters https://github.com/mercurialsolo/realstars

We should do a hall of shame!

Comment by cmrdporcupine 19 hours ago

Can you vibe code up a firefox plugin, too?

Comment by therepanic 21 hours ago

It's a pity that no one will ever see this 15-minute slop.

Comment by tonmoy 21 hours ago

Steam wishlists, itch.io view counts, YouTube views, and now GitHub stars… I’m tired of all the gamification of creativity. Now if you’d upvote my comment so I can get some karma, please and thank you.

Comment by whilenot-dev 20 hours ago

Besides the ability to downvote comments after passing a threshold of 500, what else is HN karma good for?

Comment by tonmoy 20 hours ago

Who knows? The way things are going VCs could start asking claude to review someone’s online presence and Claude might decide karma is the best metric for that

Comment by whilenot-dev 17 hours ago

I don't mean hypothetical things... are there any more thresholds that unlock functionality?

Comment by Topfi 1 day ago

I don't know what is more pathetic, for lack of a better word: buying stars/upvotes/the platform equivalent, or thinking of oneself as a serious investor while using something like that as a metric guiding your decision-making process.

I'd give a lot of credit to Microsoft and the GitHub team if they went on a major ban/star-removal wave across affected repos, akin to how Valve occasionally does a major sweep across Counter-Strike 2 banning verified cheaters.

Comment by luke5441 1 day ago

The problem is that if this is the game now, you need to play it. I'm trying to get a new open source project off the ground and now I wonder if I need to buy fake stars. Or buy the cheapest kind of fake stars for my competitors so they get deleted.

For Microsoft this is another kind of sunk cost, so idk how much incentive they have to fix this situation.

Comment by superdisk 1 day ago

An open source project really shouldn't be something you need to "get off the ground." If it provides value then people will naturally use it.

Comment by luke5441 23 hours ago

How do people know it exists to solve their problem? Even before LLMs it was hard to get through VC funded marketing by (commercial) competitors.

My first Open Source project easily got off the ground just by being listed in SourceForge.

Comment by wazHFsRy 16 hours ago

How will fake stars help it get off the ground?

Comment by luke5441 9 hours ago

My point is that not having fake stars may prevent you from gaining traction.

Organic users still have to evaluate it, but at least they won't dismiss it outright because it only has five stars or something.

Comment by mariusor 23 hours ago

Haha, have you tried that? I think in this day and age marketing is a much-needed activity even for open-source projects providing quality solutions to problems.

Comment by superdisk 23 hours ago

I maintain a niche-popular project that I didn't do any marketing for. My understanding is that even for popular projects, the usual dynamic is that there's just one guy doing all the work. So "getting off the ground" just means getting people to use it, and there shouldn't be any reason to artificially force that.

Comment by tonyedgecombe 22 hours ago

It depends what your objective is. Many people seem to see their open source projects as a stepping stone into some commercial activity. Putting aside whether that is a good idea or not, if that is what they want to do, then they will need to market in some way.

Comment by Topfi 23 hours ago

The issue with that is, it's a game that never ends. Now you need to inflate your npm/brew/dnf installs, then your website traffic so it doesn't look too obvious, etc.

I am not successful at all with my current projects (admittedly I'm not trying to be nowadays), so feel free to dismiss this advice from a time before LLM-driven development, but in the past I had decent success in forums, interacting with people who had the specific problem my project addressed. Less in stars, more in an actual exchange of helpful contributions.

Comment by Miraltar 1 day ago

Citing Valve as a model for handling cheating is not what I'd have reached for.

Comment by Topfi 1 day ago

Honest question: which companies handle the process better, given it is a trade-off? Yes, VAC is not as iron-clad as kernel-level solutions can be, but the latter is overly invasive for many users. I'd argue neither is the objectively right or better approach here, and Valve's approach of longer-term data collection and working on ML solutions that have the potential to catch even the cheating methods currently able to bypass kernel-level anti-cheat is a good step.

On GitHub stars, I'd argue Valve is the most suitable comparison, as all the funny business regarding stars should be detectable by GitHub directly, if it's detectable at all, and bans would have the biggest deterrent effect if they happened in larger waves, allowing the community to see who engaged in fraudulent behaviour.

Comment by Olshansky 16 hours ago

Literally posted about this yesterday, but w.r.t. agent skills: https://olshansky.substack.com/p/why-every-developer-needs-t...

Need to move from skill downloads to skill usage.

Comment by pbjerkeseth 18 hours ago

This has unfortunately been going on for years at this point, for as long as there has been an OSS-to-profitability pipeline gamed for startups I'd guess. I wouldn't be surprised if it has progressed to fake contributors/discussions/issues/forks as well. Seems like an inevitable outcome for any platform with social signals.

Comment by ricardo81 22 hours ago

Same old story of centralised algorithms being abused.

GitHub stars are akin to 'link popularity' or PageRank, which is ripe for abuse.

One way around it is to trust well known authors/users more. But it's hard to verify who is who. And accounts get bought/closed/hacked.

Another way is to hand over the algo in a way where individuals and groups can shape it, so there's no universal answer to everyone.

Comment by hnmullany 22 hours ago

I came across one of these in 2018 with a "hot" open source company raising a Series B. It had an impressive star ramp (about 300% YoY growth) before the (high-priced/competitive) raise, and three months later GitHub had revoked almost all the star growth from the previous year, resulting in a 20% YoY record. The company eventually got acquihired.

Comment by fr3on 13 hours ago

Stars measure attention. Packagist downloads measure automation. Neither measures trust. The only signal that's hard to fake is: does something real depend on this?

Comment by winddude 18 hours ago

"median star count at seed is 2,850" is lower than I would have expected, but then again, I've maybe only had 20+ stars on my repos.

- I think the zero-follower account might be the weakest signal of a low-quality account; I think I had zero followers for maybe 5+ years.

Comment by talsania 1 day ago

Seen this firsthand, repos with hundreds of stars and zero meaningful commits or issues. In hardware/RTL projects it's less prominent.

Comment by tiffanyh 21 hours ago

For all the hate on star systems, whether it’s GitHub/Amazon/AppStore, I’d still take having one over having nothing at all.

They make it easier to sort through options, help with search and discovery, and at least give you a baseline signal for trust that can get better over time.

So to me, some signal is better than no signal at all.

Comment by simultsop 16 hours ago

We need our spokesperson to speak about this!

https://www.youtube.com/@programmersarealsohuman5909

Comment by spocchio 23 hours ago

I think the reason is that investors are not IT experts and don't know better metrics to evaluate.

I guess it's like fake followers on other social media platforms.

To me, it just reflects a behaviour that is typical of humans: in many situations, we make decisions in fields we don't understand, so we evaluate things poorly.

Comment by cuttothechase 15 hours ago

As one commenter put it: "You can fake a star count, but you can't fake a bug fix .. "

The way to beautify the pig is to put lipstick on the pig!

Comment by frabonacci 10 hours ago

What's even more alarming is how exploitable GitHub Trending itself is these days. You can get the star-to-fork ratio right and land on the front page, which then pulls in real organic stars.

Comment by Cider9986 19 hours ago

I hope this doesn't mean they will make it harder to create GitHub accounts. Have you tried to create a Facebook account recently? Every time I've tried, they demanded a face scan.

Comment by Silamoth 19 hours ago

Genuine question: Who uses stars on GitHub? Even if I use a library or tool, it’s never once occurred to me to give it a star on GitHub. Is this a real thing people do? And if so, why?

Comment by edm0nd 14 hours ago

I star something that I think is cool, and also so I can find it more easily later: if I forget the name of something, I can just go look at my stars and re-find it.

Comment by wazHFsRy 16 hours ago

I guess the idea is to show a small token of appreciation. I do that and I am also happy if I receive some on my own repos.

Comment by abelanger 17 hours ago

> Our analysis revealed the fork-to-star ratio as the strongest simple heuristic for identifying potential manipulation. The logic is straightforward: a star costs nothing and conveys no commitment. A fork means someone downloaded the code to use or modify it.

This is just clearly... incorrect? You can modify code without forking it, and most software is distributed via a registry or binary download, which also wouldn't be represented in forks. For most projects, the number of forks is a lossy signal for how busy the contributor ecosystem is, nothing else.

Comment by marsulta 17 hours ago

Anyone else getting traffic numbers that are weird? I'll see something like 300 clones in a day, but I don't think that's real at all.

Comment by nottorp 23 hours ago

Why is zero public repos a criterion?

I paid GitHub for years to keep my repos private...

But then I don't participate in the stars "economy" anyway. I don't star and I don't count stars, so I'm probably irrelevant for this study.

Comment by Topfi 23 hours ago

Am very much the same; took a bunch private two years ago for a multitude of reasons. I can, however, see why having no public repos could be a partial indicator of concern in conjunction with sudden star growth, simply because it is hard for a person with no prior projects to suddenly and publicly strike gold. Even on YouTube it is a rare treat to stumble across a well-made video by a small channel, and without algorithms to surface repos on GitHub in the same way, any viral success from a previously inactive account should be treated with some suspicion. Same the other way: if you never made any PRs, etc., sudden engagement is a bit odd.

Comment by nottorp 23 hours ago

I think they're using it as a signal for the accounts doing the starring, not the account being starred...

Comment by mvvl 21 hours ago

Tbh, for me there’s basically no difference between a repo with 2k stars and one with 20k.

Stars only matter when there are very few, like if it has almost none, that’s a red flag. Otherwise it’s just noise.

Comment by 9cb14c1ec0 21 hours ago

Github could easily crack down on this. Spend $10 at each star provider, then ban all accounts involved. A tiny bit of money could create a huge drag on the ecosystem.

Comment by AKSF_Ackermann 1 day ago

So, if star-to-fork ratio is the new signal, time to make an extra fake-star tier where the bot forks the repo, generates a commit with the cheapest LLM available, and pushes that to GitHub, right?

Comment by ModernMech 22 hours ago

The next step after that is going to be celebrity forks -- whether top devs and/or Milla Jovovich have forked your repo.

Comment by ffoster007 16 hours ago

Some projects I've encountered seem unremarkable, yet surprisingly, they have a lot of stars.

Comment by shivasurya 19 hours ago

Stars might be the weakest signal of project usefulness, and trust is eroding; I no longer trust stars for security.

Comment by sigurfan 16 hours ago

IBM bought DataStax primarily based on Langflow's fake stars. DataStax CEO shenanigans...

Comment by ghstinda 10 hours ago

Hacker News should get rid of the upvotes. I thought that was the lamest part of the site, that and the OpenAI smooching.

Comment by odyssey7 19 hours ago

Given that this article resonates, should I infer that the startup economy is picking back up?

Comment by Oras 23 hours ago

Would be nice to see the ratio of OpenClaw stars

Comment by az226 23 hours ago

99% stars from Claws themselves

Comment by lacunary 19 hours ago

Flair Driven Development. What do you think about a project that only has 15 pieces of flair?

Comment by mercurialsolo 22 hours ago

Stars are likes for developers, and you have a bunch of creators now entering the arena. What did you expect?

Comment by random__duck 17 hours ago

So this is what real investigative journalism in the tech sector looks like?

Comment by izucken 15 hours ago

I should stop starring as a joke or as a bookmark...

Comment by swordsith 14 hours ago

Very obviously AI-written; I could tell by the end of the first sentence. Could've been an interesting read... if you had written it.

Comment by evilsocket 1 hour ago

That doesn't make the numbers, or the original research on which this is based, wrong :D

Comment by mercurialsolo 22 hours ago

The cost of signalling is way lower than the cost of verification.

Comment by umrashrf 22 hours ago

The stick of God doesn't make sound. God's work indeed

Comment by ImJasonH 21 hours ago

Why would OpenAI have bought stars for openai-fm I wonder?

Comment by shantnutiwari 20 hours ago

Social media platforms (like Instagram) have always had this problem of "buying" followers. There was an article some time ago about how Hollywood types would only give roles to people with high follower counts, so people started buying followers.

Now that money is flowing to GitHub stars, no wonder people are buying fake "stars". Seems capitalism is working as expected...

Comment by ludjer 20 hours ago

Wait I can sell my github account for 5k ? Wow

Comment by feverzsj 21 hours ago

Maybe there is also a fake upvote economy here.

Comment by pdyc 19 hours ago

I was with them until:

"We ran our own analysis sampling 150 profiles per repo across 20 projects and found repos where 36-76% of stargazers have zero followers and fork-to-star ratios 10x below organic baselines"

This does not look like an appropriate signal to use on GitHub; I doubt that this is an organic baseline. If this is used as a metric, then the study might be flawed.
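For what it's worth, the check the quoted analysis describes is easy to sketch. The profile dicts below are invented stand-ins for what one would fetch from GitHub's stargazers and users APIs, and, as the parent points out, the resulting share means little without an organic baseline to compare against:

```python
def zero_follower_share(profiles):
    """Fraction of sampled stargazer profiles reporting zero followers."""
    if not profiles:
        return 0.0
    zero = sum(1 for p in profiles if p.get("followers", 0) == 0)
    return zero / len(profiles)

# Invented sample of stargazer profiles; real ones would come from paging
# through /repos/{owner}/{repo}/stargazers and then /users/{login}.
sample = [
    {"login": "longtime_dev", "followers": 340},
    {"login": "fresh_acct_1", "followers": 0},
    {"login": "fresh_acct_2", "followers": 0},
    {"login": "casual_user", "followers": 12},
]
share = zero_follower_share(sample)
print(f"{share:.0%} of sampled stargazers have zero followers")
```

The interesting (and hard) part is establishing what share is normal for organic repos of similar size, which is exactly the baseline the parent comment doubts.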

Comment by ildari 22 hours ago

Bots are killing open source, but they pump product metrics, so nobody cares. I maintain an open source repo, and we've made a decision to limit all bot activity, even if it makes us less sexy in front of VCs.

We figured out a workaround to limit activity to prior contributors only, and add a CI job that pushes a coauthored commit after passing captcha on our website. It cut the AI slop by 90%. Full write-up https://archestra.ai/blog/only-responsible-ai

Comment by nryoo 23 hours ago

The real metric is: does it solve my problem, and is the maintainer still responding to issues? Everything else is just noise.

Comment by LtWorf 12 hours ago

I briefly mentioned it in a talk at MiniDebConf a couple of years ago.

Download counters are abused similarly and are even easier to inflate.

Understanding the real popularity of a project is now even harder with all the AI bots spamming about it.

Comment by aanet 15 hours ago

If you think a github star is an accolade of some sorts, let me show you...

...the "Likes" on a post - on FB, twttr, LI, HN, ...

...the "Hearts" on post

...the "bookmarks" on a post

...the "upvotes"

...its corollary, the "downvotes"

...the fake dollars in your fake game

...the fake lives in your fav fantasy game

...ad inf

Comment by Cider9986 19 hours ago

It's not that I hate AI writing, it's just that I hate it.

Comment by ossusermivami 22 hours ago

What is this one about:

> When nobody is forking a 157,000-star repository, nobody is using it

That is completely untrue. I don't fork a repo when I use it, only when I want to contribute to it (and I usually clean up my forks).

Comment by jiveturkey 13 hours ago

6 million. Is that a lot? It's too bad they don't tell us.

But based on their statement that north of 90% of the star-buying repos were terminated by GitHub, I'd say there would be very, very many more fake stars without any GitHub intervention.

I just wish they hadn't made the first words of the article "Six million fake stars" without putting that into scale.

Comment by dathinab 21 hours ago

wait people trust GH stars for like anything????

Comment by ozgrakkurt 23 hours ago

> Jordan Segall, Partner at Redpoint Ventures, published an analysis of 80 developer tool companies showing that the median GitHub star count at seed financing was 2,850 and at Series A was 4,980. He confirmed: "Many VCs write internal scraping programs to identify fast growing github projects for sourcing, and the most common metric they look toward is stars."

> Runa Capital publishes the ROSS (Runa Open Source Startup) Index quarterly, ranking the 20 fastest-growing open-source startups by GitHub star growth rate. Per TechCrunch, 68% of ROSS Index startups that attracted investment did so at seed stage, with $169 million raised across tracked rounds. GitHub itself, through its GitHub Fund partnership with M12 (Microsoft's VC arm), commits $10 million annually to invest in 8-10 open-source companies at pre-seed/seed stages based partly on platform traction.

This all smells like BS. If you are going to do an analysis, you need to do some sound maths on the amount of investment a project gets in relation to GitHub stars.

All this says is that stars are considered in some ways, which is very far from saying that you buy fake stars and then you get investment.

This smells like bait for hating on people that get investment.
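
The "sound maths" asked for here would at minimum be a correlation between star counts and amounts raised across a sample of projects. A minimal sketch in Python, where every number is made up for illustration (the method is the point, not the data):

```python
import math

# Pearson correlation between log star counts and dollars raised.
# All sample data below is invented for the example.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical sample: (stars at seed round, $M raised)
stars  = [2850, 4980, 1200, 15000, 800, 7000]
raised = [2.0, 8.0, 1.5, 20.0, 1.0, 10.0]

# Star counts are heavy-tailed, so correlate on the log scale.
r = pearson([math.log(s) for s in stars], raised)
print(f"r = {r:.2f}")
```

Even a positive r here wouldn't show causation, which is exactly the gap between "VCs look at stars" and "fake stars get you funded".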

Comment by dr_kretyn 20 hours ago

Can now someone make analysis of Google Ads fake economy? I'm convinced that the data they share - specifically clicks - is false, and potentially they're paying some fraction to people to click it.

Comment by matheusmoreira 15 hours ago

My first encounter with this was when Anthropic offered like 6 months of Claude Max 20x to open source developers with "5,000+ GitHub stars".

https://claude.com/contact-sales/claude-for-oss

> Who should apply:

> You’re a primary maintainer or core team member of a public repo with 5,000+ GitHub stars

I can't blame people for maximizing star counts when benefits like these are tied to them. This is a $200 a month subscription, and it did tempt me a bit... Can't imagine what people would do if some venture capitalist dangled millions in front of them. I suppose they'd do pretty much anything.

It's weird that people are using stars as a signal though. Anyone can star a repository, it's essentially a public bookmark. I think the real popularity signal is the number of people participating in the project.

Comment by Applejinx 19 hours ago

No wonder I'm getting bombed with spammers: 0.1 fork/star ratio and 0.0527 watcher/star ratio for a 1.1k-star repo.

The thing is, they are all scammers whose emails go unopened… and the tragic thing is, most likely the VCs would require the same treatment if they did get all hyped up and try to get involved in my project.

There is nobody real who's desperately trying to reach me to extend a line of business credit. I'm not working in AI, rather the opposite, was not in crypto, etc etc, so I know it is just email scams from beginning to end, dozens every day.

It's kind of pitiful that if VCs tried to jump in, they would be indistinguishable from the scams.
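
For what it's worth, the ratio check implied above is easy to automate: GitHub's REST API repository endpoint already returns stargazers_count, forks_count, and subscribers_count. A minimal offline sketch, where the threshold values are illustrative assumptions rather than validated cutoffs:

```python
# Illustrative sketch: flag repos whose engagement ratios look out of line
# with their star count. Threshold values are assumptions for the example,
# not empirically validated cutoffs.

def engagement_ratios(stars, forks, watchers):
    """Return (fork/star, watcher/star) ratios; None if there are no stars."""
    if stars == 0:
        return None
    return forks / stars, watchers / stars

def looks_inflated(stars, forks, watchers,
                   min_fork_ratio=0.02, min_watcher_ratio=0.01):
    """Heuristic: lots of stars but almost no forks or watchers is suspicious."""
    ratios = engagement_ratios(stars, forks, watchers)
    if ratios is None or stars < 1000:   # small repos are too noisy to judge
        return False
    fork_ratio, watcher_ratio = ratios
    return fork_ratio < min_fork_ratio and watcher_ratio < min_watcher_ratio

# Ratios like the parent comment's repo: 1,100 stars, ~110 forks, ~58 watchers
print(looks_inflated(1100, 110, 58))     # healthy engagement relative to stars
print(looks_inflated(50000, 300, 100))   # stars far outpace engagement
```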

Comment by k33n 21 hours ago

I spent a little bit of time at a PE firm last year. They went "all in" on ElizaOS because of the star hype. It was embarrassing.

Comment by onesandofgrain 21 hours ago

This is why people should use Gitea. I don't understand why people keep using GitHub at this point. It's not like they've stolen all our data or anything :))

Comment by rvz 22 hours ago

Whoever thought that GitHub stars were a legitimate measure of a project's popularity doesn't understand Goodhart's law: such metrics are easily abused, faked, gamed, and manipulated.

Comment by kortilla 23 hours ago

I asked Claude for an analysis of the maturity of various open source projects accomplishing the same thing. Its first searches were for GitHub star counts for each project. I was appalled at how dumb an approach that was, and mortified at how many people must be equating stars with maturity online for that to make it into the training data.

Comment by scotty79 23 hours ago

Definite proof that GitHub is a social network for programmers.

Comment by bjourne 23 hours ago

> The CMU researchers recommended GitHub adopt a weighted popularity metric based on network centrality rather than raw star counts. A change that would structurally undermine the fake star economy. GitHub has not implemented it.

> As one commenter put it: "You can fake a star count, but you can't fake a bug fix that saves someone's weekend."

I'm curious what the research says here: can you actually structurally undermine the gamification of social influence scores? And I'm pretty sure fake bugfixes are almost trivial for LLMs to generate.
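
As a toy illustration of what a centrality-style weighting could look like (the weighting function below is invented for the example, not the CMU researchers' actual proposal): weight each star by how connected and mature the starring account is, so a farm of fresh, unconnected accounts contributes almost nothing.

```python
import math

# Toy sketch of a weighted star count: each star is scaled by how embedded
# the starring account is in the social graph. The log-based weight is an
# invented illustration, not a published metric.

def star_weight(followers, following, account_age_days):
    """Accounts with no network and no history get weight near zero."""
    connectivity = math.log1p(followers + following)
    maturity = min(account_age_days / 365, 1.0)   # cap at one year
    return connectivity * maturity

def weighted_stars(stargazers):
    """stargazers: list of (followers, following, account_age_days)."""
    return sum(star_weight(*s) for s in stargazers)

organic = [(120, 80, 1500), (15, 40, 700), (300, 10, 3000)]
bought  = [(0, 1, 3)] * 100   # 100 brand-new, unconnected accounts

print(round(weighted_stars(organic), 1))
print(round(weighted_stars(bought), 1))
```

Under this weighting, three real accounts outweigh a hundred throwaways, which is roughly the property a centrality-based metric is after, and also why the obvious counterattack is aging and interlinking the sock puppets.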

Comment by az226 23 hours ago

I’d say those CMU researchers are out of touch with reality. GitHub could easily overhaul this with a much better system than what those researchers recommended, but chooses not to.

Comment by evilsocket 21 hours ago

that's exactly the next-round attack: StarScout's network-centrality defense works for the current generation of campaigns but won't survive LLM-generated PR/commit patterns.

Comment by cat-whisperer 20 hours ago

now that we have AI, and github is backed by microsoft, we should ask users to justify their stars, and then run a classifier on the justifications

Comment by _blk 13 hours ago

I'm starstruck. Honestly sad but not surprising: we live in an age where attention is a currency, so anything will be done to buy it.

Just look at how many cool and legit open projects have the star-meter graph in their README.md - so of course people will start measuring against that metric and start gaming it.

I was surprised myself when I suddenly saw a starstruck badge on my profile. I never advertise my projects but I do feel honored when people think that my contributions are useful and stars are an easy way of showing that gratitude. At least I think that's how it was intended. And now someone is breaking that for scraps (or not scraps.)

This is exactly the BS that pushes services to not offer their own logins anymore; now you have to log in with FB or GH or $randomFamousSvc instead of the more anonymous "by email". It just happened to me recently when I wanted to use a trial account, but I totally get it: with abuse, trust gets substituted with control. It's the same everywhere.. even voter ID.

Sorry, that went off track. I guess just don't look at the stars anymore. Wait, no, don't do that, stars are beautiful and so are you if you read all the way to here. Here's a * for you :)

Comment by fontain 23 hours ago

https://x.com/garrytan/status/2045404377226285538

“gstack is not a hypothetical. It’s a product with real users:

75,000+ GitHub stars in 5 weeks

14,965 unique installations (opt-in telemetry, so real number is at least 2x higher)

305,309 skill invocations recorded since January 2026

~7,000 weekly active users at peak”

GitHub stars are a meaningless metric but I don’t think a high star count necessarily indicates bought stars. I don’t think Garry is buying stars for his project.

People star things because they want to be seen as part of the in-crowd, who knows about this magical futuristic technology, not because they care to use it.

Some companies are buying stars, sure, but the methodology for identifying it in this article is bad.

Comment by eddythompson80 11 hours ago

Garry Tan is an imbecile though.

Comment by drcongo 22 hours ago

I got gently admonished on here a while back for mentioning that I find those star graph things people put on their READMEs to have entirely the opposite effect than that which was intended. I see one of those and I'm considerably less likely to trust the project because a) you're chasing a stupider metric than lines of code, and b) people obviously buy stars.

Comment by m00dy 23 hours ago

same here on HN as well

Comment by tomhow 11 hours ago

The software to detect voting manipulation, collusion and sockpuppet/astroturf commenting on HN is both the oldest and the most actively developed, and is the main reason it’s considered a strong signal of credibility when a project makes it to the front page. The service that claims to sell upvotes has never had any impact on HN.

Comment by onesandofgrain 21 hours ago

Yep, shilling and paid advertisements masquerading as posts, especially on AI

Comment by ghstinda 10 hours ago

really hilarious the irony, i agree

Comment by RITESH1985 23 hours ago

The fake star problem is a symptom of a deeper issue: developers can't tell signal from noise in the agent ecosystem. The tools that actually get real adoption are the ones that solve acute production problems. Agents are hitting state-management issues in production every day and there's almost no tooling for it. That's where genuine organic stars come from: solving a real pain, not gaming rankings.