Cal.com is going closed source
Posted by Benjamin_Dobell 1 day ago
Comments
Comment by simonw 1 day ago
Since security exploits can now be found by spending tokens, open source is MORE valuable, because open source libraries can share that auditing budget while closed source vendors have to find all the exploits themselves in private.
> If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
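The quoted equation can be turned into a toy model. Every number below (per-exploit discovery cost, budgets, bug counts) is invented purely for illustration, not taken from the article:

```python
# Toy model of the "token economics" of hardening, with made-up numbers.

def remaining_exploits(total_bugs: int, audit_tokens: float,
                       tokens_per_find: float) -> int:
    """Exploits left after the auditor spends its token budget,
    assuming each discovery costs a fixed number of tokens."""
    found = min(total_bugs, int(audit_tokens // tokens_per_find))
    return total_bugs - found

# A shared open source auditing budget vs. a lone closed source vendor:
shared_budget = 10 * 1_000_000   # ten downstream users chip in 1M tokens each
private_budget = 1_000_000       # one vendor audits alone
cost_per_find = 500_000          # assumed tokens to surface one exploit

print(remaining_exploits(20, shared_budget, cost_per_find))   # open source
print(remaining_exploits(20, private_budget, cost_per_find))  # closed source
```

The point of the sketch is only that pooled budgets clear the same bug pile faster than any single participant's budget can.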
Comment by dang 1 day ago
Comment by DrammBA 1 day ago
Comment by OsrsNeedsf2P 1 day ago
The real answer is they are likely having a hard time converting people to paid plans
Comment by notnullorvoid 1 day ago
That's a very weak moat unless you have something else like the friction of network dependence similar to a social network.
Comment by eloisant 1 day ago
You have to bring value that goes beyond the source code and hosting, otherwise your clients are going to vibe code a custom solution instead of paying you.
Comment by OccamsMirror 1 day ago
How many things do you want to be responsible for? How many vibe coded projects do you want to maintain?
I think this line of reasoning is overblown. Just because you can doesn't mean a significant number of people will. I think the 3D printer comparison is apt.
Comment by eloisant 1 day ago
Enterprise customers have the means to develop in house; those are the customers that will leave. And those are the whales of the SaaS business.
Comment by OccamsMirror 1 day ago
Comment by Gormo 9 hours ago
Most enterprise companies don't develop everything in house, but usually do have a varied mix of in-house infrastructure, IaaS and PaaS solutions, and SaaS products. Large organizations across varied industries often have multiple internal dev teams, and the availability of increasingly sophisticated AI tools is going to enable the same teams to be effective at more, and more complex, projects. AI will definitely start shifting make-or-buy decisions, especially for mature, commodity use cases, to 'make'.
Comment by chasd00 1 day ago
Comment by ImPostingOnHN 1 day ago
> Enterprises have a business to run and don’t want to run a software shop on top of everything else.
It sounds like you mostly understand here. The biggest part of "running a software shop" they want to avoid is responsibility for support, bugs, fires, ongoing maintenance, and legal issues of post-release software.
Dave's Pizza around the corner doesn't make a social media app, not because Dave can't figure it out, not because he can't vibe code one, not because he can't contract someone to do it, but because running a social media site isn't a core competency of Dave's Pizza. Instead, Dave uses existing social media sites, and focuses his efforts and passions on making pizza.
Comment by chasd00 1 day ago
SaaS value to an enterprise is more than just the functionality provided, and I think that is lost on a lot of the heads-down software devs here.
Comment by eloisant 1 day ago
Comment by OccamsMirror 1 day ago
Comment by habinero 4 hours ago
It sounds nice, but now you have something that takes an enormous amount of time and effort to use and maintain, plus you need to have someone with the skills to run it.
Comment by runako 1 day ago
This is why companies outsource anything. Google, Inc. is big enough to own farms and ranches to grow the food eaten in its cafeterias. They could make trucks to transport that food. They could operate factories to make cutlery, etc. Why do they instead choose to pay layers of margins to layers of middlemen?
Absurd example? How about Apple? They outsource production of their chips, instead of capturing the margin they are currently gifting to their partners. Why?
Delta Airlines doesn’t operate oil fields or even refineries even though a major cost of their operations is jet fuel. Why?
Once you can reason through these very simple examples, you will understand why enterprises are unlikely to walk away from SaaS.
Comment by codytruscott 1 day ago
https://en.wikipedia.org/wiki/Trainer_Refinery
https://www.reuters.com/business/energy/delta-air-lines-refi...
Comment by runako 23 hours ago
s/Delta/United/ or s/Delta/Southwest/ or s/Delta/Lufthansa/. Or if you prefer, s/refinery/oilfield/, or s/refinery/pipeline/. Or even s/refinery/farm/, because Delta also buys food in vast quantities (I would not be surprised to find they have interests in ag producers that offset a small % of their food purchases, which does not diminish the argument).
Delta also does not make airplanes, jet engines, seats, radios, GPS, glass, or even wires. They don't distill the spirits they serve on their flights. They don't own and operate a satellite Internet capability. They don't even make movies for in-flight entertainment.
The point is that Delta, like most successful firms, outsources key aspects of core service delivery.
The second article you linked says plainly that the refinery is an offset/hedge. QED Delta still outsources the vast majority of its fuel costs. (They could, for example, own large swathes of the Permian and do E&P as well. They choose to leave that to others.)
Comment by Gormo 9 hours ago
Most large firms have in-house software dev teams responsible for at least some portion of their development work. I know software engineers locally working, variously, at banks, pet supply distributors, power companies, soft drink bottlers, and many other non-tech industries. And AI can and will extend these teams' capacity to internally manage larger segments of their companies' tech stacks.
Comment by duncangh 17 hours ago
Comment by chii 1 day ago
here comes the next SaaS idea - vibe coded services as a service. You tell it what service you want, maybe point out a couple of examples, and you get that service vibe coded and hosted for you for a small monthly fee!
Comment by hvb2 1 day ago
So, no, hosting LLM output is not the same as being responsible
Comment by philipov 1 day ago
Comment by shimman 1 day ago
This company does not seem healthy at all:
https://getlatka.com/companies/calcom
I agree with the other poster who mentioned this is likely a publicity stunt, but all it's really showing is that VC is still incredibly stupid with their money. All the more reason to seize it from them and properly fund useful software instead of subsidizing vanity projects for Stanford grads.
Comment by cootsnuck 1 day ago
I wouldn't underestimate switching friction.
Comment by hrimfaxi 1 day ago
Comment by jmcgough 1 day ago
Comment by notnullorvoid 12 hours ago
Comment by rhubarbtree 1 day ago
Comment by opem 1 day ago
Comment by lrvick 1 day ago
Comment by il-b 1 day ago
Do it then
Comment by indianmouse 1 day ago
Comment by j45 1 day ago
Comment by TeMPOraL 1 day ago
Comment by theahura 1 day ago
Comment by klempner 1 day ago
Generally speaking it is very, very difficult to have a license redefine legal terms. Either this Theseus copy is legally a derivative work or it isn't, and the text of a license is going to do at most very little to change that.
Comment by Gormo 9 hours ago
The "Ship of Theseus" license you've linked to attempts to define for itself what constitutes a derivative work, but what is and is not a derivative work is determined by copyright law itself, and there's no concept of imposing licensing conditions on works that your copyright never extended to in the first place.
Simply put, if something isn't infringing your copyright under the criteria established by the law, then your permission was never needed to do it in the first place, so the conditions under which you would or would not be willing to offer that permission are irrelevant.
Comment by hrimfaxi 1 day ago
Are you willing to bear the burden of litigation?
Comment by duskdozer 1 day ago
Comment by devmor 1 day ago
Comment by kaashif 1 day ago
But that is very unlikely even if everyone adopted it, which they won't.
Comment by eloisant 1 day ago
Comment by imtringued 1 day ago
Copyright can only deny the right to make copies.
If someone spends years using your software and they have learned a mental model of how your software works, they can build an exact replica and there is nothing you can do about that since there is no copy you can sue over. Said user is also allowed to use AI tools to aid in the process.
What you want is an EULA, which is a contract users explicitly have to agree to. A license file only grants access or the right to copy; it doesn't affect usage of your software.
Comment by bluebarbet 1 day ago
Whether or not this is technically correct, a comment that begins this way is unlikely to be persuasive.
Comment by lisperforlife 1 day ago
Comment by bit1993 1 day ago
"AI slop is rapidly destroying the WWW; most of the content is becoming lower and lower quality, and it is difficult to tell if it's true or hallucinated. Pre-AI web content is now more like the gold standard in terms of correctness; browsing the Internet Archive is much better. This will only push content behind paywalls, and a lot of open-source projects will go closed source, not only because of the increased work maintainers have to do to review and audit patches for potential AI hallucinations, but also because their work is being used to train LLMs and re-licensed as proprietary."
Comment by teleforce 1 day ago
Replace AI with "open source and Linux", and "open source" with "Windows" in the statements. That's what Microsoft's PR team would have said about open source and Linux about 20 years back in the 2000s.
After the unsuccessful FUD era, Microsoft is now embracing Linux, running it alongside Windows via WSL to counter macOS's Unix-like popularity, and because of Linux and open source dominance in the cloud OS demographic.
Comment by TeMPOraL 1 day ago
Comment by pietz 1 day ago
The media momentum of this threat really came with Mythos, which was like 2 or 3 weeks ago? That seems like a fairly short time to pivot your core principles like that. It sounds to me like they wanted to do this for other business related reasons, but now found an excuse they can sell to the public.
(I might be very wrong here)
Comment by Gormo 8 hours ago
But this has always been the reality of security: it's always been fundamentally an economic question about which party has stronger incentives and greater resources than the other. The increasing sophistication of AI is available to both parties equally, so I don't see how AI in itself fundamentally changes the equation.
Comment by mgdev 1 day ago
It also means that you need to extract enough value to cover the cost of said tokens, or reduce the economic benefit of finding exploits.
Reducing economic benefit largely comes down to reducing distribution (breadth) and reducing system privilege (depth).
One way to reduce distribution is to raise the price.
Another is to make a worse product.
Naturally, less valuable software is not a desirable outcome. So either you reduce the cost of keeping open (by making closed), or increase the price to cover the cost of keeping open (which, again, also decreases distribution).
The economics of software are going to massively reconfigure in the coming years, open source most of all.
I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.
Comment by cassianoleal 1 day ago
So each time you roll the dice you gamble on getting a fresh set of 0-days? I don't get why anyone would want this.
Comment by mgdev 1 day ago
Project model capabilities out a few years. Even if you only assume linear improvement, at some point your risk-adjusted outcome lines cross each other and this becomes the preferred way of authoring code - code nobody but you ever sees.
Most enterprises already HATE adopting open source. They only do it because the economic benefit of free reuse has traditionally outweighed the risks.
If you need a parallel: we already do this today for JIT compilers. Everything is just getting pushed down a layer.
Comment by jodrellblank 1 day ago
You’ll accept the delay because by then it happens faster than Microsoft can make a splashscreen and window open from a local nvme drive. And because you can customise Excel’s feature set by simply posting a Reddit comment where you hallucinate using a feature that Excel doesn’t have and waiting a couple of days.
[although it can be difficult to find the real Reddit to post on as your web browser will tend to synthesise the experience of visiting any website using a cloud AI model of every website without connecting to the real one at all. This was widely loved as a security measure and since most websites are AI written content on AI written codebases, makes less difference than you’d first think]
Comment by mgdev 14 hours ago
Comment by cassianoleal 22 hours ago
No I don't. I build predictable and deterministic pipelines. If I rebuild from a specific git sha, I expect the same output. If I get something different, I need to fix what's causing that.
Comment by mgdev 14 hours ago
Nothing precludes you from doing that with AI-gen code vs human-gen code. What you just described is downstream.
If you have a human authoring code, you re-roll every time they release a new version. AI just releases versions faster, and in response to different, faster-moving inputs.
Comment by xigoi 1 day ago
Comment by jstummbillig 1 day ago
That can't be right, can it? Given stable software, the relative attack surface keeps shrinking. Mythos does not produce exploits. Should be the defender's advantage, token-wise, no?
Comment by rhplus 1 day ago
Defenders have to find all the holes in all their systems, while attackers just need to find one hole in one system.
Comment by lexlambda 1 day ago
Comment by jstummbillig 1 day ago
Comment by JoshTriplett 1 day ago
AI in general will, don't worry. "Move fast and break things" makes more exploits than "move steadily and fix things" does.
Comment by Gormo 8 hours ago
Comment by JoshTriplett 8 hours ago
Comment by Gormo 7 hours ago
But why would responsible AI users -- actual engineers using it to accelerate grunt work, not vibe coders -- not use the AI tooling to increase their capacity to do all of the work it takes to avoid breaking things while still moving fast, relatively speaking?
Testing a new incremental feature against the entire extant codebase, not just the bits of it that they had the bandwidth to tackle within the deadline, seems like exactly the sort of thing well-disciplined engineering teams would use AI to do.
Comment by habinero 4 hours ago
For one, you architect your codebase into separate layers and logical chunks that are self-contained and can be reasoned about independently. That's not always possible, but you draw as many firm boundaries as you can. You don't ever want to be in the position where you have to test an entire codebase against your new change. That's a horrible nightmare scenario.
So you don't "test as much of the codebase as you have time for", you write tests for your code and the interface between it and other systems. Maybe integration or FE tests depending on what you have.
So testing against a whole codebase is rarely the problem, and if it is, you have bigger issues.
Also, LLMs don't make mistakes like humans do. They fuck up in weird unpredictable ways that mean you kinda have to treat them like a hostile adversary trying to sneak in subtle backdoors. It slows things down.
Also, actually writing code is usually the fast and easy part. It's all the other bits -- getting the requirements, building mockups, planning, review, standing up new infra etc etc etc. LLMs can't help with most of that.
Comment by paisawalla 1 day ago
Comment by skybrian 1 day ago
Your average open source library isn’t going to get that scrutiny, though. It seems like it will result in consolidation around a few popular libraries in each category?
Comment by layer8 1 day ago
Comment by haritha-j 1 day ago
Comment by MerrimanInd 1 day ago
Comment by jeroenhd 1 day ago
There are ways to use LLM service providers that leave no tokens unused, by just billing per token. Unsurprisingly, this quickly becomes much more expensive than subscriptions.
Comment by lrvick 1 day ago
Comment by jeroenhd 1 day ago
The big, impressive models all scale well for multi-customer setups because of the efficiency batching provides, but the base cost to run models like that as even a small business is incredibly high. If you can't saturate your LLM hardware almost 24/7, the time to earn back your investment is high unless you choose inferior models that are worse at their job.
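As a back-of-the-envelope sketch of that utilization argument, where the hardware cost, throughput, and API price are all made-up assumptions:

```python
# Rough self-hosting payback model; every figure here is assumed.

def payback_months(hardware_cost: float, utilization: float,
                   tokens_per_hour_at_full_load: float,
                   api_price_per_mtok: float) -> float:
    """Months to earn back GPU hardware via saved API spend,
    given the fraction of the day the hardware is actually busy."""
    tokens_per_month = tokens_per_hour_at_full_load * 24 * 30 * utilization
    savings_per_month = tokens_per_month / 1e6 * api_price_per_mtok
    return hardware_cost / savings_per_month

# Saturated vs. mostly idle hardware (assumed: $30k rig, 2M tokens/hour,
# $3 per million tokens saved):
print(round(payback_months(30_000, 0.9, 2e6, 3.0), 1))  # ~90% busy
print(round(payback_months(30_000, 0.1, 2e6, 3.0), 1))  # ~10% busy
```

Since savings scale linearly with utilization, a rig that sits 90% idle takes nine times longer to pay for itself, which is the whole "saturate it almost 24/7" point.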
Comment by lrvick 6 hours ago
But also the Strix Halo 128 is pretty hard to beat.
Comment by Guillaume86 1 day ago
At the moment LLM vendors are in market-grab mode and take a loss on big subscription users; they are starting to try to move to profit, but they must move carefully so as not to let a competitor steal their users, so we will still have "cheap" tokens for a while.
Even if prices go up by a bit, they have the scale in their favor to optimize costs.
If commercial model providers go into "not competitive" territory with their prices compared to open models, wouldn't it always be cheaper to use an open models inference provider? They can take advantage of scale as well, and with no model moat, competition should keep prices honest.
And as a last resort, renting GPU time in the cloud seems like a safer bet than buying a GPU to me.
Comment by throwuxiytayq 1 day ago
Comment by rswail 1 day ago
Comment by flying_sheep 1 day ago
This is true up to a certain point, unless the requirement/contract itself has a loophole which the attacker can exploit without limit. But I don't think this is the case.
Let's say someone found a loophole in sort() which can cause denial-of-service. The cause would be the implementation itself, not the contract of sorting. People + AI will figure it out and fix it eventually.
Comment by pllbnk 1 day ago
Comment by aleph_minus_one 1 day ago
This is not true.
The problem rather is that the managers of many companies don't allow their programmers to apply their knowledge about security - the programmers are instead expected to churn out new features.
Comment by criddell 1 day ago
Comment by simonw 1 day ago
(I just hope they can learn to verify the exploits are valid before sharing them!)
Comment by Mordisquitos 1 day ago
Comment by dspillett 1 day ago
I might like to live there.
Comment by raincole 1 day ago
https://openssf.org/tag/google
"But that's Linux, how do small libraries get audit budget..." fortunately LLMs have eliminated the need to have small libraries in your dependency chain.
Comment by dspillett 1 day ago
I take back the “I might like to live there” :)
Comment by techpression 1 day ago
Comment by alienbaby 1 day ago
LLMs will find your issues faster, but not necessarily more accurately than a domain expert. But experts cost money, and their effort takes longer to apply.
Are LLMs going to reduce everyone's wages because they are cheap labour?
Comment by tonymet 1 day ago
For projects with NO WARRANTY, the risk is minimal, so yes there are upsides.
For a commercial project like cal.com, where a breach means massive liability, they don’t have the resources to risk breaches in the short term for potentially better software in the long term.
Comment by not-chatgpt 1 day ago
I'd give them more credit if they used the AI-slop unmaintainability argument.
Comment by habinero 4 hours ago
Comment by ryanleesipes 1 day ago
Our scheduling tool, Thunderbird Appointment, will always be open source.
Repo here: https://github.com/thunderbird/appointment
Come talk to us and build with us. We'll help you replace Cal.com
Comment by raybb 1 day ago
Sounds like a great tool though. How much is the hosted version?
Comment by ryanleesipes 1 day ago
Comment by bean469 1 day ago
Comment by devmount 1 day ago
Comment by m3nu 1 day ago
Comment by sashimimono 1 day ago
Comment by hedora 1 day ago
As a datapoint: FF + Chrome with lots of stuff open uses 2.6GB on my machine. With XFCE and a GB of other apps, it’s using about 4GB. 15 year old machine. Perf is fine.
Comment by carlosjobim 1 day ago
Comment by sashimimono 1 day ago
Comment by jen729w 1 day ago
2. Gives email address.
3. Is told to join the waitlist.
4. Blocks email address given at 2.
Hardly a terrific experience.
Comment by kewisch 1 day ago
Comment by ryanleesipes 1 day ago
Comment by winrid 1 day ago
do we need an appointment :)
Comment by bean469 1 day ago
Comment by ezekg 1 day ago
Comment by ryanleesipes 1 day ago
Comment by ButlerianJihad 1 day ago
A few years ago, I invoked Linus's Law in a classroom, and I was roundly debunked. Isn't it a shame that it's basically been fulfilled now with LLMs?
Comment by benjaminoakes 2 minutes ago
Comment by johnfn 1 day ago
Comment by utopiah 1 day ago
No, attackers are also rational economic actors. They don't randomly attack any software just for the aesthetic beauty of the process. They attack for bounty, for fame, for national interest, etc. No matter the reason, it's not random, and thus they DO have a budget, both in time and money. They attack THIS project versus another project because it's interesting to them. If it's not, they might move to another project, but they certainly won't spend infinite time, precisely because they don't have infinite resources. IMHO it's much more interesting to consider the realistic arms race than theoretical scenarios that never take place.
Comment by johnfn 21 hours ago
Comment by mixdup 1 day ago
Comment by johnfn 21 hours ago
Comment by techpression 1 day ago
Comment by stavros 1 day ago
Comment by rhubarbtree 1 day ago
Comment by johnfn 21 hours ago
Comment by r2vcap 1 day ago
It has also become a trend that LLM-assisted users are generating more low-quality issues, dubious security reports, and noisy PRs, to the point where keeping the whole stack open source no longer feels worth it. Even if the real reason is monetization rather than security, I can still understand the decision.
I suspect we will see more of this from commercial products built around a FOSS core. The other failure mode is that maintainers stop treating security disclosures as something special and just handle them like ordinary bugs, as with libxml2. In that sense, Chromium moving toward a Rust-based XML library is also an interesting development.
Comment by d3Xt3r 1 day ago
Comment by wartywhoa23 1 day ago
This game will end horribly.
Comment by vlapec 1 day ago
But you won't keep the doors open for others to use them against it.
So it is, unfortunately, understandable in a way...
Comment by paprikanotfound 1 day ago
Comment by layer8 1 day ago
Comment by ygjb 1 day ago
LLMs, and tools built to use them, are violating a lot of assumptions these days.
Comment by thombles 1 day ago
Comment by sandeepkd 1 day ago
Comment by pcblues 1 day ago
Comment by Terretta 1 day ago
OTOH, their position seems to be that "many LLMs make bugs shallow" is unhelpful, same as "many eyes make bugs shallow" was considered unhelpful.
What seems genuinely needed by the open source economy, to both surface these latent vulns and tamp down finding-slop, is a new https://bughook.github.com/your/repo/ that these big LLMs (Mythos, etc.) support. Mythos understands if it's been used to find a vuln, and the back end auto-reports verified findings that the git service can feed to a Dependabot-type tool.
Even better, price Mythos up to cover running a background verifier that fetches the project and revalidates the issue before hitting that bughook.
Meanwhile, train it on these findings, so its future self doesn't create them.
Comment by pixel_popping 1 day ago
Comment by genxy 1 day ago
Comment by eloisant 1 day ago
Comment by sandeepkd 1 day ago
Comment by pianopatrick 1 day ago
Comment by samename 1 day ago
Comment by yawndex 1 day ago
Comment by evanelias 1 day ago
Did they ever promise to keep their codebase FOSS forever, in a way that differs from what they're already doing over at cal.diy? If not, I don't see why it would be reasonable to expect them to spend a huge amount of money re-scanning on every single commit/deploy in order to keep their non-"DIY" product open source.
Comment by layer8 1 day ago
Comment by pcblues 1 day ago
Comment by dgellow 1 day ago
But you might need thousands of sessions to uncover some vulnerabilities, and you don’t want to stop shipping changes because the security checks are taking hours to run
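A rough way to see why it can take thousands of sessions: if each audit session independently surfaces a given bug with probability p, the session count follows a geometric distribution. A sketch with invented probabilities:

```python
import math

def sessions_for_confidence(p: float, confidence: float) -> int:
    """Sessions needed so the chance of at least one hit reaches
    `confidence`, i.e. the smallest n with 1 - (1 - p)^n >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p))

# Assumed per-session hit rates, for illustration only:
print(sessions_for_confidence(0.001, 0.95))  # deep bug: thousands of runs
print(sessions_for_confidence(0.05, 0.95))   # shallow bug: dozens of runs
```

Which is why treating this as a blocking per-commit check is painful: the deep bugs are exactly the ones whose expected discovery cost dwarfs any single CI run.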
Comment by fwip 1 day ago
It's not a symmetric game, either. On defense, you have to get lucky every time - the attacker only has to get lucky once.
Comment by earthnail 1 day ago
This! I love OSS but this argument seems to get overlooked in most of the comments here.
Comment by maxloh 1 day ago
Comment by gouthamve 1 day ago
I feel like with AI, self-hosting software reliably is becoming easier so the incentives to pay for a hosted service of an OSS project are going down.
Comment by tecoholic 1 day ago
Comment by fhn 1 day ago
Comment by badgersnake 1 day ago
Wanna sack a load of staff? - AI
Wanna cut your consumer products division? - AI
Wanna take away the source? - AI
Comment by rhubarbtree 1 day ago
Comment by esafak 1 day ago
Comment by gp14 1 day ago
Comment by rhubarbtree 1 day ago
Comment by bensyverson 1 day ago
Comment by no_wizard 1 day ago
It has always been odd to me they didn’t have this functionality years ago. It’s been requested for a long long time
Comment by kartika36363 1 day ago
Comment by Tepix 1 day ago
I'm not sure I agree with Drew Breunig, however. The number of bugs isn't infinite. Once we have models that are capable enough and scan the source code with them at regular intervals, the likelihood of remaining bugs that can be exploited goes way down.
Comment by doytch 1 day ago
Comment by keeda 1 day ago
Comment by traderj0e 1 day ago
Comment by 1970-01-01 1 day ago
Comment by dspillett 1 day ago
Comment by jqbd 1 day ago
Comment by 1970-01-01 1 day ago
Comment by dspillett 1 day ago
Even if the back-end is never fully distributed, any front-end code obviously has to be. And even if that contains minimal logic, perhaps little more than navigation and validation to avoid excess UA/server round-trip latency, the inputs and outputs are still easily open to investigation (by humans, humans with tools, or more fully automated methods), so by closing source you've only protected yourself from a small subset of vulnerability-discovery techniques.
This is all especially true if your system was recently more completely open, unless a complete clean-room rewrite is happening in conjunction with this change.
Comment by 1970-01-01 1 day ago
Comment by behringer 1 day ago
Comment by ergocoder 1 day ago
Comment by Peer_Rich 1 day ago
Comment by simonw 1 day ago
Comment by doytch 1 day ago
I understand why this is a tempting thing to do in a "STOP THE PRESSES" manner where you take a breather and fix any existing issues that snuck through. I don't yet understand why when you reach steady-state, you wouldn't rely on the same tooling in a proactive manner to prevent issues from being shipped.
And if you say "yeah, that's obv the plan," well then I don't understand what going closed-source _now_ actually accomplishes with the horses already out of the barn.
Comment by throwaway5752 1 day ago
Give him $100 to obtain that capability.
Give each open source project maintainer $100.
Or internalize the cost if they all decide the hassle of maintaining an open source project is not worth it any more.
I'm not aiming this reply at you specifically, but it's the general dynamic of this crisis. The real answer is for the foundational model providers to give this money. But instead, at least one seems to care more about acquiring critical open source companies.
We should openly talk about this - the existing open source model is being killed by LLMs, and there is no clear replacement.
Comment by toast0 1 day ago
If the tool correctly says you've got security issues, trying to hide them won't work. You still have the security issues and someone is going to find them.
Comment by evanelias 1 day ago
Comment by wild_egg 1 day ago
Comment by keeda 1 day ago
Comment by NetMageSCW 19 hours ago
Comment by sambaumann 1 day ago
Comment by bayindirh 1 day ago
You can keep the untested branch closed if you want to go with “cathedral” model, even.
Comment by Maken 1 day ago
Comment by senko 1 day ago
Comment by NetMageSCW 19 hours ago
Comment by otabdeveloper4 1 day ago
Comment by bakugo 1 day ago
Comment by hypeatei 1 day ago
Is that true? Didn't the Mythos release say they spent $20k? I'm also skeptical of Anthropic here doing essentially what amounts to "vague posting" in an attempt to scare everyone and drive up their value before IPO.
Comment by discordianfish 1 day ago
Comment by ErroneousBosh 1 day ago
To what end? You can just look at the code. It's right there. You don't need to "hack" anything.
If you want to "hack on it", you're welcome to do so.
Would you like to take a look at some of my open-source projects your neighbour's kid might like to hack on?
Comment by pdntspa 1 day ago
Comment by quotemstr 1 day ago
Comment by diebillionaires 1 day ago
Comment by szszrk 1 day ago
So not really.
I think they went closed source as there are too many decent clones based off their code and they realized it's eating up their niche.
Comment by tudorg 1 day ago
We did consider arguments in both directions (e.g. easier to recreate the code, agents can understand better how it works), but I honestly think the security argument goes for open source: the OSS projects will get more scrutiny faster, which means bugs won't linger around.
Time will tell, I am in the open source camp, though.
Comment by microflash 1 day ago
Comment by opem 1 day ago
No you certainly didn't, otherwise you shouldn't have come up with such a meaningless excuse!
Comment by Hendrikto 1 day ago
So do that and fix your bugs. This post makes no sense.
Comment by usernametaken29 1 day ago
Comment by iancarroll 1 day ago
Comment by _pdp_ 1 day ago
IMHO, open source will continue to exist and it will be successful, but the existence of AI is a deterrent for most. Let's be honest: in recent times the only reason startups went open source first was to build a community and an organic growth engine powered by early adopters. Now this is no longer viable, and in fact it simply helps competitors. So why do it then?
The only open source that will remain will be the real open source projects that are true to the ethos.
Comment by evanjrowley 1 day ago
Attribution isn't required for many permissive open source licenses. Dependencies with those licenses will oftentimes end up inside closed-source software. Even if there isn't FOSS in the closed-source software, basically everyone's threat model includes (or should include) "OpenSSL CVE". On that basis, I doubt Cal is accomplishing as much as they hope to by going closed source.
Comment by Gormo 8 hours ago
> The only open source that will remain will be the real open source projects that are true to the ethos.
Well, the second point seems like the answer to the previous question. The original model of monetizing FOSS -- support contracts, risk indemnification, etc., for an otherwise functionally equivalent product -- will still remain viable.
But those trying to thread the needle of trying to use open-source to push a "freemium" model are now going to hit a wall: if you were withholding features from the community version in order to paywall them for the premium version, and now AI has made it easy for users to add those features back without paying you, then you're screwed. The people who were going to use AI to bypass your paywall are still not going to be your customers, but you no longer have the differentiator to put you ahead of the competitors that were already closed-source to begin with for the customers who are willing to pay.
I originally deployed Cal.com because I wanted an open-source solution. But now, why would I choose a closed-source Cal.com over Calendly? If I'm forced to go SaaS, I'll probably go with the more widely used Calendly. If I'm not forced to go SaaS, I'll forego them both, and go back to something like EasyAppointments, knowing that I won't be in conflict with the authors if I choose to add my own "premium" features to it, whether with AI or by hand. All Cal.com did here was remove any chance that I'd ever pay them anything.
Comment by fedeb95 1 day ago
Otherwise, copying code and improving it, whether with AI or with humans, is the same thing, as long as the product improves.
I doubt that many semi-automated AI copies can really improve a product more than the original team can, for genuinely solid products.
AI will act as a filter against bad quality.
Comment by robinhood 1 day ago
I would rather say that the core product is not strong and differentiated enough to resist this new age of coding, and it's an attempt to protect revenues.
Comment by abound 1 day ago
Comment by sgbeal 1 day ago
(Enter name of large software vendor here) has long since proven that security through obscurity is not a real thing.
Comment by whatiathisnon 1 day ago
What's worse is you're choosing to keep it buggy behind closed doors so no one can see the bugs. That's 100% the wrong approach.
Comment by theahura 1 day ago
Comment by Gormo 7 hours ago
The "clean room" part of clean-room reverse engineering implies that there is no exposure to the original copyrighted code on the part of those doing the reimplementation, whether human developers or AI. Traditionally, if you're working off the source code itself, you have one party translate the source code back into a design document, specifying behavior, and then you have another party implement that design spec with original code.
If you already have a running copy of the software to model the behavior off of, then you don't need the original source code in the first place. So going closed source will have zero effect on the capacity of AI tools to be used for clean room reverse engineering: all you need is the runtime.
> But I'd want to see more adoption of something like the Ship of Theseus license (https://github.com/tilework-tech/nori-skillsets/pull/465/cha...) before giving up on open source entirely
This license doesn't seem valid: a license can't redefine what qualifies as a derivative work. That's determined by copyright law itself, and if copyright law says that a clean-room reimplementation isn't a derivative work, then it isn't restricted by copyright, so doesn't need a license in the first place.
Comment by sgbeal 1 day ago
Since such "clean room" implementations ostensibly do not see the source, it's arguably irrelevant whether such sources are open or not. Such implementations will happen regardless of whether the sources they're reimplementing are opened or closed.
Comment by NetMageSCW 19 hours ago
Whose theory? That makes no sense at all. The creator can spend the same amount on tokens whether it is closed or open source.
Comment by ezekg 1 day ago
Comment by Tepix 1 day ago
They should provide free, ongoing security analysis of git commits for open source projects. That would increase the quality of open source projects and inspire more projects to go open source, which is also a win for the AI companies.
Comment by alienbaby 1 day ago
Scan everyone's code, for free. Make all code as secure as an LLM can make it as a baseline.
Comment by zerotoship 4 hours ago
Comment by andsoitis 1 day ago
It seems like an easy decision, not a difficult one.
Comment by aswerty 1 day ago
One must assume this was a direction they wanted to move towards and this is the justification they thought would be most palatable.
Comment by m11a 1 day ago
Not to mention, I presume the core bits of Cal.com's source code are already in place and aren't going to change significantly?
Like, this feels like a business decision and not a security decision
Comment by woodruffw 1 day ago
If the null hypothesis is that LLMs are good at finding bugs, full stop, then it's unclear to me that going closed actually does much to stop your adversary (particularly as a service operator).
Comment by com2kid 1 day ago
Proposition 2: The most popular shared libraries are going to be quickly torn apart by LLM security tools to find vulnerabilities
Proposition 3: After a brief period of mass vulnerability discovery, the overall quality of shared libraries will dramatically increase.
Conclusion: After the initial wave of vulnerabilities has passed, the main threat to open source code bases is in their own comparatively small amount of code.
Comment by dang 1 day ago
Open Source Isn't Dead - https://news.ycombinator.com/item?id=47780712
Cybersecurity looks like proof of work now - https://news.ycombinator.com/item?id=47769089
Comment by notnullorvoid 1 day ago
For example using something like Next.js means a very large chunk of important obscurity is thrown out the window. The same for any publicly available server/client isomorphic framework.
Comment by smetannik 1 day ago
Comment by mellosouls 1 day ago
I thought this was grandiose and projecting their own weakness onto others, an extremely unappealing marketing position that may get clicks in the short term but will undermine trust beyond that.
Comment by 8260337551 8 hours ago
Comment by Gormo 8 hours ago
This move by Cal.com seems to be transparently an attempt to maintain that paywall against users who'd otherwise just use LLMs to remove it. I guess it's back to EasyAppointments, which still seems to work just fine.
Comment by egorfine 1 day ago
That's right. Nothing.
Comment by wartywhoa23 1 day ago
Comment by egorfine 1 day ago
And given that they will not rewrite the whole codebase in the next few days it means that security vulnerabilities are still there to be discovered by someone willing to pay the AI tax.
Comment by swordsith 1 day ago
Comment by constantlm 1 day ago
Comment by codegeek 1 day ago
Comment by a-fadil 1 day ago
Comment by dnnddidiej 1 day ago
Maybe you are referring to the whole Github thing.
Comment by mynameisvlad 1 day ago
Comment by dnnddidiej 1 day ago
* Someone lols at code. Answer: ignore them.
* Someone sees your vulns. Answer: someone is already trying to hack you anyway.
Comment by wartywhoa23 1 day ago
Comment by bearsyankees 1 day ago
Comment by Nukahaha 1 day ago
Comment by ButlerianJihad 1 day ago
Comment by abusedmedia 1 day ago
Comment by james-clef 23 hours ago
Comment by evanjrowley 1 day ago
Comment by NetMageSCW 19 hours ago
Comment by amazingamazing 1 day ago
Comment by theturtletalks 1 day ago
Comment by NetMageSCW 19 hours ago
Comment by alance 1 day ago
Comment by eloisant 1 day ago
"But if everyone can read the source code, they'll be able to find vulnerabilities more easily!"
No. Security by obscurity has been proven wrong.
Comment by ernsheong 1 day ago
Comment by axeldunkel 1 day ago
Comment by traderj0e 1 day ago
That said, I agree with another commenter that this seems like more of a business decision than a security one.
Comment by femto 1 day ago
Comment by rbbydotdev 21 hours ago
I think people really like how it's free (runs on Google Apps Script) and open source.
I've personally moved on to Google's free Gmail calendar scheduling tool, which strangely took pretty long to come to market. Calendly stretches back to ... 2013?
Scheduling oddly feels a little niche (maybe less so today?), when it shouldn't be. Maybe there's some more opportunity there.
Comment by wqtz 1 day ago
I always say to just stop with the virtue-signaling-led sales technique.
I despise the "we are like the market leader of our niche, but open source" angle. Developers, as buyers and as a community, in my opinion no longer care about open source. There is no long-term value in it. The moment a product gets traction, the open source element becomes a constant mild headache: an open source product means they have no intellectual property over the core aspect of the product, and it is hard to raise money or sell the company. And whenever a product gets traction, they will take any excuse to make it closed source again. With an open source product they are just coasting on brand. Regardless of your personal opinion, this has been largely true for most for-profit businesses.
Open source is largely nothing more than a branding concept for a company backed by investors.
Comment by wartywhoa23 1 day ago
And a religion that was invented by those who wanted to have all the world's code for free to train AI to code.
Comment by thegdsks 1 day ago
Comment by dhruv3006 1 day ago
Comment by adamtaylor_13 1 day ago
This post's argument seems circular to me.
Comment by sreekanth850 1 day ago
Comment by asdev 1 day ago
Comment by mastermage 1 day ago
Comment by huslage 1 day ago
Comment by poisonborz 1 day ago
Comment by nativeit 1 day ago
Comment by lrvick 1 day ago
AI can clone something like cal.com with or without source code access, so in trying to pointlessly defend against AI they are just ruining the trust they built with their customers, which is the one thing AI can never create out of thin air.
We exclusively run our companies with FOSS software we can audit or change at any time because we work in security research so every tool we choose is -our- responsibility.
They ruined their one and only market differentiator.
We will now be swapping to self hosting ASAP and canceling our subscriptions.
Really disappointing.
Meanwhile at Distrust and Caution we will continue to open source every line of code we write, because our goal is building trust with our customers and users.
Comment by kartika36363 1 day ago
Comment by sadeshmukh 1 day ago
Comment by lapinovski 1 day ago
Comment by xnx 1 day ago
Comment by CamperBob2 1 day ago
AI also goes a long way towards erasing the distinction between source code and executable code. The disassembly skill of a good LLM is nothing short of jaw-dropping.
So going closed-source may be safer for SaaS, but closing the source won't save a codebase from being exploited if the binaries are still accessible to the public. In that sense, instead of dooming SaaS as many people have suggested AI will do, it may instead be a boon.
Comment by analogpixel 1 day ago
Comment by fontain 1 day ago
Comment by t0mas88 1 day ago
Comment by abound 1 day ago
Comment by fedeb95 1 day ago
Comment by ButlerianJihad 1 day ago
That is not true.
https://en.wikipedia.org/wiki/Security_through_obscurity
Security through obscurity doesn't work in isolation. It doesn't work as the only measure. It's discouraged because people mistake it for a panacea.
But it also doesn't hurt in many instances. Holding back your source code can be a strategic advantage. It does mean that adversaries can't directly read it (nor can your friends or allies!)
A proprietary protocol or file format is also "security through obscurity", and it may slow down or hinder an attacker. Obscurity may be part of a "defense in depth" strategy that includes robust and valid methods as well.
But it is harmful to baldly claim that "it doesn't work".
Comment by aizk 1 day ago
Comment by creatonez 1 day ago
Comment by jhatemyjob 1 day ago
Comment by post-it 1 day ago
- Well, did it work for those companies?
- No, it never does. I mean, these companies somehow delude themselves into thinking it might, but... but it might work for us.
Comment by jemiluv8 1 day ago
Comment by ltbarcly3 1 day ago
Hi {audience},
It is with a heavy heart that I have to announce that {thing we were going to do anyway} is necessary due to AI. AI has changed the industry and we are powerless to do anything other than {unpopular decision we were going to do regardless}.
Comment by theturtletalks 1 day ago
Comment by hmokiguess 1 day ago
That said, I think it's important to try to recognize where things are from multiple angles rather than bucket things from your filter bubble alone; fear sells and we need to stop buying into it.
Comment by dec0dedab0de 1 day ago
Comment by neuroelectron 1 day ago
Charge for api access, take a cut of the extensions economy.
How do I do that? I'm open source.
Comment by behringer 1 day ago
Comment by barelysapient 1 day ago
Comment by righthand 1 day ago
Comment by righthand 1 day ago
Comment by tokai 1 day ago
Comment by popalchemist 1 day ago
Comment by ezekg 1 day ago
Comment by liamgm 1 day ago
Comment by ezekg 1 day ago
Comment by pcblues 1 day ago
Comment by quotemstr 1 day ago
Comment by zb3 1 day ago
Comment by dspillett 1 day ago
At your cost.
Every time you push (or, if not that, at least every time there is a new version that you call a release).
Including every time a dependency updates, unless you pin specific versions.
I assume (caveat: I've not looked into the costs) many projects can't justify that.
Though I don't disagree with you that this looks like a commercial decision with “LLM based bug finders could find all our bad code” as an excuse. The lack of confidence in their own code while open does not instil confidence that it'll be secure enough to trust now closed.
Comment by zb3 1 day ago
I believe that N companies using an open source project and contributing back would make this burden smaller than for one company using the same closed-source project.
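A back-of-envelope sketch of that intuition, using entirely made-up figures (the audit cost and N=20 are illustrative assumptions, not anything from the thread):

```python
# Hypothetical: an LLM-driven security audit of a codebase costs some
# fixed amount per release, no matter who pays for it.
AUDIT_COST_PER_RELEASE = 10_000  # made-up dollar figure

def per_company_cost(num_companies: int) -> float:
    """Open source: N companies can pool one audit of the shared code."""
    return AUDIT_COST_PER_RELEASE / num_companies

# Closed source: each vendor audits its own private codebase alone.
closed_cost = AUDIT_COST_PER_RELEASE   # full cost borne by one vendor
open_cost = per_company_cost(20)       # cost per company when 20 share it

assert open_cost < closed_cost
```

The point being simply that the shared-audit cost shrinks linearly with the number of participants, while each closed-source vendor pays the full amount.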
Comment by redoh 1 day ago
Comment by sanghyunp 1 day ago
Comment by equinox6380 1 day ago
Comment by seyz 1 day ago
Comment by redsocksfan45 1 day ago
Comment by rvz 1 day ago
Great move.
Open-source supporters don't have a sustainable answer to the fact that AI models can find N-day vulnerabilities extremely quickly and swamp maintainers with issues and bug reports left hanging for days.
Unfortunately, this is where it is going, and the open-source software supporters did not foresee the downsides of open source maintenance in the age of AI, especially for businesses with "open-core" products.
Might as well close-source them to slow the attackers (with LLMs) down. Even SQLite has closed-sourced their tests, which is another good idea.
Comment by hayleox 1 day ago
It makes me think of how great chess engines have affected competitive chess over the last few years. Sure, the ceiling for Elo ratings at the top levels has gone up, but it's still a fair game because everyone has access to the new tools. High-level players aren't necessarily spending more time on prep than they were before; they're just getting more value out of the hours they do spend.
Comment by popalchemist 1 day ago
I think Cal are making the wrong call, and abandoning their principles. But it isn't fair to say the game is accelerating in a proportionate way.
See: https://www.youtube.com/watch?v=2CieKDg-JrA
Ultimately, he concludes that while in the short run the game defines the players' actions, an environment that makes cooperation too risky naturally forces participants to stop cooperating to protect themselves from being "exploited" (this bit is around 34:39 - 34:46)
Comment by hayleox 1 day ago
Comment by popalchemist 1 day ago
I think companies make decisions like this from a tactics level, not realizing that by doing so they are not only alienating their customers but misunderstanding the basic (often unconscious or unspoken) social contract upon which their very existence is predicated.
Calendly already existed. Cal came along and said, ok, but what if the code were out in the open -- auditable, self-hostable. Then you wouldn't have to worry about lock-in, security, privacy, etc, in the same way. Now they are removing that entire aspect of their value prop. It may be the only thing that caused a good portion of their customers to adopt in the first place.
Comment by wild_egg 1 day ago
Comment by ltbarcly3 1 day ago
Comment by zb3 1 day ago
Then good, that overengineered, intentionally-crippled crap should go away.