AI agents are starting to eat SaaS
Posted by jnord 16 hours ago
Comments
Comment by benzible 8 hours ago
For one thing, the threat model assumes customers can build their own tools. Our end users can't. Their current "system" is Excel. The big enterprises that employ them have thousands of devs, but two of them explicitly cloned our product and tried to poach their own users onto it. One gave up. The other's users tell us it's crap. We've lost zero paying subscribers to free internal alternatives.
I believe that agents are a multiplier on existing velocity, not an equalizer. We use agents heavily and ship faster than ever. We get a lot of feedback from users as to what the internal tech teams are shipping and based on this there's little evidence of any increase in velocity from them.
The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon.
Comment by SkyPuncher 1 hour ago
Yes, certain parts of our product are indeed just lightweight wrappers around an LLM. What you're paying for is the other 99% of the stuff, which is (1) extremely hard to do (and probably non-obvious), (2) an endless supply of "routine" work that still takes time, or (3) an SLA/support that amounts to more than "the random dev isn't on PTO".
Comment by robofanatic 2 minutes ago
Is that a bluff used to negotiate the price?
Comment by bob1029 5 hours ago
This is the answer to a happy B2B SaaS implementation. It doesn't matter what tools you use as long as this can be achieved.
In the domain of banking front/back office LOB apps, if you aren't iterating with your customer at least once per business day, you are definitely falling behind your competition. I've got semi-retired bankers insisting that live edits in production need to be a fundamental product feature. They used to be terrified of this. Once they get a taste of proper speed it's like blood in the water. I'm getting pushback on live reloads taking more than a few seconds now.
Achieving this kind of outcome is usually more of a meat space problem than it is a technology problem. Most customers can't go as fast as you. But the customer should always be the bottleneck. We've done things like install our people as temporary employees to get jobs done faster.
Comment by igortg 2 hours ago
Comment by chrisweekly 1 hour ago
Comment by wouldbecouldbe 1 hour ago
Being unrestrained by team protocols, communications, jira boards, product owners, grumpy seniors.
They can now deliver much more mature platforms, apps, and consumer products without any form of funding. You can easily save months on the basics like multi-tenant setup, tests, payment integration, mailing settings, etc.
It does seem likely that the software space is about to get even more crowded, but also much more feature-rich.
There is of course also a wide array of dreamers and visionaries who now jump into the developer role. Whether or not they are able to fully run their own platform, I'm not sure. I did see many posts asking for help at some point.
Comment by rapind 14 minutes ago
I've also eliminated some third party SaaS integrations by creating slimmer and better integrated services directly into my platform. Which is an example of using AI to bring some features in-house, not primarily to save money (generally not worth the effort if that's the goal), but because it's simply better integrated and less frustrating than dealing with crappy third-party APIs.
Comment by jeswin 6 hours ago
That's not the threat model. The threat model is that they won't have to - at some point which may not be right now. End users want to get their work done, not learn UIs and new products. If they can get their analysis/reports based on excels which are already on SharePoint (or wherever), they'd want just that. You can already see this happening.
Comment by TeMPOraL 5 hours ago
It's an ugly truth product owners never wanted to hear, and are now being forced to: nobody wants software products or services. No one really wants another Widgetify or DoodlyD.oo.io, or another basic software tool packaged into a bespoke UI and trying to make itself the command center of work in their entire domain. All those products and services just stand between the user and the thing the user actually wants. The promise of AI agents for end users is that of having a personal secretary who deals with all the product UI/UX bullshit so the user doesn't have to, ultimately turning these products into tool calls.
Comment by enraged_camel 51 minutes ago
We built an AI-powered chat interface as an alternative to a fully featured search UI for a product database and it has been one of the most popular features of 2025.
Comment by skywhopper 5 hours ago
Comment by TeMPOraL 5 hours ago
Comment by cpursley 2 hours ago
Comment by nprateem 4 hours ago
Comment by ethbr1 3 hours ago
For purposes of this thread, if chat AI becomes the primary business interface, then every service behind that becomes much easier to replace.
Comment by testbjjl 1 hour ago
Comment by immibis 2 hours ago
And if you build an AI interface to your product, you can make it fail in subtly self-serving ways that direct more money towards you. You can take advertising money to make the AI recommend certain products. You can make it give completely wrong answers to your competitors.
Comment by indymike 1 hour ago
I keep hearing this and seeing people buying more Widgetify or DoodlyD.oo.io. I think this is more of a defensive sales tactic and cope for SaaS losing market share.
Comment by adriand 3 hours ago
Think about all the cycles this will save. The CEO codes his own dashboards. The OP has a point.
Comment by William_BB 2 hours ago
This sounds like a vibe coding side project. And I'm sorry, but whatever he builds will most likely become tech debt that has to be rewritten at some point.
Comment by nlake906 1 hour ago
Comment by hrimfaxi 2 hours ago
Comment by hobs 56 minutes ago
Focus on the simple iteration loop of "why is it so hard to understand things about our product?" Maybe you can't fix it all today, but climb that hill instead of making your CEO spend sleepless nights on a thing that you could probably build in a tenth of the time.
If you want to be a successful startup SaaS software engineer, then engaging with the current and common business cases, and being able to predict the standard cache of problems they're going to want solved, turns you from "a guy" into "the guy".
Comment by vlugovsky 1 hour ago
I have also seen multiple similar use cases where non-technical users build internal tools and dashboards on top of existing data for our users (I'm building UI Bakery). This approach might feel a bit risky for some developers, but it reduces the number of iterations non-technical users need with developers to achieve what they want.
Comment by testbjjl 1 hour ago
Comment by cdurth 1 hour ago
Comment by lm28469 2 hours ago
Even if they could, the vast majority of them will be more than happy to send $20-100 per month your way to solve a problem than adding it to their stack of problems to solve internally.
Comment by reactordev 28 minutes ago
Comment by popcorncowboy 2 hours ago
Comment by Crowberry 7 hours ago
As it stands today, just a bit of complexity is all that is required to make AI agents fail. I expect the gap to narrow over the years, of course. But capturing complex business logic and simplifying it will probably be useful, and worth paying for, a long time into the future.
Comment by agwp 3 hours ago
This means any "manual" or existing workflow requiring access to several systems needs multiple IT permissions with defined scopes. Even something as simple as a sales rep sending a DocuSign might need:
- CRM access
- DocuSign access
- Possibly access to ERP (if CRM isn't configured to pass signed contract status and value across)
- Possibly access to SharePoint / Power Automate (if finance/legal/someone else has created internal policy or process, e.g. saving a DocuSign PDF to a folder, inputting details for handover to fulfilment or client success, or submitting ticket to finance so invoicing can be set up)
Comment by thorawaytrav 2 hours ago
Comment by j45 4 hours ago
I never understood the excitement around agents; initially they just appeared to me as Python scripts (CrewAI, 2-3 years ago).
The question is can people see that agents will evolve? Similar to how software evolves to handle the right depth of granularity.
Comment by ChicagoBoy11 1 hour ago
Comment by indymike 1 hour ago
Development tooling improvements usually are a temporary advantage and end up being table stakes after a bit of time. I'm more worried that as agentic tooling gets better it obsoletes a lot of SaaS tools where SaaS vendors count on users driving conventional point-and-click apps (web, mobile and otherwise). I'm encouraging the companies I'm involved with to look at moving to more communication-driven microexperience UIs - email, Slack, SMS, etc. - instead of more conventional UI.
Comment by adventured 1 hour ago
None of these people can apparently see beyond the tip of their nose. It doesn't matter if it takes a year, or three years, or five years, or ten years. Nothing can stop what's about to happen. If it takes ten years, so what, it's all going to get smashed and turned upside down. These agents will get a lot better over just the next three years. Ten years? Ha.
It's the personal interest bias that's tilting the time fog, it's desperation / wilful blindness. Millions of highly paid people with their livelihoods being disrupted rapidly, in full denial about what the world looks like just a few years out, so they shift the time thought markers to months or a year - which reveals just how fast this is all moving.
Comment by therealwhytry 6 minutes ago
Buyer pressure will eventually force process updates, but it is a slow burn. The bottleneck is rarely the tech or the partner, it's the internal culture. The software moves fast, but the people deeply integrated into physical infrastructure move 10x slower than you'd expect.
Comment by mlinhares 23 minutes ago
A lot of them will try though, just means more work for engineers in the future to clean this shit up.
Comment by CuriouslyC 2 hours ago
I'm gonna go ahead and guess that if you have open source competitors, within two years your moat is going to become marketing/sales given how easy it'll be to have an agent deploy software and modify it.
Comment by lwhi 2 hours ago
Corporates are allergic to risk; not to spending money.
If anything, I feel that SaaS and application development for larger organisations stands to benefit from LLM assisted development.
Comment by mbesto 35 minutes ago
My hot take - LLMs are exposing a whole bunch of developers to this reality.
Comment by Havoc 3 hours ago
Comment by ethbr1 3 hours ago
There's a huge subset of SaaS that's feature-frozen and being milked for ARR.
Comment by martinald 23 minutes ago
My article here isn't really aimed at "good" SaaS companies that put a lot of thought into design, UX and features. I'm thinking of the tens or hundreds of thousands of SaaS platforms that have been bought by PE or virtually abandoned, that don't work very well, and that push a 20% price increase through at every renewal.
Comment by j45 3 hours ago
Comment by MLgulabio 6 hours ago
This wasn't even an option for a lot of people before this.
For example, even for non-software-engineering tasks, I'm at an advantage. "Ah, you have to analyse these 50 Excel files from someone else? I can write something for it."
I sometimes start creating a small new tool I wouldn't have tried before; instead of using some open source project, I can vibe-spec it and get something out.
The interesting thing is that if I keep the base of my specs, I can regenerate the tool later with a better code model.
And we still don't know what will happen when compute gets expanded and expanded. Next year a few more DCs will come online and this will continue for now.
Also, tools like Google Firebase will get 1000x more useful with vibe coding. They provide basic auth and stuff like that, so you can actually focus on writing your code.
Comment by cpursley 4 hours ago
Comment by testbjjl 1 hour ago
Comment by cm277 3 hours ago
If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing cost of code (JVM, IDEs, post-Y2K Outsourcing) disrupted the ERP/b2b market. Oh wait, they didn't. The only real disruption in ERP in the last what 30 years, has been Cloud. Which is an economics disruption, not a technological one: cloud added complexity and points of failure and yet it still disrupted a ton of companies, because it enabled new business models (SaaS for one).
So far, the only disruption I can see coming from LLMs is middleware/integration where it could possibly simplify complexity and reduce overall costs, which if anything will help SaaS (reduction of cost of complements, classic Christensen).
Comment by ethbr1 2 hours ago
> what do LLMs disrupt? If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing cost of code (JVM, IDEs, post-Y2K Outsourcing) disrupted the ERP/b2b market. Oh wait, they didn't. The only real disruption in ERP in the last what 30 years, has been Cloud.
"Cost of developing code" is a trivial and incomplete answer.
Coding LLMs disrupt (or will, in the immediate future)
(1) time to develop code (with cost as a second order effect)
(2) expertise to develop code
None of the analogs you provided are a correct match for these.
A closer match would be Excel.
It improved the speed and lowered the expertise required to do what people had previously been doing.
And most importantly, as a consequence of especially the latter more types of people could leverage computing to do more of their work faster.
The risk to B2B SaaS isn't that a neophyte business analyst is going to recreate your app overnight...
... the risk is that 500+ neophyte business analysts each have a chance of replacing your SaaS app, every day, every year.
Because they only really need to get lucky once, and then the organization shifts support to in-house LLM-augmented development.
The only reason most non-technology businesses didn't do in-house custom development thus far was that ROI on employing a software development team didn't make sense for them. Suddenly that's no longer a blocker.
To the point about cloud, what did it disrupt?
(1) time to deploy code (with cost as a second order effect)
(2) expertise to deploy code
B2B SaaS should be scared, unless they're continuously developing useful features, have a deep moat, and are operating at volumes that allow them to be priced competitively.
Coding agents and custom in-house development are absolutely going to kill the 'X-for-Y' simple SaaS clone business model (anything easily cloneable).
Comment by agentultra 1 hour ago
The problem with this tooling is that it cannot deploy code on its own. It needs a human to take the fall when it generates errors that lose people money, break laws, cause harm, etc. Humans are supposed to be reviewing all of the code before it goes out, but your assumption is that people without the skills to read code, let alone deploy and run it, are going to do it with agents without a human in the loop.
All those non-technical users have to do is approve that app, manage to deploy and run it themselves somehow, and wait for the security breach to lose their jobs.
Comment by ethbr1 1 hour ago
The frequency of mind-bogglingly stupid 1+1=3 errors (where 1+1 is a specific well-known problem in a business domain and 3 is the known answer) cuts against your 'professional SaaS can do it better' argument.
And to be clear: I'm talking about 'outsourced dev to lowest-cost resources' B2B SaaS, not 'have a team of shit-hot developers' SaaS.
The former of which, sadly, comprises the bulk of the industry. Especially after PE acquisition of products.
Furthermore, I'm not convinced that coding LLMs + scanning aren't capable of surpassing the average developer in code security. Especially since it's a brute force problem: 'ensure there's no gap by meticulously checking each of 500 things.'
Auto code scanning for security hasn't been a significant area of investment because the benefits are nebulous. If you already must have human developers writing code, then why not have them also review it?
In contrast, scanning being a requirement to enabling fast-path citizen-developer LLM app creation changes the value proposition (and thus incentive to build good, quality products).
It's been mentioned in other threads, but Fire/Supabase-style 'bolt-on security-critical components' is the short term solution I'd expect to evolve. There's no reason from-scratch auth / object storage / RBAC needs to be built most of the time.
Comment by agentultra 11 minutes ago
They already lock down everything enterprise wide and hate low-code apps and services.
But in this day and age, who knows. The cynical take is that it doesn’t matter and nobody cares. Have your remaining handful of employees generate the software they need from the magic box. If there’s a security breach and they expose customer data again… who cares?
Comment by mikert89 3 hours ago
Comment by Bridged7756 1 hour ago
Comment by latentsea 47 minutes ago
Comment by f311a 2 hours ago
Comment by William_BB 2 hours ago
Comment by raw_anon_1111 3 hours ago
Comment by xtiansimon 2 hours ago
Comment by raw_anon_1111 2 hours ago
Comment by Bombthecat 7 hours ago
Comment by ben_w 4 hours ago
I'm expecting this to be a bubble, and that bubble to burst; when it does, whatever's the top model at that point can likely still be distilled relatively cheaply like all other models have been.
That, combined with my expectations that consumer RAM prices will return to their trend and decrease in price, means that if the bubble pops in the year 20XX, whatever performance was bleeding edge at the pop, runs on a high-end smartphone in the year 20XX+5.
Comment by j45 4 hours ago
The world might be holding AI to a standard of needing to be a world-beater to succeed, but it's simply not the case: AI is software, and it can solve problems other software can't.
Comment by ben_w 3 hours ago
Dot-com was a bubble despite being applicable to valuable problems. So were railways when the US had a bubble on those.
Bubbles don't just mean tulips.
What we've got right now, I'm saying the money will run out and not all the current players will win any money from all their spending. It's even possible that *none* of the current players win, even when everyone uses it all the time, precisely due to the scenario you replied to:
Runs on a local device, no way to extract profit to repay the cost of training.
Comment by j45 3 hours ago
Dot com is not super comparable to AI.
Dot com had very few users on the internet compared to today.
Dot com did not have ubiquitous e-commerce. The small group of users didn’t spend online.
Search engines didn’t have the amount of information online that there is today.
Dot com did not have usable high speed mobile data, or broadband available for the masses.
Dot com did not have social media to share and show how things can work as quickly.
LLMs were largely applicable to industry when GPT-4 came out. We didn't have the new terms of reference for non-deterministic software.
Comment by ben_w 2 hours ago
"Can they keep charging money for it?", that's the question that matters here.
Comment by bpavuk 2 hours ago
Shit, I'm stealing that quote! It's easier to seize an opportunity (i.e. build a tool that fixes problem X without causing annoying side effects Y and Z), but finding one is almost as hard as it has been since the beginning of the world wide web.
Comment by j45 4 hours ago
Not being able to see this is a blind spot.
Domain expertise in an industry usually sits within the client, and is serviced to some degree by vendors.
Not all CEOs have deep domain expertise, nor do they often enough stick to one domain. Maybe that’s where a gap exists.
Comment by jwr 7 hours ago
The worry is that customers who do not realize the full depth of the problem will implement their own app using AI. But that happens today, too: people use spreadsheets to manage their electronic parts (please don't) and BOMs (bills of materials). The spreadsheet is my biggest competitor.
I've been designing and building the software for 10 years now and most of the difficulty and complexity is not in the code. Coding is the last part, and the easiest one. The real value is in understanding the world (the processes involved) and modeling it in a way that cuts a good compromise between ease of use and complexity.
Sadly, as I found out, once you spend a lot of time thinking and come up with a model, copycats will clone that (as well as they can, but superficially it will look similar).
Comment by ehnto 6 hours ago
Which I don't think can be replaced by AI in a lot of cases. I think in the software world we are used to things being shared, open, and easily knowable, but a great deal of industry and enterprise domain knowledge is locked up inside companies and will not be in the training data.
That's why it's such a big deal for an enterprise to have on prem tools, to avoid leaking industry processes and "secrets" (the secrets are boring, but still secrets).
A little career advice in there too I guess. At least for now, you're a bit more secure as a developer in industries that aren't themselves software, is my guess.
Comment by jwr 6 hours ago
Yes. I try to visit my customers as often as I can, to learn how they work and to see the production processes on site. I consider it to be one of the most valuable things I can do for the future of my business.
Comment by dismalpedigree 4 hours ago
While rolling the whole solution with an AI agent is not practical, taking an open source starting point and using AI to overcome specific workflow pain points, as well as add features, allows me to have a lower-cost solution specifically tailored to our needs.
Comment by jwr 3 hours ago
This is actually a serious problem for me: my SaaS has a lot of very complex functionality under the hood, but it is not easily visible, and importantly it isn't necessarily appreciated when making a buying decision. Lot control is a good example: most people think it is only needed for coding batches of expiring products. In reality, it's an essential feature that pretty much everyone needs, because it lets you treat some inventory of the same part (e.g. a reel) differently from other inventory of this part (e.g. cut tape) and track those separately.
AI-coding will help people get the features they know they need, but it won't guide them to the features they don't know they could use.
Comment by lonelyasacloud 4 hours ago
Comment by a2code 5 hours ago
Comment by jwr 5 hours ago
I used to love CL and wrote quite a bit of code in it, but since Clojure came along I can't really see any reason to go back.
Comment by a2code 4 hours ago
Comment by TeMPOraL 5 hours ago
You have a product, which sits between your users and what your users want. That product has a UI for users to operate. Many (most, I imagine) users would prefer to hire an assistant to operate that UI for them, since the UI is not the actual value your service provides. Now, s/assistant/AI agent/ and you can see that your product turns into a tool call.
So the simpler problem is that your product now becomes merely a tool call for AI agents. That's what users want. Many SaaS companies won't like that, because it removes their advertising channel and commoditizes their product.
It's the same reason why API access to SaaS is usually restricted or not available for the users except biggest customers. LLMs defeat that by turning the entire human experience into an API, without explicit coding.
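To make the "tool call" framing concrete, here is a minimal sketch in Python (the endpoint, field names, and schema are invented for illustration, not any particular vendor's API): the whole product collapses into one function plus a description an agent framework can invoke.

```python
import os
import requests

# Hypothetical SaaS endpoint; in practice this is whatever API the vendor exposes.
BASE_URL = "https://api.example-saas.com/v1"

def search_inventory(query: str, limit: int = 10) -> list[dict]:
    """Search the SaaS product's inventory records and return matching rows."""
    resp = requests.get(
        f"{BASE_URL}/inventory/search",
        params={"q": query, "limit": limit},
        headers={"Authorization": f"Bearer {os.environ['SAAS_API_KEY']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]

# Tool description in the JSON-schema style most agent frameworks accept.
# From the agent's point of view, the entire product is now just this entry.
SEARCH_INVENTORY_TOOL = {
    "name": "search_inventory",
    "description": "Look up inventory records in the ERP by free-text query.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Free-text search terms"},
            "limit": {"type": "integer", "description": "Max rows to return"},
        },
        "required": ["query"],
    },
}
```

From the agent's side, the UI, onboarding flow and dashboards are invisible; it just decides when to make this call.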
Comment by mjr00 27 minutes ago
This is a big assumption, and not one I've seen in product testing. Open-ended human language is not a good interface for highly detailed technical work, at least not with the current state of LLMs.
> It's the same reason why API access to SaaS is usually restricted or not available for the users except biggest customers.
I don't... think this is true? Off the top of my head, aside from cloud providers like AWS/GCP/Azure which obviously provide APIs: Salesforce, Hubspot, and Jira all provide APIs either alongside basic plans or as a small upsell. Certainly not just for the biggest customers. You're probably thinking of social media, where Twitter/Reddit/FB/etc don't really give API access, but those aren't really B2B SaaS products.
Comment by jwr 3 hours ago
Comment by MangoToupe 4 hours ago
That's ridiculous. A good UI will improve on an assistant in every way.
Do assistants have some use? Sure—querying.
Comment by ben_w 4 hours ago
True.
"Good" UI seems to be in short supply these days, even from trillion dollar corporations.
But even with that, it is still not "ridiculous" for many to prefer to "hire an assistant to operate that UI for them". A lot of the complexity in UI is the balance between keeping common tasks highly visible without hiding the occasional-use stuff, allowing users to explore and learn more about what can be done without overwhelming them.
If I want a spaceship in Blender and don't care which one you get — right now the spaceship models that any GenAI would give you are "pick your poison" between Diffusion models' weirdness and the 3D equivalent of the pelican-on-a-bike weirdness — the easiest UI is to say (or type) "give me a spaceship", not doing all the steps by hand.
If you have some unformatted time series data and want to use it to forecast the next quarter, you could manually enter it into a spreadsheet, or you could say/type "here's a JPG of some time series data, use it to forecast the next quarter".
Again, just to be clear, I agree with everyone saying current AI is only mediocre in performance, it does make mistakes and shouldn't be relied upon yet. But the error rates are going down, the task horizons they don't suck at are going up. I expect the money to run out before they get good enough to take on all SaaS, but at the same time they're already good enough to be interesting.
Comment by jillesvangurp 6 hours ago
The fallacy here is believing we already had all the software we were going to use and that AI is now eliminating 90% of the work of creating that. The reality is inverted, we only had a fraction of the software that is now becoming possible and we'll be busy using our new AI tools to create absolutely massive amounts of it over the next years. The ambition level got raised quite a bit recently and that is starting to generate work that can only be done with the support of AI (or an absolutely massive old school development budget).
It's going to require different skills and probably involve a lot more domain experts picking up easy to use AI tools to do things themselves that they previously would have needed specialized programmers for. You get to skip that partially. But you still need to know what you are doing before you can ask for sensible things to get done. Especially when things are mission critical, you kind of want to know stuff works properly and that there's no million $ mistakes lurking anywhere.
Our typical customers would need help with all of that. The number of times I've had to deal with a customer that had vibe coded anything by themselves remains zero. Just not a thing in the industry. Most of them are still juggling spreadsheets and ERP systems.
Comment by pjc50 5 hours ago
> Especially when things are mission critical, you kind of want to know stuff works properly and that there's no million $ mistakes lurking anywhere.
This is what I'm wondering about; things don't change because the company doesn't like change, and the risks of change are very real. So changes either have to be super incremental, or offer such a compelling advantage that they can't be ignored. And AI just doesn't offer the sort of reproducible, reliable results that manufacturing absolutely depends on.
Comment by jillesvangurp 3 hours ago
It's just that messing with a company's core manufacturing is something they don't do lightly. They work with multiple shifts of staff that are supposed to work in these environments. People generally don't have a lot of computer skills, so things need to be simple, repeatable, and easy to explain. Any issues with production means cost increases, delays happen, and money is lost.
That being said, these companies are always looking for better ways to do stuff, to eliminate work that is not needed, etc. That's your way in. If there's a demonstrable ROI, most companies get a lot less risk averse.
That used to involve bespoke software integrations. Those are developed at great cost and with some non-trivial risk by expensive software agencies. Some of these projects fail, and failure is expensive. AI potentially reduces cost and risk here. E.g. a generic SAP integration isn't rocket science to vibe code. We're talking well-documented and widely used APIs here. You'd want some oversight and testing here obviously. But it's the type of low-level plumbing that traditionally gets outsourced to low-wage countries. Using AI here is probably already happening at a large scale.
Comment by lateforwork 14 hours ago
AI-generated code still requires software engineers to build, test, debug, deploy, secure, monitor, be on-call, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.
Comment by mjr00 14 hours ago
Yeah it's a fundamental misunderstanding of economies of scale. If you build an in-house app that does X, you incur 100% of the maintenance costs. If you're subscribed to a SaaS product, you're paying for roughly 1/N of the maintenance costs, where N is the number of customers.
I only see AI-generated code replacing things that never made sense as a SaaS anyway. It's telling the author's only concrete example of a replaced SaaS product is Retool, which is much less about SaaS and much more about a product that's been fundamentally deprecated.
Wake me up when we see swaths of companies AI-coding internal Jira ("just an issue tracker") and Github Enterprise ("just a browser-based wrapper over git") clones.
Comment by sayamqazi 7 hours ago
This shouldn't be the goal. The goal should be to build an AI that can tell you what is done and what needs to be done, i.e. replace Jira with natural interactions. An AI that can "see" and "understand" your project - one that can see it, understand it, build it and modify it. I know this is not happening for the next few decades or so.
Comment by mjr00 45 minutes ago
The difference is that an AI-coded internal Jira clone is something that could realistically happen today. Vague notions of AI "understanding" anything are not currently realistic and won't be for an indeterminate amount of time, which could mean next year, 30 years from now, or never. I don't consider that worth discussing.
Comment by jdthedisciple 8 hours ago
Are you as a dev still going to pay for analytics and dashboards that you could have propped up by Claude in 5 minutes instead?
Comment by rhubarbtree 8 hours ago
Generating code is one part of software engineering, and software engineering is a small part of SaaS.
Comment by InvertedRhodium 6 hours ago
Comment by re-thc 5 hours ago
Do you pay for OpenTelemetry? How is this related?
Comment by InvertedRhodium 4 hours ago
So, I ask again - how do you know that the service you're paying for is all of those things?
Comment by jdthedisciple 7 hours ago
Comment by vdfs 1 hour ago
Comment by CyanLite2 2 hours ago
Looks like we're headed back to the internal IT days of building customized LoB apps.
Comment by dboreham 1 hour ago
Comment by andy_ppp 15 hours ago
I’m pretty certain AI quadruples my output at least and facilitates fixing, improving and upgrading poor quality inherited software much better than in the past. Why pay for SaaS when you can build something “good enough” in a week or two? You also get exactly what you want rather than some £300k per year CRM that will double or treble in price and never quite be what you wanted.
Comment by Aurornis 14 hours ago
About a decade ago we worked with a partner company who was building their own in-house software for everything. They used it as one of their selling points and as a differentiator over competitors.
They could move fast and add little features quickly. It seemed cool at first.
The problems showed up later. Everything was a little bit fragile in subtle ways. New projects always worked well on the happy path, but then they’d change one thing and it would trigger a cascade of little unintended consequences that broke something else. No problem, they’d just have their in-house team work on it and push out a new deploy. That also seemed cool at first, until they accumulated a backlog of hard to diagnose issues. Then we were spending a lot of time trying to write up bug reports to describe the problem in enough detail for them to replicate, along with constant battles over tickets being closed with “works in the dev environment” or “cannot reproduce”.
> You also get exactly what you want rather than some £300k per year CRM
What’s the fully loaded (including taxes and benefits) cost of hiring enough extra developers and ops people to run and maintain the in house software, complete with someone to manage the project and enough people to handle ops coverage with room for rotations and allowing holidays off? It turns out the cost of running in-house software at scale is always a lot higher than 300K, unless the company can tolerate low ops coverage and gaps when people go on vacation.
Comment by torginus 6 hours ago
We often ended up discarding large chunks of these poorly tested features, instead of trying to get them to work, and wrote our own. This got to a point where only the core platform was used, and replacing that seemed to be totally feasible.
SaaS often doesn't solve issues but replaces them - you substitute general engineering knowledge and open-source know-how with proprietary knowledge, and end up with experts in configuring commercial software - a skill that has very little value in markets where said software is not used, and one that chains you to a given vendor.
Comment by mattmanser 4 hours ago
But what you're describing is the narrow but deep vs wide but shallow problem. Most SaaS software is narrow but deep. Their solution is always going to be better than yours. But some SaaS software is wide but shallow, it's meant to fit a wide range of business processes. Its USP is that it does 95% of what you want.
It sounds like you were using a "wide-shallow" SaaS in a "narrow-deep" way, only using a specific part of the functionality. And that's where you hit the problems you saw.
Comment by torginus 3 hours ago
It's full of features, half of which either do not work, or do not work as expected, or need some arcane domain knowledge to get them working. These features provide 'user-friendly' abstractions over raw stuff, like authing with various repos, downloading and publishing packages of different formats.
Underlying these tools are probably the same shell scripts and logic that we as devs are already familiar with. So often the exercise when forced to use these things is to get the underlying code to do what we want through this opaque intermediate layer.
Some people have resorted to fragile hacks, while others completely bypassed these proprietary mechanisms, and our build scripts are 'Run build.sh', with the logic being a shell or python script, which does all the requisite stuff.
And just like I mentioned in my prev post, SaaS software in this case might get tested more in general, but due to the sheer complexity it needs to support on the client side, testing every configuration at every client is not feasible.
At least the bugs we make, we can fix.
And while I'm sure some of this narrow-deep kinds of SaaS works well (I've had the pleasure to use Datadog, Tailscale, and some big cloud provider stuff tends to be great as well), that's not all there is that's out there and doesn't cover everything we need.
Comment by mattmanser 3 hours ago
You have bought a shallow but wide SaaS product, one with tons of features that don't get much development or testing individually.
You're then trying to use it like a deep but narrow product and complaining that your complex use case doesn't fit their OK-ish feature.
MS do this in a lot of their products, which is why Slack is much better than Teams, but lots of companies feel Teams is "good enough" and then won't buy Slack.
Comment by torginus 3 hours ago
I'm sure you have encountered the pattern where you write A that calls B that uses C as the underlying platform. You need something in A, and know C can do it, but you have to figure out how you can achieve it through B. For a highly skilled individual(or one armed with AI) , B might have a very different value proposition than one who has to learn stuff from scratch.
JS packages are a perfect illustration of these issues - there are tons of browser APIs that are wrapped by easy-to-use 'wrapper' packages that have unforeseen consequences down the road.
Comment by CuriouslyC 2 hours ago
On top of that, SaaS takes your power away. A bug could be quite small, but if a vendor doesn't bother to fix it, it can still ruin your life for a long time. I've seen small bugs get sandbagged by vendors for months. If you have the source code you can fix problems like these in a day or two, rather than waiting for some nebulous backlog to work down.
My experience with SaaS is that products start out fine, when the people building them are hungry and responsive and the products are slim and well priced. Then they get bloated trying to grow market share, they lose focus and the builders become unresponsive, while increasing prices.
At this point you wish you had just used open source, but now it's even harder to switch because you have to jump through a byzantine data exfiltration process.
Comment by andy_ppp 6 hours ago
Maybe write some tests and have great software development practices and most importantly people who care about getting the details right. Honestly there’s no reason for software to be like this is there? I don’t know how much off the shelf ERP software you have used but I wouldn’t exactly describe that as flawless and bug free either!
Comment by _pdp_ 14 hours ago
Sooner or later the CTO will be dictating which projects can be vibe coded and which ones make sense to buy.
SaaS benefits from network effects - your internal tools don't. So overall SaaS is cheaper.
The reality is that software license costs are a tiny fraction of total business costs. Most of it is salaries. The situation you are describing is the kind of death spiral many companies will get into, and that will be their downfall, not their salvation.
Comment by theshrike79 8 hours ago
Yes and no. If someone is controlling the SaaS selection, then this is true.
But I've seen startup phase companies with multiple slightly overlapping SaaS subscriptions (Linear + Trello + Asana for example), just because one PM prefers one over the other.
Then people have bought full-ass SaaS costing 50-100€/month for a single task it does.
I'd describe the "Use AI to make bespoke software" as the solution you use to round out the sharp edges in software (and licensing).
The survey SaaS wants extra money to connect to service Y, but their API is free? Fire up Claude and write the connector ourselves. We don't want to build and support a full survey tool, but API glue is fine.
Or someone is doing manual work because vendor A wants their data in format X and vendor B only accepts format Y. Out comes Claude and we create a tool that provides both outputs at the same time. (This was actually written by a copywriter on their spare time, just because they got annoyed with extra busywork. Now it's used by a half-dozen people)
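The kind of glue tool described here is genuinely small. A rough sketch, with invented file layouts and column names (the real formats would come from the two vendors' specs): read the shared export once, write it out in both required shapes.

```python
import csv
import json
import sys

# Hypothetical field layout: vendor A wants a CSV with these columns,
# vendor B wants a JSON-lines file. All names are made up for illustration.
VENDOR_A_COLUMNS = ["order_id", "customer", "amount_eur", "date"]

def convert(source_path: str) -> None:
    with open(source_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Output 1: vendor A's CSV layout (format X).
    with open("vendor_a.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=VENDOR_A_COLUMNS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(rows)

    # Output 2: vendor B's JSON-lines layout (format Y), from the same rows.
    with open("vendor_b.jsonl", "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

if __name__ == "__main__":
    convert(sys.argv[1])
```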
Comment by _pdp_ 6 hours ago
The reason software licenses are easier for the finance team to cut when things are not going well is that software does not have feelings, although we all know this isn't making a dent. Ultimately, software scales much better than people, and if the software is "thinking" it will scale infinitely better.
Building it all in-house will only happen for 2 reasons: 1. The problem is so specific that this is the only viable option and the quickest one (fair enough). 2. Developers and management do not have a real understanding of software costs.
Developers not understanding the real costs should be forgiven, because most of them are never in a position to make these types of decisions - i.e. they are not trained to. However, a manager or executive not understanding this is a sign of lack of experience. You really need to try to build a few medium-sized, non-essential software systems in-house to get an idea of how bad this can get and what a waste of time and money it really is - resources you could have spent elsewhere to affect the real bottom line.
Also, the lines of code that are written do not scale linearly with team sizes. The more code you produce, the bigger the problem - even with AI.
Ultimately a company wants to write as few lines of code as possible that extract as much value as feasibly possible.
Comment by physicsguy 7 hours ago
A lot of the SaaS target companies won't even have a CTO
Comment by tarsinge 8 hours ago
Comment by thisisit 10 hours ago
Building is only one part. Maintaining and using/running is another.
Onboarding for both technical and functional teams takes longer, as the ERP is different from other companies'. Feature creep is an issue - after all, who can say no to more bespoke features? Maybe roll CRM, reporting and analytics into one. Maintenance costs and priorities now become more important.
We have also explored AI agents in this area. Person-specific tasks are great use cases. Creating mockups and wireframes? AI does that well, and you still have a human in the loop. Enterprise-level tasks, like closing the books in a large company's ERP? AI makes a lot of mistakes.
Comment by technotony 14 hours ago
Comment by mikert89 14 hours ago
Comment by mattas 13 hours ago
Comment by OxfordOutlander 6 hours ago
To attempt to summarize the debate, there seem to be three prevailing schools of thought:
1. Status Quo + AI. SaaS companies will adopt AI and not lose share. Everyone keeps paying for the same SaaS plus a few bells and whistles. This seems unlikely given AI makes it dramatically cheaper to build and maintain SaaS. Incumbents will save on COGS, but have to cut their pricing (which is a hard sell to investors in the short term).
2. SaaS gets eaten by internal development (per OP). Unlikely in short/medium term (as most commenters highlight). See: complete cloud adoption will take 30+ years (shows that even obviously positive ROI development often does not happen). This view reminds me a bit of the (in)famous DropBox HN comment(1) - the average HN commenter is 100x more minded to hack and maintain their own tool than the market.
benzible (commenter) elsewhere said this well - "The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon."
This same logic explains why external boutique beats internal builds --
3. AI helps boutique-software flourish because it changes vendor economics (not buyer economics). Whereas previously an ERP for a specific niche industry (e.g. wealth managers who only work with Canadian / US cross-border clients) would have had to make do with a non-specific ERP, there will now be a custom solution for them. Before AI, the $20MM TAM for this product would have made it a non-starter for VC backed startups. But now, a two person team can build and maintain a product that previously took ten devs. Distribution becomes the bottleneck.
This trend has been ongoing for a while -- Toast, Procore, Veeva -- AI just accelerates it.
If I had to guess, I expect some combination of all three - some incumbents will adapt well, cut pricing, and expand their offering. Some customers will move development in house (e.g. I have already seen several large private equity firms creating their own internal AI tooling teams rather than pay for expensive external vendors). And there will be a major flourishing of boutique tools.
Comment by martinald 27 minutes ago
What _has_ surprised me though is just how many companies are (or are considering) building 'internal' tooling to replace SaaS they are not happy with. These are not the classic HN types whatsoever. I think when non technical people get to play with AI software dev they go 'wow so why can't we do everything like this'.
I think your point 3 is really interesting too.
But yes, the point of my article (hopefully) wasn't that SaaS is dead overnight, but that some thin/lower-"quality" products are potentially in real trouble.
People will still buy and use expertly designed products that are really nice to use. But a lot of B2B SaaS is not that; it's a slow, clunky mess that makes you want to scream!
Comment by andy_ppp 3 hours ago
Comment by risyachka 5 hours ago
This means if I sell it to your business for less than your salary, you will get fired and the business will use my version.
Why? Because mine will always be better, as 10 people work on it vs. you alone.
Internal versions will never be better or cheaper than SaaS (unless you are doing some tiny and very specific automation).
They can be better than the current solution - but it's only a matter of time before someone makes a SaaS equal to or better than what you do internally.
Sure, almost anything will be better and cheaper than HubSpot.
But with AI smaller CRMs that are hyper focused on businesses like yours will start popping up and eating its market.
Anything bigger than a toy project will always be cheaper/better to buy.
Comment by redwood 15 hours ago
Comment by returnInfinity 10 hours ago
Also, maintaining software is a pain.
Also, for perpetually small companies, it's now easy to build simple scripts to achieve some productivity gains.
Comment by bigtones 14 hours ago
Comment by lwhi 15 hours ago
Comment by nkotov 23 minutes ago
Comment by vlugovsky 1 hour ago
A couple of them mentioned that they plan to cancel subscriptions totaling more than $100k/year for the apps they will replace with that SaaS. According to them, they have many subscriptions they keep only because of one feature. Another issue is that their workflows become a real mess when they need to copy and paste data into multiple tabs. Custom-built internal tools seem like an obvious solution. Those who migrate to custom-built tools, however, will face the challenge of orchestrating their lifecycle and creating a consistent deployment workflow, but this is one of the challenges we are trying to solve at UI Bakery.
In my understanding, SaaS products that provide customers access to proprietary data are in a much better position than other SaaS platforms. HubSpot’s acquisition of Clearbit a couple of years ago now makes even more sense because it will help them retain some of their clients.
Comment by runako 1 hour ago
This practice predates even SaaS.
I read this article expecting to see a specific SaaS that was at risk, and the most I saw was "dashboards." (Which: dashboards frequently aggregate data, while the ongoing work of collection/maintenance/etc. is done by more complex applications.)
The thesis seems to be that companies can use coding agents to build one-off internal versions of SaaS apps like e.g. Workday or Salesforce or Slack or Jira or MixPanel or HubSpot. Which, if one could make such a thing for free and maintain it for free, why not?
Fortunately/unfortunately depending on where you sit, magical thinking isn't going to get Claude Code to build Workday, regardless of the quality of your AGENTS.md. Sometimes I wonder if the people who write these takes have spent any real time using Claude Code. It's good, but please be realistic.
Comment by martinald 32 minutes ago
They've outgrown the current (industry specific) products, arguably a long time ago. The discussions started like this:
1) They started building custom dashboards on top of data exports of said product with various AI tooling (a rough sketch of this step follows below).
2) This was extremely successful, as a non-developer "business" person could specify, build and iterate on the exact analytics. It's painful to work with a developer on this, because you need to iterate quickly once you see the data and realise where your thinking was wrong. Non-developers also really struggle to explain this in a way that makes sense from a developer's PoV.
3) The ERP system at play wanted a renewal price which was a big increase, plus an API deprecation. This would require a lot of existing (pre-"AI") integrations to be rewritten/redone.
4) They are now building an internal replacement. They would not have even considered this before AI agents.
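As a rough illustration of what step 1 looks like in practice (the export file and column names below are invented, not from the actual product), a handful of pandas lines over a CSV export is the sort of thing a non-developer can iterate on with an AI assistant:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export from the ERP; column names are invented for illustration.
orders = pd.read_csv("erp_export.csv", parse_dates=["order_date"])

# Monthly revenue by product line - the kind of view the vendor's
# built-in reporting often can't be bent into.
monthly = (
    orders
    .assign(month=orders["order_date"].dt.to_period("M"))
    .groupby(["month", "product_line"])["net_revenue"]
    .sum()
    .unstack(fill_value=0)
)

monthly.plot(kind="bar", stacked=True, figsize=(10, 5),
             title="Monthly revenue by product line")
plt.tight_layout()
plt.savefig("revenue_dashboard.png")
```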
FWIW this tool is not super complex, but it is extremely expensive (for what it does). It already has a load of limitations which are being worked round with various levels of horrible hacks.
There are a _lot_ of these kind of SaaS products about, for each industry. You never really hear about them.
Btw I use claude code nearly every day for many hours. Opus 4.5 has been a huge leap forward, I am blown away with how it can do 10-30 minute sessions without going wrong (Sonnet definitely needed constant babysitting). And the models/agent harnesses are only getting better. Claude Code isn't even a year old yet!
Comment by biql 1 hour ago
Comment by sublimefire 4 hours ago
Going back to the beginning, I think we just lack good tools for other cases where agents could be used. Copilot is not great, and ChatGPT alone lacks some features needed to use it for business as is. I think what will happen is that we will see a lot more new tooling pop up that relies on agents in niche markets, which will just amplify the power users. It will be another category of SaaS that companies will adopt.
Comment by ares623 15 hours ago
Comment by Imustaskforhelp 15 hours ago
Comment by sleazebreeze 14 hours ago
Comment by arealaccount 15 hours ago
- anything that requires very high uptime
- very high volume systems and data lakes
- software with significant network effects
- companies that have proprietary datasets
- regulation and compliance is still very important
Comment by Oarch 15 hours ago
Then it dawned on me how many companies are deeply integrating Copilot into their everyday workflows. It's the perfect Trojan Horse.
Comment by findjashua 15 hours ago
Comment by torginus 6 hours ago
For example, in RL, you have a train set, and a test set, which the model never sees, but is used to validate it - why not put proprietary data in the test set?
I'm pretty sure 99% of ML engineers would say this would constitute training on your data, but this is an argument you could drag out in courts forever.
Or alternatively - it's easier to ask for forgiveness than permission.
I've recently had an apocalyptic vision that one day we'll wake up and find that AI companies have produced an AI copy of every piece of software in existence - AI Windows, AI Office, AI Photoshop, etc.
Comment by sotrusting 15 hours ago
Comment by mc32 15 hours ago
Comment by blibble 15 hours ago
if they can get away with it (say by claiming it's "fair use"), they'll ignore corporate ones too
Comment by LPisGood 8 hours ago
Comment by blibble 34 minutes ago
it's an incentive to pretend as if you're following the contract, which is not the same thing
Comment by protocolture 15 hours ago
Comment by sotrusting 14 hours ago
Comment by protocolture 11 hours ago
Comment by yieldcrv 14 hours ago
despite all 3 branches of the government disagreeing with them over and over again
Comment by sotrusting 14 hours ago
Comment by Oarch 14 hours ago
There may very well be clever techniques that don't require directly training on the users' data. Perhaps generating a parallel paraphrased corpus as they serve user queries - one which they CAN train on legally.
The amount of value unlocked by stealing practically ~everyone's lunch makes me not want to put that past anyone who's capable of implementing such a technology.
Comment by bdangubic 14 hours ago
Comment by GCUMstlyHarmls 15 hours ago
Also, I wonder if the ToS covers "queries & interaction" vs "uploaded data" - I could imagine some tricky language in there that says we won't use your Word document, but we may at some time use the queries you put against it, not as raw corpus but as a second layer examining what tools/workflows to expand/exploit.
Comment by danielheath 13 hours ago
There’s a range of ways to lie by omission, here, and the major players have established a reputation for being willing to take an expansive view of their legal rights.
Comment by phendrenad2 14 hours ago
Comment by Aurornis 14 hours ago
There are claims all through this thread that “AI companies” are probably doing bad things with enterprise customer data but nobody has provided a single source for the claim.
This has been a theme on HN. There was a thread a few weeks back where someone confidently claimed up and down the thread that Gemini’s terms of service allowed them to train on your company’s customer data, even though 30 seconds of searching leads to the exact docs that say otherwise. There is a lot of hearsay being spread as fact, but nobody actually linking to ToS or citing sections they’re talking about.
Comment by phendrenad2 6 minutes ago
Comment by matt-p 14 hours ago
Comment by kankerlijer 14 hours ago
Comment by gaigalas 15 hours ago
Comment by Oarch 14 hours ago
Many businesses simply couldn't afford to operate without such an edge.
Comment by Aurornis 15 hours ago
None of the mainstream paid services ingest operating data into their training sets. You will find a lot of conspiracy theories claiming that companies are saying one thing but secretly stealing your data, of course.
Comment by Retric 15 hours ago
“How can I control whether my data is used for model training?
If you are logged into Copilot with a Microsoft Account or other third-party authentication, you can control whether your conversations are used for training the generative AI models used in Copilot. Opting out will exclude your past, present, and future conversations from being used for training these AI models, unless you choose to opt back in. If you opt out, that change will be reflected throughout our systems within 30 days.” https://support.microsoft.com/en-us/topic/privacy-faq-for-mi...
At this point, suggesting it has never happened and never will is wildly optimistic.
Comment by lwhi 15 hours ago
While this isn't used specifically for LLM training, it can involve aggregating insights from customer behaviour.
Comment by Aurornis 14 hours ago
Merely using an LLM for inference does not train it on the prompts and data, as many incorrectly assume. There is a surprising lack of understanding of this separation even on technical forums like HN.
Comment by lwhi 4 hours ago
However, let's say I record human interactions with my app; for example, when a user accepts or rejects an AI-synthesised answer.
This data can be used by me to influence the behaviour of an LLM via RAG or by altering application behaviour.
It's not going to change the weighting of the model, but it would influence its behaviour.
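A minimal sketch of that loop, assuming a naive keyword-overlap retriever stands in for a real embedding/vector store (all names here are illustrative): accepted answers get logged and pulled back into future prompts, with no change to the model's weights.

```python
# Accepted answers are logged and fed back into future prompts via retrieval,
# without touching the model itself. The retrieval below is deliberately naive.

accepted_answers: list[dict] = []  # each entry: {"question": ..., "answer": ...}

def record_feedback(question: str, answer: str, accepted: bool) -> None:
    """Store only the answers users accepted."""
    if accepted:
        accepted_answers.append({"question": question, "answer": answer})

def retrieve_examples(new_question: str, k: int = 3) -> list[dict]:
    """Pick the k past accepted answers whose questions share the most words."""
    words = set(new_question.lower().split())
    scored = sorted(
        accepted_answers,
        key=lambda ex: len(words & set(ex["question"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(new_question: str) -> str:
    """Prepend previously accepted Q&A pairs so the LLM imitates them."""
    examples = retrieve_examples(new_question)
    context = "\n\n".join(
        f"Q: {ex['question']}\nAccepted answer: {ex['answer']}" for ex in examples
    )
    return f"{context}\n\nQ: {new_question}\nA:"
```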
Comment by AuthAuth 15 hours ago
Comment by Aurornis 14 hours ago
Comment by nerdponx 15 hours ago
Comment by Aurornis 14 hours ago
What? That’s literally my point: Enterprise agreements aren’t training on the data of their enterprise customers like the parent commenter claimed.
Comment by TheRoque 14 hours ago
Comment by Aurornis 14 hours ago
Comment by TheRoque 9 hours ago
"We will train new models using data from Free, Pro, and Max accounts when this setting is on (including when you use Claude Code from these accounts)."
Comment by doctorpangloss 14 hours ago
Comment by Aurornis 14 hours ago
Comment by doctorpangloss 11 hours ago
“You can use an LLM to paraphrase the incoming requests and save that. Never save the verbatim request. If they ask for all the request data we have, we tell them the truth, we don’t have it. If they ask for paraphrased data, we’d have no way of correlating it to their requests.”
“And what would you say, is this a 3 or a 5 or…”
Everything obvious happens. Look closely at the PII management agreements. Btw OpenAI won’t even sign them because they’re not sure if paraphrasing “counts.” Google will.
Comment by popalchemist 14 hours ago
Many of the top AI services use human feedback to continuously apply "reinforcement learning" after the initial deployment of a pre-trained model.
https://en.wikipedia.org/wiki/Reinforcement_learning_from_hu...
Comment by Aurornis 14 hours ago
Inference (what happens when you use an LLM as a customer) is separate from training.
Inference and training are separate processes. Using an LLM doesn’t train it. That’s not what RLHF means.
Comment by popalchemist 13 hours ago
The big companies - take Midjourney, or OpenAI, for example - take the feedback that is generated by users, and then apply it as part of the RLHF pass on the next model release, which happens every few months. That's why they have the terms in their TOS that allow them to do that.
Comment by agumonkey 14 hours ago
Comment by leptons 15 hours ago
Nothing is really preventing this though. AI companies have already proven they will ignore copyright and any other legal nuisance so they can train models.
Comment by lioeters 15 hours ago
Comment by Archelaos 15 hours ago
Comment by leptons 10 hours ago
Comment by tick_tock_tick 15 hours ago
Comment by leptons 10 hours ago
Comment by Aurornis 14 hours ago
The enterprise user agreement is preventing this.
Suggesting that AI companies will uniquely ignore the law or contracts is conspiracy theory thinking.
Comment by leptons 10 hours ago
"Meta Secretly Trained Its AI on a Notorious Piracy Database, Newly Unredacted Court Docs Reveal"
https://www.wired.com/story/new-documents-unredacted-meta-co...
They even admitted to using copyrighted material.
"‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says"
https://www.theguardian.com/technology/2024/jan/08/ai-tools-...
Comment by cess11 5 hours ago
https://www.vice.com/en/article/meta-says-the-2400-adult-mov...
Comment by fzeroracer 15 hours ago
It's not really a conspiracy when we have multiple examples of high profile companies doing exactly this. And it keeps happening. Granted, I'm unaware of cases of this occurring currently with professional AI services, but it's basic security 101 that you should never let anything even have the remote opportunity to ingest data unless you don't care about the data.
Comment by james_marks 15 hours ago
This is objectively untrue? Giant swaths of enterprise software are based on establishing trust with approved vendors and systems.
Comment by Aurornis 14 hours ago
Do you have any citations or sources for this at all?
Comment by mulquin 15 hours ago
Comment by sotrusting 15 hours ago
Comment by protocolture 15 hours ago
Stealing implies the thing is gone, no longer accessible to the owner.
People aren't protected from copying in the same way. There are lots of valid exclusions, and building new non-competing tools is a very common exclusion.
The big issue with the OpenAI case is that they didn't pay for the books. Scanning them and using them for training is very likely to be protected. Similar case with the old Nintendo bootloader.
The "Corpo Fascists" are buoyed by your support for the IP laws that have thus far supported them. If anything, to be less "Corpo Fascist" we would want more people to have more access to more data. Mankind collectively owns the creative output of Humanity, and should be able to use it to make derivative works.
Comment by Oarch 14 hours ago
Isn't this a little simplistic?
If the value of something lies in its scarcity, then making it widely available has robbed the owner of a scarcity value which cannot be retrieved.
A win for consumers, perhaps, but a loss for the owner nonetheless.
Comment by protocolture 11 hours ago
Trying to group (Thing I don't like) with (Thing everyone doesn't like) is an old semantic trick that needs to be abolished. Taxonomy is good; if your arguments are good, you don't need emotively charged, imprecise language.
Comment by Oarch 10 hours ago
Comment by sotrusting 14 hours ago
You know a position is indefensible when you equivocation fallacy this hard.
> The "Corpo Fascists" are buoyed by your support for the IP laws
You know a position is indefensible when you strawman this hard.
> If anything, to be less "Corpo Fascist" we would want more people to have more access to more data. Mankind collectively owns the creative output of Humanity, and should be able to use it to make derivative works.
Sounds about right to me, but that you would state it while defending slop slingers is enough to give me whiplash.
> Scanning them and using them for training is very much likely to be protected.
Where can I find these totally legal, free, and open datasets all of these slop slingers are trained on?
Comment by protocolture 9 hours ago
No, it's quite defensible. And if that was equivocation, you can simply outline that you didn't mean to invoke the specific definition of stealing, but were just using it for its emotive value.
>You know a position is indefensible when you strawman this hard.
It's accurate. No one wants these LLM guys stopped more than other big fascistic corporations; plenty of oppositional noise out there for you to educate yourself with.
>Sounds about right to me, but why you would state that when defending slop slingers is enough to give me whiplash.
Cool, so if you agree all data should be usable to create derivative works then I don't see what your complaint is.
>Where can I find these totally legal, free, and open datasets all of these slop slingers are trained on?
You invoked "strawman" and then hit me with this combo strawman/non sequitur? Cool move <1 day old account, really adds to your 0 credibility.
I literally pointed out they should have to pay the same access fee as anyone else for the data, but once obtained, should be able to use it any way. Reading the comment explains the comment.
Unless, charitably, you are suggesting that if a company is legally able to purchase content, and use it as training data, that somehow compels them to release that data for free themselves?
Weird take if true.
Comment by anshulbhide 1 hour ago
They are also built on the basis of high gross margins of 80-90%. What happens to those margins when you start including variable token costs?
Comment by windex 5 hours ago
What I am seeing is that customers are delaying purchases of large expensive software. Prime example: SAP. ECC migrations to the SaaS model RISE/GROW-PublicCloud are stalling, same with on-prem S4 to RISE. I see a whole bunch of my customers instead retaining the core but modernizing the surrounding apps with intelligent custom apps without feature bloat. For now, SAP/Oracle/whatever remains the system of record, but the edges are going away. I guess the same is likely happening in other spaces.
This change is coming. Definitely. The current moats around SaaS will fall and the alternate ecosystem might not have moats at all.
Comment by mkagenius 1 hour ago
A tangent: I feel, again, unfortunately, that AI is going to divide society into people who can use the most powerful AI tools vs those who will only be using ChatGPT at most (if at all).
I don't know why I keep worrying about these things. Is it pointless?
Comment by tovej 1 hour ago
For software engineering, it is useless unless you're writing snippets that already exist in the LLM's corpus.
Comment by rglover 52 minutes ago
If I give something like Sonnet the docs for my JS framework, it can write code "in it" just fine. It makes the occasional mistake, but if I provide proper context and planning up front, it can knock out some fairly impressive stuff (e.g., helping me to wire up a shipping/logistics dashboard for a new ecom business).
That said, this requires me policing the chat (preferred) vs. letting an agent loose. I think the latter is just opening your wallet to model providers but shrug.
Comment by throwaway613745 3 hours ago
Our customers ask about AI features and it's a constant struggle to explain to them that they just aren't there yet.
Comment by gizmo 2 hours ago
- modest incremental gains in productivity
- society will remain mostly the same
- very few people will take advantage of the opportunities unlocked by AI
Comment by aszen 4 hours ago
This is inevitable, you can't rely on user licenses as a growth metric
Comment by mritchie712 1 hour ago
Comment by weitendorf 15 hours ago
Then this project lets you generate static sites from svelte components (matches protobuf structures) and markdown (documentation) and global template variables: https://github.com/accretional/statue
A lot of the SaaS ecosystem actually has rather simple domain logic and oftentimes doesn't even model data very well, or at least not in a way that matches their clients/users mental models or application logic. A lot of the value is in integrations, or the data/scaling, or the marketing and developer experience, or some kind of expertise in actually properly providing a simple interface to a complex solution.
So why not just create a compact universal representation of that? Because it's not so big a leap to go beyond eating SaaS to eating integrations, migration costs/bad moats, and the marketing/documentation/wrapper.
Comment by physicsguy 7 hours ago
When it comes to SaaS that's industry specific, I just don't see it'll be that much of a change any time soon. I've worked heavily in the engineering industry and the security requirements that get put upon anything are nuts. It is difficult to enter this market, ISO compliance is important, even being in the cloud is a barrier for some customers, and often the type that you have no choice but to contract with if you want to make a profit because of their outsized importance in the market.
When I speak to customers, they actually quite often have tried to build something themselves. Usually it's been an intern or grad trying to make their life easier. Often it's spreadsheet based, but some go as far as knocking up little Python web apps. In one company I interned in they had a shadow PHP app. They often have a small 'data science' team that has struggled to get access to the data they need. While they can often get something that does the barebones of the tasks, and can do it well, where they fall down is that they're vulnerable to security issues and can't navigate their internal company politics to get permission to host things in the cloud and make their life easy, plus they don't have the experience to know what's good practice. I don't see AI changing things that much in that.
Comment by gherkinnn 7 hours ago
As for Retool, I see the several waves of low/no-code products, the current one being LLMs, as repeated attempts to get non technical idea-guys to build their ideas. Where they all fail, and this is fundamental to the problem they're trying to solve, is that idea-guys' ideas crack when meeting reality. And neither Retool nor LLM fix that.
Comment by physicsguy 7 hours ago
This is definitely the hardest nut to crack. I worked on a product a while ago that needed to track maintenance periods on equipment quite carefully and then use that to filter data to provide insights about how it was performing. The 'user story' was light on detail. As we got into it, there were tons of questions about how to deal with source data that was often inconsistent or patchy, time zones became an issue because much of the data we received wasn't matched correctly to their local time (the customer's fault, not ours), and our ideas guy just couldn't deal with it - "make it work" - when ultimately these were business questions that needed answering, not just pure software tasks. AI is so sycophantic it'd just go off and write something.
Comment by lil-lugger 8 hours ago
Comment by _pdp_ 15 hours ago
Spreadsheets! They are everywhere. In fact, they are so abundant these days that many are spawned for a quick job and immediately discarded. The cost of having these spreadsheets is practically zero, so in many cases one may find themselves with hundreds if not thousands of them sitting around with no indication of ever being deleted. Spreadsheets are also personal and annoying, especially when forced upon you (since you did not make it yourself). Spreadsheets are also programming for non-programmers.
These new vibe-coded tools are essentially the new spreadsheets. They are useful... for 5 minutes. They are also easily forgettable. They are also personal (for the person who made them) and hated (by everyone else). I have no doubt in my mind that organisations will start using more and more of these new types of software to automate repetitive tasks, improve existing processes and so on, but ultimately, apart from perhaps just a few, none will replace existing, purpose-built systems.
Ultimately you can make your own pretty dashboard that nobody else will see or use because when the cost of production is so low your users will want to create their own version because they would think they could do better.
After all, how hard is it to prompt harder than the previous person?
Also, do you really think that SaaS companies are not deploying AI themselves? It is practically an arms race: the non-expert plus some AI vs 10 specialist developers plus their AIs doing this all day long.
Who is going to have the upper-hand?
Comment by theshrike79 7 hours ago
Nobody knows how they work, very few have the skills or time to edit them or check them. People just use them for the convenience.
The magic sauce of Excel is that it's free and scriptable (programmable, even). If you want a SaaS, you need to involve IT, Legal, your supervisors, and it's a whole-ass thing of contracts and shit.
Excel? It's just there.
There are so many stories and anecdotes of people being in stupid data entry jobs, getting bored and finding out their whole job can be automated with a single smartly done Excel sheet. Then they press F9 once per day and do something else for the rest of the time =)
And just because, my main gripe about Excel: there are no unit tests or validators for it. There's no easy way to programmatically confirm that Cell C5 has the same formula as C875
If (when?) people start AI-coding the things they used to use Excel for, we might get some actual tests and validation to confirm that what the code is supposed to do actually happens.
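As a rough illustration of the kind of validator that's missing, here's a crude sketch assuming the openpyxl package; the file name, sheet, and range are made up:

```python
# A crude sketch of the missing validator, assuming the openpyxl package;
# the file name, sheet and range are made up. Formulas in a column are
# grouped by "shape" (row numbers stripped), so =A5+B5 and =A875+B875
# compare equal and the odd ones out become visible.
import re
from openpyxl import load_workbook

def formula_shapes(path: str, sheet: str, column: str, rows: range) -> dict:
    ws = load_workbook(path)[sheet]     # data_only=False (default) keeps formulas
    shapes = {}
    for row in rows:
        value = ws[f"{column}{row}"].value
        if isinstance(value, str) and value.startswith("="):
            shape = re.sub(r"(?<=[A-Z])\d+", "", value)   # drop row numbers
            shapes.setdefault(shape, []).append(row)
    return shapes

if __name__ == "__main__":
    for shape, rows in formula_shapes("report.xlsx", "Sheet1", "C", range(2, 1001)).items():
        print(f"{len(rows):>4} cells share {shape}")
```

If every cell in the range is supposed to follow the same formula, anything that prints as a tiny group is a candidate for a broken copy-down.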
Comment by matwood 8 hours ago
I’d also add a number of the vibe tools tech adjacent people on my team have made are used and liked by the team. Even engineering likes them because it frees up their time to work on customer facing things.
Comment by hyperpape 15 hours ago
The only named product was Retool.
Comment by linsomniac 14 hours ago
It took me no more than 2 hours to put those together. We didn't renew our TeamRetro subscription.
Comment by lelanthran 7 hours ago
Okay, so two hours with an LLM vs maybe 2.5 days without an LLM in the best-case scenario (i.e. LLMs gave you a 10x boost. I would expect it to be less than that though, like maybe a 2x boost) - it sounds like it was always pretty cheap to replace the SaaS, but the business didn't do it.
TBH, the arguments were never "It would take too long to do ourselves", it was always "but then we'd have to maintain it ourselves".
The place I am consulting at now just moved (i.e. a month ago) from their in-house built ticketing system ($0/m as it had not needed maintenance for over a year) to Jira (~$2k/m).
In this specific case, it was literally 0 hours to avoid paying the SaaS, and they still moved, because they wanted some modern features (billing for time on support calls, etc) and figured that rather than update their in-house system to add support hours costing (a day, at most) they may as well move to a system that already had it.
(Joke's on them though - the Jira solution for support hours costing is not near the level of granularity they need, even with multiple paid plugins).
Once again, companies aren't using SaaS because it's cheaper or quicker; they could already quickly replace their SaaS with in-house.
Comment by linsomniac 1 hour ago
I'm not a frontend guy, I'm an operations guy that sometimes does some backends. So it's likely a solid 2.5 days for me to build the pair of these, probably more since I haven't touched JavaScript in over a decade.
Comment by matwood 8 hours ago
Comment by linsomniac 1 hour ago
Comment by mikeocool 1 hour ago
Comment by nop_slide 13 hours ago
Comment by jonathanharel 8 hours ago
Comment by blazespin 15 hours ago
The problem is, nobody knows how much and how fast AI will improve or how much it will cost if it does.
That uncertainty alone is very problematic and I think is being underestimated in terms of its impact on everything it can potentially touch.
For now though, I've seen a wall form in benchmarks like swe-rebench and swebench pro. Greenfield is expanding, but maintenance is still a problem.
I think AI needs to get much better at maintenance before serious companies can choose build over buy for anything but the most trivial apps.
Comment by zkmon 4 hours ago
Automation is not new. What's new is the capabilities of the models that can be assigned with some of the workflow steps. If these steps were served by SaaS companies so far, they will still serve it. Maybe they make it much cheaper and use a model themselves.
Comment by kkarpkkarp 56 minutes ago
SaaS products are Swiss-army-knife tools, and you don't need all of that.
Do you want a contact form on your site? Don't buy a whole WP forms plugin; ask AI for a tiny, well-aligned plugin which will display the form fields and process the input.
Do you need to A/B test your landing page? Just ask for another plugin which will switch page versions and track impressions (a sketch of how small that can be is below).
No need for HubSpot when you have Google Sheets + an AI-made plugin for this.
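A toy version of that "switch page versions and track impressions" idea, sketched in Python/Flask rather than as a WP plugin; every name here is made up and the counter is in-memory only:

```python
# Toy A/B switcher: assign a variant, keep it sticky via a cookie,
# and count impressions per variant. Not production code.
import random
from collections import Counter
from flask import Flask, make_response, request

app = Flask(__name__)
impressions = Counter()
VARIANTS = {"a": "<h1>Landing page A</h1>", "b": "<h1>Landing page B</h1>"}

@app.route("/")
def landing():
    # Returning visitors keep seeing the variant they got first.
    variant = request.cookies.get("ab") or random.choice(list(VARIANTS))
    impressions[variant] += 1
    resp = make_response(VARIANTS[variant])
    resp.set_cookie("ab", variant)
    return resp

@app.route("/stats")
def stats():
    return dict(impressions)   # e.g. {"a": 412, "b": 397}
```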
Comment by insane_dreamer 32 minutes ago
This is the key point. Sure, you don't have the chops to be able to replicate the SaaS product locally with Claude/Gemini, but you don't have to, because you're not trying to make a product that can handle N+1 workflows.
Comment by gijoeyguerra 15 hours ago
Comment by mikert89 14 hours ago
With AI, that equation is now changing. I anticipate that within 5 years autonomous coding agents will be able to rapidly and cheaply clone almost any existing software, while also providing hosting, operations, and support, all for a small fraction of the cost.
This will inevitably destroy many existing businesses. In order to survive, businesses will require strong network effects (e.g. marketplaces) or extremely deep data/compute moats. There will also be many new opportunities created by the very low cost of software. What could you build if it were possible to create software 1000x faster and cheaper?"
Paul Buchheit
Comment by hurturue 15 hours ago
Comment by agumonkey 14 hours ago
Comment by dangus 1 hour ago
> But my key takeaway would be that if your product is just a SQL wrapper on a billing system, you now have thousands of competitors: engineers at your customers with a spare Friday afternoon with an agent.
I think it’s pertinent to point out that a lot of SaaS products are aimed at businesses and individuals who don’t have engineers at all.
AI agents aren’t going to disrupt the SaaS market for software intended for businesses like small business retail where the owners and staff have minimal technical knowledge and zero extra time.
I also think that some SaaS products are so cheap that even an hour of effort is too much. Is it worth a month of effort to vibecode a Dropbox alternative? Even some pretty complicated software that is untouchable by agents and engineers' side projects, like the Microsoft 365 suite and Jira, is priced at under $20/month/user.
On the other hand, some entrenched solutions that aren’t all that complicated could be finding themselves with new, smaller competitors.
Comment by jonathanharel 8 hours ago
Comment by dylanzhangdev 12 hours ago
Comment by malakai521 6 hours ago
Comment by dboreham 1 hour ago
Comment by shermantanktop 14 hours ago
Comment by raw_anon_1111 3 hours ago
The first company was a low margin business that sent home health care nurses to special needs kids and reimbursements came from Medicaid.
I was hired by the new director to modernize their aging in-house Electronic Medical System, built on FoxPro in 1999 and running on SQL Server 2000 - in 2016.
They had two “developers” who had been there for 10 and 20 years respectively and who only knew SQL Server and FoxPro.
They also had some other software.
After doing some assessments of the situation, my report to the director and the CTO was that this company should not try to support a software development department and hire new people. Their margins are too small to be competitive or to keep people.
I suggested we outsource everything to other consulting companies - not staff augmentation. Let the consulting company do the entire implementation based on a Statement of Work.
The role of the two “developers” changed to “data analyst”. Even with AI, I would have said the same thing today. Not every company needs to try to do software engineering. Every company does need to understand its data. [1]
The next company was a startup. I was adamant about blocking every developer who suggested any internal tool that we could get a well-known SaaS to do, or where AWS had a service, when it wasn't directly related to our product. To use the cliche - anything “that didn't make the beer taste better”. My opinion wouldn't have changed with AI.
The last thing I want is a bunch of bespoke internal vibe coded AI Slop that we have to support that is not in service to the product when we can find a reputable third party product.
And no that doesn’t mean I am going to trust some unknown one person SaaS company.
[1] 18 months into the job, I walked into the director’s office and told him, “let’s be honest, you all don’t need me anymore”. I purposefully put myself out of job. But boy did I have a story to tell during behavioral interviews at my next job at the startup and my interview for my job at BigTech after I left the startup.
Comment by Elizer0x0309 4 hours ago
Comment by Mistletoe 1 hour ago
Comment by NikolaNovak 15 hours ago
At the same time, to the core theme of the article - do any of us think a small sassy SaaS like Bingo card creator could take off now? :-)
https://training.kalzumeus.com/newsletters/archive/selling_s...
Comment by roncesvalles 8 hours ago
Secondly, the way this person describes "agents" is not rooted in reality:
>Agents don't leave. And with a well thought through AGENTS.md file, they can explain the codebase to anyone in the future.
>What's going to be difficult to predict is how quickly agents can move up the value chain. I'm assuming that agents can't manage complex database clusters - but I'm not sure that's going to be the case for much longer.
What in the world. And of course he's selling a course. This is the same business as those people sitting in Dubai selling $6000 options trading courses to anyone who believes their bullshit. The grifter economy around AI is in full swing like it was around blockchain/crypto in 2017-2020.
Comment by m-hodges 14 hours ago
Comment by firemelt 8 hours ago
Comment by rester324 8 hours ago
How wrong the author is about that! IMO As soon as the bubble bursts, which is already evident and imminent, these agents will raise their subscription fees to ridiculous amounts. And when that happens, entire organizations will abandon them and return to good ol' human engineering
Comment by theshrike79 7 hours ago
You're 100% right though: you shouldn't build your business on top of an AI service without a plan for what to do _when_ it either goes away or just prices itself out. It's a massive bubble where money is just moving in a circle.
Comment by lelanthran 5 hours ago
It's worse than you state; a primary premise of the current AI expansion and investment, continuously repeated, is that computational resources are increasing for the same price-point.
So tell me, when we have these hardware gains in 5 years, why the fuck would I pay for fractionally better output on a cloud subscription when I could run on-prem GPUs for a fraction of what the actual subscription would be, giving me 24x7 agents working with no limit?
Current highly-invested AI providers are token providers - they are sitting at the bottom of the value-chain. They are trying to climb the value-chain, but the real value is in the models, and since all models have mostly converged on the same performance, running your own gives a very tiny drop in value.
Their problem is that, even at $200/m, it is not feasible to offer 24x7 access to the models - that max subscription has tiny limits: on that subscription you most definitely are not enabling even a single agent to run f/time.
You might get 3 hours per day, maybe. Best case, you get half a working day (4 hours), for five days. So double the subscription, and you're still looking at only a single agent that you can run f/time for $400/m.
For collaborating agents, you'll have, what, 4 - 5 diving into a large task in parallel? That's maybe $1600/m - $2000/m to solve tasks. You still have to pay the (increasingly incompetent) human operator to manage them.
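Written out, that back-of-envelope arithmetic looks roughly like this (the figures are the assumptions above, not provider pricing):

```python
# The back-of-envelope maths above, written out. All numbers are the
# assumptions from this comment, not actual provider pricing.
SUBSCRIPTION = 200      # $/month for the top-tier plan
HOURS_COVERED = 4       # usable agent-hours per working day on that plan (best case)
WORKDAY = 8             # hours a "full-time" agent would need to run
AGENTS = 5              # collaborating agents on a large task

subs_per_agent = WORKDAY / HOURS_COVERED            # 2 subscriptions per agent
one_agent = subs_per_agent * SUBSCRIPTION           # $400/month
team = AGENTS * one_agent                           # $2,000/month

print(f"one full-time agent: ${one_agent:,.0f}/m, {AGENTS} agents: ${team:,.0f}/m")
```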
That sort of workflow is what is being sold as the vision, and yet that will only be economically viable if you're self-hosting your own model.
Comment by jackschultz 15 hours ago
Summary is that for agents to work well they need clear vision into all things, and putting the data behind a GUI or a poorly maintained CLI is a hindrance. Combined with how structured CRUD apps are and how the agents can for sure write good CRUD apps, there's no reason not to have your own. Wins all around: not paying for it, having a better understanding of processes, and letting agents handle workflows.
Comment by henning 15 hours ago
SaaS maintenance isn't about upgrading packages, it's about accountability and a point of contact when something breaks along with SLAs and contractual obligations. It isn't because building a kanban board app is hard. Someone else deals with provisioning, alerts, compliance, etc. and they are a real human who cannot hallucinate that the issue has been fixed when it hasn't. Depending on the contract and how it is breached, you can potentially take them to court and sue them to recover money lost as a result of their malpractice. None of that applies to a neural network that misreads the alert, does something completely wrong, then concludes the issue is fixed the way the latest models constantly do when I use them.
Comment by scotty79 15 hours ago
1. I had two text documents containing plain text to compare. One with minor edits (done by AI).
2. I wanted to see what AI changed in my text.
3. I tried the usual diff tools. They diffed line by line and the result was terrible. I searched Google for "text comparison tool but not line-based"
4. As second search result it found me https://www.diffchecker.com/ (It's a SaaS, right?)
5. Initially it did an equally bad job, but I noticed it had a switch "Real-time diff" which did exactly what I wanted.
6. I got curious what this algorithm is. So I asked Gemini with "Deep Research" mode: "The website https://www.diffchecker.com/ uses a diff algorithm they call real-time diff. It works really good for reformatted and corrected text documents. I'd like to know what is this algorithm and if there's any other software, preferably open-source that uses it."
7. As a first suggestion it listed diff-match-patch from Google. It had Python package.
8. I started Antigravity in a new folder, ran uv init. Then I prompted the following:
"Write a commandline tool that uses https://github.com/google/diff-match-patch/wiki/Language:-Py... to generate diff of two files and presents it as side by side comparison in generated html file."
[...]
"I installed the missing dependance for you. Please continue." - I noticed it doesn't use uv for installing dependencies so I interrupted and did it myself.
[...]
"This project uses uv. To run python code use
uv run python test_diff.py" - I noticed it still doesn't use uv for running the code so its testing fails.
[...]
"Semantic cleanup is important, please use it." - Things started to show up but it looked like linear diff. I noticed it had a call to semantic cleanup method commented out so I thought it might help if I push it in that direction.
[...]
"also display the complete, raw diff object below the table" - the display of the diff still didn't seem good so I got curious if it's the problem with the diffing code or the display code
[...]
"I don't see the contents of the object, just text {diffs}" - it made a silly mistake by outputting template variable instead of actual object.
[...]
"While comparing larger files 1.txt and 2.txt I notice that the diff is not very granular. Text changed just slightly but the diff looks like deleting nearly all the lines of the document, and inserting completely fresh ones. Can you force diff library to be more granular?
You seem to be doing the right thing https://github.com/google/diff-match-patch/wiki/Line-or-Word... but the outcome is not good.
Maybe there's some better matching algoritm in the library?" - it seemed that while it worked decently on the small tests that Antigravity made itself, on the texts that I actually wanted to compare it was still terrible, although I'd seen glimpses of hope because some spots were diffed more granularly. I inspected the code and it seemed to be doing character-level diffing as per the diff-match-patch example. While it processed this prompt I was searching for a solution myself by clicking around the diff-match-patch repo and demos. I found a potential solution by adjusting cleanup, but it actually solved the problem by itself by ditching the character-level diffing (which I'm not sure I would have come up with at this point). The diffed object looked great, but as I compared the result to https://www.diffchecker.com/ output it seemed that they did one minor thing about formatting better.
[...]
"Could you use rowspan so that rows on one side that are equivalent to multiple rows on the other side would have same height as the rows on the other side they are equivalent to?" - I felt very clumsily trying to phrase it and I wasn't sure if Antigravity will understand. But it did and executed perfectly.
I didn't have to revert a single prompt and interrupted just two times at the beginning.
After a while I added watch functionality with a single prompt:
"I'd like to add a -w (--watch) flag that will cause the program to keep running and monitor source files to diff and update the output diff file whenever they change."
[...]
So I basically went from having two very similar text files and knowing very little about diffing to knowing a bit more and having my own local tool that lets me compare texts in a satisfying manner, with beautiful highlighting and formatting, that I can extend or modify however I like, and that mirrors the interesting part of the functionality of the best tool I found online. And all of that in a time span shorter than it took me to write this comment (at least the coding part was; I followed a few wrong paths during my search for a bit).
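For anyone curious, the core of such a tool is only a handful of library calls. This is a minimal sketch using the diff-match-patch Python package, not the code Antigravity generated, and it produces a simple inline HTML diff rather than the side-by-side rowspan layout described above:

```python
# Minimal sketch of the core of such a tool, using Google's diff-match-patch
# Python package (pip install diff-match-patch). Emits an inline HTML diff.
import sys
from diff_match_patch import diff_match_patch

def diff_to_html(old_path: str, new_path: str) -> str:
    old = open(old_path, encoding="utf-8").read()
    new = open(new_path, encoding="utf-8").read()
    dmp = diff_match_patch()
    diffs = dmp.diff_main(old, new)    # character-level diff, ignores line structure
    dmp.diff_cleanupSemantic(diffs)    # the "semantic cleanup" step mentioned above
    return dmp.diff_prettyHtml(diffs)  # HTML with insert/delete highlighting

if __name__ == "__main__":
    print(diff_to_html(sys.argv[1], sys.argv[2]))
```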
My experience tells me that even if I could replicate what I did today (staying motivated is an issue for me), it would most likely be a multi-day project full of frustration, hunting small errors and venturing down wrong paths. Python isn't even my strongest language. Instead it was a pleasant and fun evening with occasional jaw drops, feeling blessed that I live in the SciFi times I read about as a kid (and adult).
Comment by giancarlostoro 15 hours ago
Comment by scotty79 9 hours ago
Comment by austinjp 15 hours ago
Um. I don't want to be That Guy (shouting at clouds, or at kids to get off my lawn or whatever) but ... what "usual diff" tools did you use? Because comparing two text files with minor edits is exactly what diff-related tools have excelled at for decades.
There is word-level diff, for example. Was that not good enough? Or delta [0] perhaps?
Comment by scotty79 9 hours ago
None of them were remotely close to what I wanted.
At first glance delta is also doing the completely standard thing. I can't rule out that there are some flags to coerce it into doing what I wanted, but searching for tools and flag combinations is not that fun and success is not guaranteed. Also, I found a (SaaS) tool and a flag there that did exactly what I wanted. I just decided to make my own local tool afterwards for better control. With an agent.
> Because comparing two text files with minor edits is exactly what diff-related tools have excelled at for decades.
True. And I ended up using this excellent Google diff-match-patch library for the diffing. AI didn't write me a diffing algorithm, just everything around it to make a convenient tool for me. Most tools are for source code and treat line endings as a very strong structural feature of the compared texts. I needed something that doesn't care about line endings all that much and can show me which characters changed between one line of one file and the five lines it was split into in the second file.
Comment by MangoToupe 15 hours ago
Oh, child.... building is easy. Coordinating maintenance of the tool across a non-technical team is hell.
Comment by toomuchtodo 15 hours ago
Comment by lwhi 15 hours ago
Corporations think in terms of risk.
Second only to providing a useful function, a successful SaaS app will have been built to mitigate risk well.
It's not going to be easy to meet these requirements without prior knowledge and experience.
Comment by zenethian 9 hours ago
Comment by innagadadavida 12 hours ago
Comment by yieldcrv 14 hours ago
Comment by yellow_lead 15 hours ago
> The signals I'm seeing
Here are the signals:
> If I want an internal dashboard...
> If I need to re-encode videos...
> This is even more pronounced for less pure software development tasks. For example, I've had Gemini 3 produce really high quality UI/UX mockups and wireframes
> people really questioning renewal quotes from larger "enterprise" SaaS companies
Who are "people"?
Comment by ares623 15 hours ago
Comment by samdoesnothing 15 hours ago
Is the author a competent UX designer who can actually judge the quality of the UX and mockups?
> I write about web development, AI tooling, performance optimization, and building better software. I also teach workshops on AI development for engineering teams. I've worked on dozens of enterprise software projects and enjoy the intersection between commercial success and pragmatic technical excellence.
Nope.
Comment by oldandboring 3 hours ago
Comment by bgwalter 15 hours ago
Comment by MyFirstSass 14 hours ago
It's not the Hacker News I knew even 3 years ago anymore, and I'm seriously close to just ditching the site after 15+ years of use.
I use AI heavily, but every day there are crazily optimistic, almost manic posts about how AI is going to take over various sectors that are completely ludicrous - and they are all filled with comments from bizarrely optimistic people who have seemingly no knowledge of how software is actually run or built, i.e. that it's the human organisational, research and management elements that are the hard parts, something AI can't do in any shape or form at the moment for any complex or even small company.
Comment by bybuybye 8 hours ago
Comment by Starlevel004 15 hours ago