Amazon's AI boom is creating a mess of duplicate tools and data inside the company
Posted by cebert 1 day ago
Comments
Comment by belval 18 hours ago
I am a big believer in Amazon's "1 > 2 > 0": one solution is better than two competing ones, but two are still much better than none. The article also misunderstands how Amazon works. If you want to move fast, you can't wait for an SVP six levels above you to approve every effort. Instead, you build something and then run it up the chain to have it adopted at the team/org level.
Comment by lqstuart 20 hours ago
We have made no revenue, let alone profit, from any AI feature. However, some curiously underqualified people have been hired into new “AI”-themed roles with seven- or eight-figure comp, and we seem to be preparing for major layoffs in the next 30-60 days. Presumably those new roles will be safe.
Comment by CharlieDigital 1 day ago
Sometimes I'll catch the agent writing the same logic twice in the same PR (recent example: conversion of MIME type to file extension for images). At our scale, it's still possible to catch these and have the duplicates pulled out or replaced with existing helpers.
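To illustrate the kind of thing I mean, a hypothetical C# sketch (the helper names and mappings are invented, not our actual code):

    // Helper the agent added in one file of the PR
    static string ExtensionFor(string mimeType) => mimeType switch
    {
        "image/jpeg" => ".jpg",
        "image/png" => ".png",
        "image/gif" => ".gif",
        _ => ".bin"
    };

    // Same logic, regenerated elsewhere in the same PR,
    // with a subtle drift in the jpeg case
    static string MimeToExt(string mime) => mime switch
    {
        "image/jpeg" => ".jpeg",
        "image/png" => ".png",
        _ => ""
    };

Each copy is fine in isolation; the problem is the quiet drift between them.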
I've been mulling whether microservices make more sense now as isolation boundaries for teams. If a team duplicates a capability internally within that boundary, is it a big deal? Not clear to me.
Comment by appplication 23 hours ago
For example, if AI generates two copies of a utility function that do the same thing, that's not ideal, but it's also fairly minor in terms of tech debt. As long as all behaviors introduced by new code are comprehensively tested, I think some level of code duplication becomes less significant.
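As a sketch of what I mean (invented names, xUnit assumed): even if two duplicate helpers slip in, a test that exercises both pins the shared behavior and catches drift:

    using Xunit;

    public class SlugTests
    {
        // Two agent-generated duplicates of the same utility
        static string Slugify(string s) => s.Trim().ToLowerInvariant().Replace(' ', '-');
        static string ToSlug(string s) => s.Trim().ToLowerInvariant().Replace(' ', '-');

        [Theory]
        [InlineData("Hello World", "hello-world")]
        public void Duplicates_Agree(string input, string expected)
        {
            // If either copy diverges later, this fails loudly
            Assert.Equal(expected, Slugify(input));
            Assert.Equal(expected, ToSlug(input));
        }
    }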
It's also something that can be caught and cleaned up fairly easily by an agent tasked to look for it, either as part of review or in regular tech-debt reduction sessions.
A lot of this is adapting to a new normal for me, and there is some level of discomfort here. But I think: if I were the director of an engineering org and I learned that different teams under me had a number of duplicated utility functions (or even competing services in the same niche), would this bother me? Would it be a priority to fix? I'd prefer it weren't so, but it probably wouldn't rise to the level of needing specific prioritization unless it impacted velocity and/or stability.
Comment by CharlieDigital 20 hours ago
In the majority of cases, I think this is harmless. In C#, for example, we have agents repeatedly generating switch expressions that map file extensions to MIME types.
This is harmless, since there's no business logic involved.
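For illustration, the generated code is usually a small pure mapping like this (a representative sketch, not our actual code):

    static string MimeTypeFor(string ext) => ext.ToLowerInvariant() switch
    {
        ".jpg" or ".jpeg" => "image/jpeg",
        ".png" => "image/png",
        ".webp" => "image/webp",
        _ => "application/octet-stream" // safe default
    };

Worst case, two copies of a static table disagree on an entry, and a test catches that quickly.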
But we also have some cases where phone-number processing gets semi-duplicated. That's more nebulous: the duplicate looked isolated, but it still had some overlapping logic. What if we change vendors in the future and need a different format? We'd have to find every place it occurs, and there's no longer a single entry point or a specific pattern to search for.
The agents themselves may or may not find all the cases, since they rely on `grep` and don't have semantic understanding of the code. What if we ask one to refactor and its `grep` misses some pattern?
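A contrived C# example of the failure mode (names invented): a `grep` for the first helper's name or its format string never surfaces the second variant, even though they implement the same rule:

    using System.Linq;

    // Variant A: easy to find by grepping for "FormatPhone"
    static string FormatPhone(string digits) =>
        $"({digits[..3]}) {digits[3..6]}-{digits[6..]}";

    // Variant B: same output, but no shared name or string
    // literal for a text search to latch onto
    static string ToDisplayNumber(string raw)
    {
        var d = new string(raw.Where(char.IsDigit).ToArray());
        return "(" + d.Substring(0, 3) + ") " + d.Substring(3, 3) + "-" + d.Substring(6);
    }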
Still uneasy, but I've yet to feel the pain on this one.
Comment by mikodin 22 hours ago
We still run into the same issues this brings about in the first place, AI or no AI. When requirements change, will it update both functions? If it rewrote them because it didn't see they existed in the first place, probably not. And there will likely be slight variations in function/component names, so a clean grep won't find everything that needs changing.
It may not impact velocity or stability in the moment, but in six months or a year it likely will: the classic tech-debt trope.
I have no solution for this; it's definitely a tricky balance, and one we've been struggling with in human-written code since the dawn of the field.
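A contrived sketch of that failure (all names invented): the requirement changes, the agent updates the copy it can find, and the near-duplicate silently keeps the old behavior:

    using System.Text.RegularExpressions;

    // Copy 1: updated when we started accepting a "+1" country prefix
    static bool IsValidPhone(string s) =>
        Regex.IsMatch(s, @"^(\+1)?\d{10}$");

    // Copy 2: named differently, never found, still rejects "+1" numbers
    static bool PhoneLooksValid(string s) =>
        Regex.IsMatch(s, @"^\d{10}$");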
Comment by LeCompteSftware 1 day ago
With the obvious preface of "thoughtlessly adding." Of course it's not a real law; it's a tongue-in-cheek observation about how things have tended to go wrong empirically, and it highlights the unique complexity of adding manpower to software versus physical industry. Regardless, it has been endlessly frustrating to watch people push AI/agentic development by highlighting short-term productivity, without making any real attempt to reckon with these serious long-term technical management problems. Just apply a thick daub of Claude concealer, and ship.
Maybe people are right about the short-term productivity making it worthwhile. I don't know, and you don't either: not enough time has elapsed to falsify anything. But it sure seems like Fred Brooks was correct about the long-term technical management problems: adding Claudes to a late C compiler makes it later.
> The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
https://www.anthropic.com/engineering/building-c-compiler