Multi-Agentic Software Development Is a Distributed Systems Problem
Posted by tie-in 7 days ago
Comments
Comment by mrothroc 6 days ago
My approach has been more pragmatic than theoretical: I break work into sequential stages (plan, design, code) with verification gates. Each gate has deterministic checks (compile, lint, etc) and an agentic reviewer for qualitative assessment.
Collectively, this looks like a distributed system. The artifacts reflect the shared state.
The author's point about external validation converting misinterpretations into detectable failures is exactly what I've found empirically. You can't make the agent reliable on its own, but you can make the protocol reliable by checking at every boundary.
The deterministic gates provide a hard floor of guarantees. The agentic gates provide soft probabilistic assertions.
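In sketch form (all names are mine, and `compile()` merely stands in for real build/lint/test checks, while `reviewer` stands in for the agentic reviewer call):

```python
def deterministic_gate(code: str) -> bool:
    # Hard floor: the artifact must at least parse. A real gate would also
    # run the build, linters, and the test suite.
    try:
        compile(code, "<artifact>", "exec")
        return True
    except SyntaxError:
        return False

def agentic_gate(code: str, reviewer) -> bool:
    # Soft probabilistic assertion: `reviewer` is a stand-in for an LLM call
    # returning a 0-1 confidence; the 0.7 threshold is a tunable assumption.
    return reviewer(code) >= 0.7

def run_gate(code: str, reviewer) -> bool:
    # Deterministic checks run first: they're cheap, and a hard failure
    # short-circuits the expensive qualitative review.
    return deterministic_gate(code) and agentic_gate(code, reviewer)
```

The ordering is the point: the hard floor filters for free before any tokens are spent on the soft check.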
I wrote up the data and the framework I use: https://michael.roth.rocks/research/trust-topology/
Comment by peterbell_nyc 6 days ago
Small-model and (where still required) human-in-the-loop steps for deterministic workflows can solve a surprisingly large number of problems and don't depend on the models being consistent or never failing.
Just invest heavily in adversarial agents and quality gates and apply transforms on intermediate artifacts that can be validated for some dimensions of quality to minimize drift.
Comment by mrothroc 5 days ago
It's amazing the power a simple workflow with automatic gate enforcement brings to agentic coding.
Comment by binyu 6 days ago
Example: synchronization in naturally async environments, consensus, failure-safe systems, etc.
Comment by mrothroc 6 days ago
But I think the coordination problem is subtler than version control implies. In the (plan, design, code) pipeline they aren't collaborating on the same artifact. They're producing different artifacts that are all expressions of the same intent in different spaces: a plan in natural language, a design in a structured spec, code in a formal language.
Different artifacts, which are different projections at different Chomsky levels, but all of the same thing: user intent.
The coordination challenge is keeping these consistent with each other as each stage transforms the prior projection into the new one. That's where the gates earn their place: they verify that each transformation preserves the intent from the previous stage.
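A toy sketch of that shape (the stages and gates here are mine, nothing like the real system): each gate sees both adjacent projections and checks the later one against the earlier one.

```python
def pipeline(intent, stages, gates):
    # Each stage projects the previous artifact into a new space; the gate
    # after each stage checks the transformation against the prior artifact.
    artifact = intent
    for stage, gate in zip(stages, gates):
        candidate = stage(artifact)
        if not gate(artifact, candidate):
            raise ValueError(f"{stage.__name__} did not preserve intent")
        artifact = candidate
    return artifact

# Toy projections: prose intent -> plan dict -> design spec -> code string.
def plan(intent):   return {"goal": intent}
def design(plan):   return {"spec": plan["goal"].upper()}
def code(design):   return f"# implements {design['spec']}"

gates = [
    lambda i, p: p["goal"] == i,                  # plan restates the intent
    lambda p, d: d["spec"].lower() == p["goal"],  # design matches the plan
    lambda d, c: d["spec"] in c,                  # code references the spec
]
module = pipeline("add two numbers", [plan, design, code], gates)
```

Each gate only ever compares two adjacent projections, which is what keeps drift from compounding silently across the whole chain.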
Comment by bmitc 6 days ago
Comment by mrothroc 5 days ago
It's grown over time to be a full MCP and CLI with stages and gates defined in YAML. I was thinking about open sourcing it but since the code grew organically I would need to do extensive cleanup to make it presentable.
But I do walk through the process on page 9: https://michael.roth.rocks/research/trust-topology/#9
Comment by canarias_mate 5 days ago
Comment by mccoyb 6 days ago
The post acts as if agents were highly complex but well-specified deterministic functions. Perhaps, under certain temperature limits, this is approximately true ... but that's a serious restriction, and it's glossed over.
For instance, perhaps the most striking constraint of FLP is that it is about deterministic consensus ... the post glosses over this:
> establishes a fundamental impossibility result dictating consensus in any asynchronous distributed system (yes! that includes us).
No, not any asynchronous distributed system, that might not include us. For instance, Ben-Or (1983, https://dl.acm.org/doi/10.1145/800221.806707) (as a counterexample to the adversary in FLP) essentially says "if you're stuck, flip a coin". There's significant work studying randomized consensus (yes, multi-agents are randomized consensus algorithms): https://www.sciencedirect.com/science/article/abs/pii/S01966...
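For intuition, here's a toy, fault-free synchronous simulation of Ben-Or's protocol (my own sketch, not the paper's pseudocode): each round, a process proposes a value only if a strict majority reported it, decides on f+1 matching proposals, adopts on seeing at least one, and otherwise flips a local coin.

```python
import random

def ben_or(values, f, max_rounds=100, seed=0):
    # Toy synchronous, fault-free simulation of Ben-Or's randomized binary
    # consensus. `values` holds each process's initial 0/1 vote; `f` is the
    # fault bound the decision rule tolerates.
    rng = random.Random(seed)
    n = len(values)
    decided = [None] * n
    for _ in range(max_rounds):
        # Phase 1: report values; propose v only if a strict majority holds v.
        counts = {0: values.count(0), 1: values.count(1)}
        proposals = [v if counts[v] * 2 > n else None for v in values]
        # Phase 2: decide on f+1 matching proposals, adopt on at least one,
        # otherwise flip an independent local coin -- the escape from FLP.
        pcounts = {0: proposals.count(0), 1: proposals.count(1)}
        next_values = []
        for i in range(n):
            for v in (0, 1):
                if pcounts[v] >= f + 1:
                    decided[i] = v
            if pcounts[0] > 0:
                next_values.append(0)
            elif pcounts[1] > 0:
                next_values.append(1)
            else:
                next_values.append(rng.randint(0, 1))  # the coin flip
        values = next_values
        if all(d is not None for d in decided):
            break
    return decided
```

When there's no majority the adversary can't stall forever, because the coins eventually break the tie with probability 1.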
Now, in Ben-Or, the coins have to be independent sources of randomness, and that's obviously not true in the multi-agent case.
But it's very clear that the language in this post seems to be arguing that these results apply without understanding possibly the most fundamental fact of agents: they are probability distributions -- inherently, they are stochastic creatures.
Difficult to take seriously without a more rigorous justification.
Comment by gopiandcode 6 days ago
At the lowest level of abstraction, LLMs are just matrix multiplication. Deterministic functions of their inputs. Of course, we can argue on the details and specifics of how the peculiarities of inference in practice lead to non-deterministic behaviours but now our model is being complicated by vague aspects of reality.
One convenient way of sidestepping these is to model them as random functions, sure. I wouldn't go as far as to say they are "inherently stochastic creatures". Maybe that's the case, but you haven't really given substantial evidence to justify that claim.
At a higher level of abstraction, one possible model of LLMs is as deterministic functions of their inputs again, but now as functions of token streams or higher abstractions like sentences rather than the underlying matrix multiplication. In this case we expect LLMs to produce roughly consistent outputs given the same prompt, and again we can apply deterministic theorems.
I guess my central claim is that there hasn't been a salient argument made as to why the randomness here is relevant for consensus. Maybe the models exhibit some variability in their output, but in practice does this substantially change how they approach consensus? Can we model this as artefacts of how they are initialised rather than some inherent stochasticity? Why not? It feels like randomness is being introduced as a sort of magic "get out of jail free" card here.
Just my two cents I suppose.
Comment by mccoyb 6 days ago
There's no peculiarity to discuss, that's how they work. That's how they are trained (the loss is defined by probabilistic density computations), that's how inference works, etc.
> I guess my central claim is that there hasn't been a salient argument made as to why the randomness here is relevant for consensus. Maybe the models exhibit some variability in their output, but in practice does this substantially change how they approach consensus? Can we model this as artefacts of how they are initialised rather than some inherent stochasticity? Why not? It feels like randomness is being introduced here as a sort of magic "get out of jail" free card here.
I'm really surprised to hear this given the content of the post. The claims in the post are quite strong, yet here I'm expected to supply a counterargument for why the applicability of the consensus results to pseudorandom processes is even in question?
I don't think it's necessary to furnish a counterexample when pointing out that a formal claim is overreaching. It's not clear what the results are in this case! So it feels premature to claim that the results cover a wider array of things than has been shown.
For instance, this is a strong claim:
> it means that in any multi-agentic system, irrespective of how smart the agents are, they will never be able to guarantee that they are able to do both at the same time:
>
> Be Safe - i.e produce well formed software satisfying the user's specification.
>
> Be Live - i.e always reach consensus on the final software module.
I'm confused as to the stance, we're either hand-waving, or we're not -- so which is it?
Comment by mccoyb 6 days ago
I just came away from the read thinking that this post was pointing to something very strong, and was a bit irked to find that the state of the results is more subtle than the post conveys.
Comment by gopiandcode 6 days ago
Comment by jimmypk 6 days ago
What isn't solved there is semantic idempotency. Even if a failed agent activity retries correctly at the infrastructure layer, the LLM re-invocation produces a different output. This is why the point about tests converting byzantine failures into crash failures is load-bearing: without external validation gates between activities, you've pushed retry logic onto Temporal but left the byzantine inconsistency problem unsolved. The practical implication is that the value of the test suite in an agentic pipeline scales superlinearly, not just as correctness assurance but as the mechanism that collapses the harder byzantine failure model back into the weaker FLP one.
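A minimal sketch of that load-bearing role (the flaky agent and the validator are stand-ins of my own, not anyone's real API): the gate turns a silently-wrong output into either a validated artifact or a loud exception.

```python
import random

def flaky_agent(task, rng):
    # Stand-in for an LLM call: nondeterministic, and sometimes returns
    # plausible-looking but wrong code (the byzantine case).
    if rng.random() < 0.4:
        return "def add(a, b): return a - b"
    return "def add(a, b): return a + b"

def gated(agent, task, validate, max_attempts=5, seed=0):
    # Outputs that fail the gate are discarded and retried; exhausting the
    # retries raises, i.e. the byzantine failure is collapsed into a crash
    # failure the orchestrator can actually see.
    rng = random.Random(seed)
    for _ in range(max_attempts):
        out = agent(task, rng)
        if validate(out):
            return out
    raise RuntimeError("gate failed after retries")

def validate_add(code):
    ns = {}
    exec(code, ns)  # illustrative only; a real gate would sandbox this
    return ns["add"](2, 3) == 5
```

Without `validate`, a retry just re-rolls the dice; with it, every path out of the gate is either correct or an explicit failure.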
Comment by tomwheeler 6 days ago
In Temporal, an Activity won't be executed again provided that Activity completion is recorded to the event history.
If the application crashes, its state is recreated using results from the history (i.e., the ones from the invocation that happened prior to the crash). Thus, the non-deterministic nature of LLM calls doesn't affect the application because each effectively only happens once.
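A toy model of that replay semantics (my own sketch of the idea, not Temporal's actual API):

```python
import random

class History:
    # Event-history memoization: an activity's result is recorded on first
    # completion, and any replay returns the recorded result instead of
    # re-invoking the (nondeterministic) function.
    def __init__(self):
        self.events = {}

    def run_activity(self, name, fn):
        if name in self.events:      # replaying after a crash
            return self.events[name]
        result = fn()                # first (and only) real execution
        self.events[name] = result
        return result

def llm_call():
    return random.random()           # different answer on every real call

history = History()
first = history.run_activity("draft", llm_call)
# Simulated crash + replay against the same history: same result comes back.
assert first == history.run_activity("draft", llm_call)
```

The nondeterminism never reaches the workflow logic because the second invocation is served from the recorded event, not from the model.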
Comment by xer 6 days ago
However, AI agents don't share these problems in the classical sense. Building agents is about context attention, relevance, and information density inside a single ordered buffer. The distributed part is creating an orchestrator that manages these things. At noetive.io we currently work on the context relevance part with our contextual broker Semantik.
Comment by ramraj07 6 days ago
Comment by xer 6 days ago
Comment by falcor84 7 days ago
Comment by pydry 6 days ago
If that could have been achieved it would have been very profitable, too. There's no shortage of cheap, motivated interns/3rd world devs and the executive class prefer to rely on disposable resources even when it costs more overall.
The net result was always the opposite though - one or two juniors on a leash could be productive but more than that and it always caused more problems than it solved.
Seeing the same problem with agents. Multi agent orchestration seems like a scam to manufacture demand for tokens.
Comment by falcor84 6 days ago
So to be clear, I'm not advising anyone to change their current token consumption habit. I'm just saying that it's silly to apply math to prove the impossibility of something we can literally see around us. It's like a mathematical proof that water isn't really wet.
Comment by bloppe 7 days ago
Comment by EagnaIonat 7 days ago
You can certainly prove that mathematically, and if you give that proof to an LLM it will give you the correct answer. That's a prompting technique, btw, for improving the accuracy of an LLM's results.
Although the more recent LLMs have built-in approaches to answer this question.
Gemma4 for example:
-----
The user is asking for the number of 'r's in the word "strawberry".
* S
* T
* *R* (1)
* A
* W
* B
* E
* *R* (2)
* *R* (3)
* Y
There are 3 'r's.
-----
GPT-OSS-120b example.
-----
The user asks: "How many r's in strawberry." This is a straightforward counting problem: count the letter 'r' in the word "strawberry". The word "strawberry" contains letters: s t r a w b e r r y. Count r's: there are three 'r's? Let's check: s(1) t(2) r(3) a(4) w(5) b(6) e(7) r(8) r(9) y(10). Actually the word is "strawberry". Let's write out: s, t, r, a, w, b, e, r, r, y. So there are three r's: at positions 3, 8, 9. Yes, three r's. So answer: 3.
But we may also note that "r" appears three times.
Thus answer: 3.
We can provide a short answer.
Thus final: There are three r's in "strawberry".
-----
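For comparison, the deterministic check these traces are laboriously reconstructing is a one-liner:

```python
word = "strawberry"
print(word.count("r"))  # -> 3; exact counting, no sampling involved
```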
Comment by kang 6 days ago
Comment by EagnaIonat 6 days ago
Comment by Tade0 6 days ago
Comment by falcor84 6 days ago
Comment by Tade0 6 days ago
Comment by EagnaIonat 6 days ago
Even with the same weights, the extra context allows it to move to the correct space.
Much the same as with humans: there are terms that are meaningless without knowing the context.
Comment by kang 6 days ago
Comment by EagnaIonat 6 days ago
Comment by tacotime 7 days ago
Comment by SamLeBarbare 6 days ago
Good architecture, actor models, and collaboration patterns do not emerge magically from “more agents”.
Maybe what’s missing is the architect’s role.
Comment by sabside 6 days ago
Comment by pjmlp 6 days ago
Comment by 21asdffdsa12 6 days ago
Comment by airstrike 6 days ago
The path forward is always one that starts from the assumption that it will go wrong in all those different ways, and then builds from there
Comment by 21asdffdsa12 6 days ago
Comment by airstrike 6 days ago
Comment by 21asdffdsa12 6 days ago
Comment by don_esteban 6 days ago
Comment by siliconc0w 6 days ago
I wrote an article on this if you're interested: https://x.com/siliconcow/status/2035373293893718117
Comment by zarathustreal 6 days ago
Comment by zackham 6 days ago
Comment by lifeisstillgood 7 days ago
This might be obvious to everyone, but it's a nice way for me to view it (sort of restating the non-waterfall (agile?) approach to specification discovery).
I.e. waterfall design without coding is too underspecified; hence the agile approach of using code iteratively to discover an exact specification.
Comment by jbergqvist 7 days ago
Comment by timinou 6 days ago
At the end, in both cases, it's a back and forth with an LLM, and every request has its own lifecycle. So it's unfortunately at least a networked-systems problem. I think your point works with an infinite context window and one-shotting the whole repo every time... Maybe quantum LLM models will enable that.
Comment by SpicyLemonZest 6 days ago
Comment by gopiandcode 6 days ago
Comment by porknbeans00 5 days ago
Comment by wnbhr 6 days ago
Comment by cooloo 6 days ago
Comment by enoonge 6 days ago
Agreed on the main claim that multi-agentic software development is a distributed systems problem; however, I think the distributed consensus point is not the tightest bottleneck in practice.
The article mentions the partially synchronous model (DLS) but doesn't develop it, and that's the usual escape hatch from FLP. In practical agentic workflows it already shows up as looped improvement cycles bounded by diminishing returns. Each iteration is effectively a round, and each agent's output in that round is a potentially faulty proposal that the next round refines. Painful in cost, yes, but manageable. If models continue to improve at current rates, I think it's reasonable to assume the number of cycles will decrease.
The more interesting objection is that "agent misinterprets prompt" isn't really byzantine. The 3f+1 bound assumes independent faults, but LLM agents share weights, training data, and priors. When a prompt is ambiguous they don't drift in random directions, they drift the same way together. That isn't majority vote failing loudly, it's consensus succeeding on a shared bias, which is arguably worse.
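That intuition is easy to simulate (a toy model, not a claim about any particular system): compare majority-vote error when agents err independently versus when they share one correlated draw.

```python
import random

def majority_wrong(n, p_wrong, correlated, rng):
    # One vote: with shared weights/priors ("correlated"), all agents make
    # the same draw and drift together; otherwise each errs independently.
    if correlated:
        votes = [rng.random() < p_wrong] * n
    else:
        votes = [rng.random() < p_wrong for _ in range(n)]
    return sum(votes) * 2 > n  # did a majority land on the wrong answer?

def error_rate(n, p_wrong, correlated, trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(majority_wrong(n, p_wrong, correlated, rng)
               for _ in range(trials)) / trials
```

With n=7 and p_wrong=0.2, the independent majority is wrong only a few percent of the time, while the correlated case stays pinned near 20%: the vote never contains more information than a single agent.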
Comment by threethirtytwo 6 days ago
This is not true. In theory, if the agent is smart enough, it out-thinks your ideas and builds the solution around itself so that it can escape.
Comment by josefrichter 6 days ago
Comment by vedant_awasthi 6 days ago
Comment by darthvaden 6 days ago
Comment by yangshi07 6 days ago
Comment by pyinstallwoes 6 days ago
Comment by pavas 6 days ago
Comment by digdatechAGI 4 hours ago
Comment by tuo-lei 6 days ago
Comment by agent-kay 6 days ago
Comment by R00mi 6 days ago
Comment by vampiregrey 6 days ago
Comment by maxothex 6 days ago
Comment by enesz 7 days ago
Comment by socketcluster 6 days ago
It's crazy how good coding agents have become. Sometimes I barely even need to read the code because it's so reliable and I've developed a kind of sense for when I can trust it.
It boggles my mind how accurate it is when you give it the full necessary context. It's more accurate than any living being could possibly be. It's like it's pulling the optimal code directly from the fabric of the universe.
It's kind of scary to think that there might be AI as capable as this applied to things besides next token prediction... Such AI could probably exert an extreme degree of control over society and over individual minds.
I understand why people think we live in a simulation. It feels like the capability is there.
Comment by rdevilla 6 days ago
Comment by socketcluster 6 days ago
Like for complex bugs in messy projects, it can get stuck and waste thousands of tokens, but if your code is clean and you're just building out features, it's basically bug-free first shot. The bugs are more like missing edge cases, but it can fix those quickly.
Comment by rdevilla 6 days ago