OpenAI's In-House Data Agent
Posted by meetpateltech 5 hours ago
Comment by laser 3 hours ago
[1] https://images.ctfassets.net/kftzwdyauwt9/2tMhL5Www2vA6I62DV...
“What was ChatGPT Image Gen logged-in DAU for the last 30 days?

Worked for 1m 22s

ChatGPT WAU on October 6, 2025 (rounded to nearest 100M): 800M
ChatGPT WAU on the last DevDay (Nov 6, 2023; rounded to nearest 100M): 100M

Mini comparison (using the rounded figures only):
• Change: +700M WAU
• Multiple: 8x higher on 2025-10-06 vs 2023-11-06

(WAU here is the standard ChatGPT WAU as of the reporting date; I'm only sharing the values rounded to the nearest 100M, per your request.)”
Comment by 3rodents 2 hours ago
Desktop, correct prompt: https://images.ctfassets.net/kftzwdyauwt9/5EoAd2fIvVRf8V51LN...
Mobile, wrong prompt: https://images.ctfassets.net/kftzwdyauwt9/2tMhL5Www2vA6I62DV...
Comment by tillvz 3 hours ago
We've been building natural language analytics at Veezoo (https://www.veezoo.com/) for 10 years, and what we find is that straight Text-to-SQL doesn't scale. If AI writes SQL directly, you're building on a probabilistic foundation. When a CFO asks for revenue, the number can't just be correct 99% of the time. And you can't get the CFO to read SQL to verify it.
We're solving that with an abstraction layer (Knowledge Graph) in between. AI translates natural language to a semantic query language, which then compiles to SQL deterministically.
At the same time you can translate the semantic query deterministically back into an explanation for the business user, so they can easily verify if the result matches their intent.
Business logic lives in the Knowledge Graph and the compiler ensures every query adheres to it 100%, every time. No AI is involved in that step.
Veezoo Architecture: https://docs.veezoo.com/veezoo/architecture-overview
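A minimal sketch of that split, with invented names (Veezoo's actual compiler and VQL are more involved; see the architecture link above): the LLM only ever produces a semantic query, and both the SQL and a human-readable explanation are derived from it deterministically.

    from dataclasses import dataclass

    # Hypothetical semantic-query representation: the LLM emits this,
    # never raw SQL. Everything downstream is deterministic.
    @dataclass(frozen=True)
    class SemanticQuery:
        measure: str      # e.g. "Revenue", defined once in the Knowledge Graph
        entity: str       # e.g. "Order"
        date_filter: str  # e.g. "today"

    # Business logic lives here, not in the prompt (definition invented).
    KNOWLEDGE_GRAPH = {
        ("Order", "Revenue"): "SUM(o.net_amount - o.refund_amount)",
    }

    def compile_to_sql(q: SemanticQuery) -> str:
        """Deterministic compile step: same semantic query -> same SQL."""
        expr = KNOWLEDGE_GRAPH[(q.entity, q.measure)]
        return (f"SELECT {expr} AS {q.measure.lower()} "
                f"FROM orders o WHERE o.order_date = CURRENT_DATE")

    def explain(q: SemanticQuery) -> str:
        """Deterministic back-translation so the business user can verify intent."""
        return f"{q.measure} of {q.entity}s where Order Date is {q.date_filter}"

    q = SemanticQuery(measure="Revenue", entity="Order", date_filter="today")
    print(compile_to_sql(q))  # SELECT SUM(o.net_amount - o.refund_amount) AS revenue ...
    print(explain(q))         # Revenue of Orders where Order Date is today

The point of the shape: a compiler bug fails consistently and fixably, whereas a model writing SQL directly fails a little differently every time.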
Comment by kburman 2 hours ago
I'm curious how this approach manages cardinality explosion? Also, how do you handle cases where a user asks for data that requires running multiple queries, specifically where each query depends on the results of the previous one?
Comment by tillvz 2 hours ago
The Knowledge Graph explicitly models cardinality and relationships between entities. The compiler uses that to generate SQL that handles it correctly, e.g. by adding DISTINCT.
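Illustratively (a toy sketch, not Veezoo's actual compiler): the cardinality stored on the relationship is what decides whether an aggregate needs deduplication.

    # The graph records that joining Customer to Order fans out
    # one-to-many, so counting customers after that join must
    # deduplicate with DISTINCT.
    RELATIONSHIPS = {("Customer", "Order"): "one_to_many"}

    def count_sql(counted: str, joined_to: str) -> str:
        fans_out = RELATIONSHIPS.get((counted, joined_to)) == "one_to_many"
        agg = (f"COUNT(DISTINCT {counted.lower()}.id)" if fans_out
               else f"COUNT({counted.lower()}.id)")
        return (f"SELECT {agg} FROM {counted.lower()} "
                f"JOIN {joined_to.lower()} "
                f"ON {joined_to.lower()}.{counted.lower()}_id = {counted.lower()}.id")

    print(count_sql("Customer", "Order"))
    # SELECT COUNT(DISTINCT customer.id) FROM customer JOIN order ...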
> Also, how do you handle cases where a user asks for data that requires running multiple queries, specifically where each query depends on the results of the previous one?
Veezoo can generate adaptive plans, so it can decide to wait for a database query to return results before continuing
Comment by kburman 2 hours ago
On the adaptive plans, is that execution logic handled entirely by your deterministic compiler, or does it loop back to the LLM to interpret the intermediate results?
Comment by tillvz 2 hours ago
There are both options. You can index them as entities [1] within Veezoo and keep the mapping automatically synchronized with the database. Or decide not to index them, in which case Veezoo will e.g. attempt to answer the question using string search in SQL.
> On the adaptive plans, is that execution logic handled entirely by your deterministic compiler, or does it loop back to the LLM to interpret the intermediate results?
The planning is done entirely by the LLM. The VQL steps (i.e. fetching answers from the database) within the plan are where the compiler kicks in.
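A hedged sketch of that division of labor (invented names, not Veezoo's actual code): the LLM proposes the next step given the results so far, and every VQL step is compiled and executed deterministically before planning continues.

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Step:
        done: bool
        vql: str = ""
        answer: Optional[Any] = None

    def run_adaptive_plan(question, next_step, compile_vql, execute, max_steps=5):
        """next_step: the LLM call proposing the next VQL step (or finishing).
        compile_vql / execute: deterministic compiler and database, no AI."""
        results = []
        for _ in range(max_steps):
            step = next_step(question, results)  # LLM sees all intermediate results
            if step.done:
                return step.answer
            sql = compile_vql(step.vql)          # deterministic VQL -> SQL
            results.append(execute(sql))         # block on the DB before replanning
        raise RuntimeError("plan did not converge")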
Comment by Leynos 3 hours ago
(Prompts need to be version controlled too, of course)
Comment by tillvz 3 hours ago
The fundamental artifact is VQL (Veezoo Query Language), which queries against a Knowledge Graph containing your business data model, things like your "Revenue" measure.
A query might look like this:
    var order from kb.Order
    date_in(order.Order_Date, date("#today"))
    var retRevenue = kb.Order.Revenue(order)
    select(retRevenue)
If the business decides to change how revenue is computed, the VQL stays valid but compiles to different SQL. At the same time, Veezoo can test that your knowledge graph change doesn't break anyone's dashboards, and can even apply evolutions if needed.
VQL: https://docs.veezoo.com/vkl/kb-layer/vql/
Evolutions: https://docs.veezoo.com/vkl/evolutions/
The Knowledge Graph itself is version controlled, so the data team can trace every change.
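To make the "VQL stays valid" point concrete, a toy illustration (the measure definitions here are invented, and this isn't Veezoo's format): the saved query never changes, but a new graph version yields different SQL.

    # Two versions of the Knowledge Graph's "Revenue" definition.
    GRAPH_V1 = {"Order.Revenue": "SUM(o.gross_amount)"}
    GRAPH_V2 = {"Order.Revenue": "SUM(o.gross_amount - o.discounts)"}

    def compile_revenue(graph: dict) -> str:
        # Same semantic query, recompiled against whichever graph is current.
        return (f"SELECT {graph['Order.Revenue']} AS revenue "
                f"FROM orders o WHERE o.order_date = CURRENT_DATE")

    print(compile_revenue(GRAPH_V1))  # old business logic
    print(compile_revenue(GRAPH_V2))  # new logic; the saved query never changed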
Comment by maxchehab 3 hours ago
We're building something similar and found that no matter how good the agent loop is, you still need "canonical metrics" that are human-curated. Otherwise non-technical users (marketing, product managers) are playing a guessing game with high-stakes decisions, and they can't verify the SQL themselves.
Our approach:

1. We control the data pipeline and work with a discrete set of data sources where schemas are consistent across customers.

2. We benchmark extensively so the agent uses a verified metric when one exists, falls back to raw SQL when it doesn't, and captures those gaps as "opportunities" for human review.
Over time, most queries hit canonical metrics. The agent becomes less of a SQL generator and more of a smart router from user intent -> verified metric.
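A rough sketch of that routing (hypothetical names and metric, not our actual code): verified metrics win, raw SQL is the fallback, and every miss is logged for curation.

    # Human-curated, benchmarked metrics keyed by intent.
    CANONICAL_METRICS = {
        "email open rate": "SELECT opens::float / NULLIF(sends, 0) FROM email_daily",
    }

    def answer(intent: str, agent_sql, log_opportunity):
        metric_sql = CANONICAL_METRICS.get(intent.lower())
        if metric_sql is not None:
            return metric_sql, "verified"       # ground truth wins when it exists
        log_opportunity(intent)                 # gap captured for human review
        return agent_sql(intent), "unverified"  # raw agent SQL as fallback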
The "Moving fast without breaking trust" section resonates, their eval system with golden SQL is essentially the same insight: you need ground truth to catch drift.
Wrote about the tradeoffs here: https://www.graphed.com/blog/update-2
Comment by data-ottawa 3 hours ago
If there are multiple paths or perceived paths to an answer, you’ll get two answers. Plus, LLMs like to create pointless “xyz_index” metrics that are not standard, clear, or useful. Yet I see users just go “that sounds right” and run with it.
Comment by maxchehab 2 hours ago
It only works because all of the data looks the same between customers (we manage ad platform, email, funnel data).
So if we make an “email open rate” metric, that’ll amortize across other customers.
Comment by sjsishah 4 hours ago
Mix them together and you’re already deep in make-believe land, so letting AI take over step 1 seems like a perfect fit.
I was hoping to read this article and be surprised by how OpenAI was able to solve the reliability problem, but alas.
Comment by hobs 3 hours ago
layer 0 - how you stored the data was wrong.
layer -1 - your understanding of modeling the behavior was wrong before you ever created a table.
layer -2 - your fundamental business process was wrong and all your information is lies.
This is why instead of a central source of truth I call it the central source of lies.
Comment by onion2k 3 hours ago
Specifically, how good a company's data is will determine how effectively it can leverage AI in the future. The public data is pretty much mined to exhaustion, and the next big data source will be in-house documentation, code repos, data lakes, etc. If you work for a company where that's been built, maintained, and organised, then the effectiveness of AI is going to be mind-blowing. Companies that have maintained good docs will be able to build new things, maintain old things, and migrate things to cheaper modern stacks easily. That will lead to being able to move fast and deploy new AI-driven services easily and cheaply. Revenue will follow.
Conversely, at companies where documentation and code organisation have been historically poor, AI will struggle. Leaders will see it as a benefit, and be baffled at why their company can't realise the value of it. They'll quickly blame developers for not being able to use it, and that'll lead to people's growth stagnating or possibly layoffs. Eventually competitors will eat the company's lunch because they'll just be able to move on opportunities much faster.
I've resolved that in any future job hunt I'm going to make asking about docs, data, and repos a priority...
Comment by mritchie712 3 hours ago
We give you all of this in 5 minutes at https://www.definite.app/.
And I mean all of it. You don't need Spark or Snowflake. We give you a data lake, pipelines to get data in, a semantic layer, and a data agent in one app.
The agent is kind of the easy / fun part. Getting the data infrastructure right so the agent is useful is the hard part.
i.e. if the agent has low agency (e.g. can only write SQL in Snowflake) and can't add a new data source or update transformation logic, it's not going to be terribly effective. Our agent can obviously write SQL, but it can also manage the underlying infra, which has been a huge unlock for us.
Comment by 0xferruccio 4 hours ago
Our chief engineer Wade gave an awesome demo to Claire Vo some months back here: https://www.youtube.com/watch?v=9Q9Yrj2RTkg
I use this basically every day, asking all sorts of questions.
Comment by qsort 3 hours ago
When working on data systems you quickly realize that often how the question was answered (how the metric is defined, what data was taken into account and so on) is just as important as the answer.