Show HN: ShapedQL – A SQL engine for multi-stage ranking and RAG
Posted by tullie 2 days ago
Hi HN,
I’m Tullie, founder of Shaped. Previously, I was a researcher at Meta AI, worked on ranking for Instagram Reels, and was a contributor to PyTorch Lightning.
We built ShapedQL because we noticed that while retrieval (finding 1,000 items) has been commoditized by vector DBs, ranking (finding the best 10 items) is still an infrastructure problem.
To build a decent "For You" feed or a RAG system with long-term memory, you usually have to stitch together a vector DB (Pinecone/Milvus), a feature store (Redis), an inference service, and thousands of lines of Python to handle business logic and reranking.
We built an engine that consolidates this into a single SQL dialect. It compiles declarative queries into high-performance, multi-stage ranking pipelines.
HOW IT WORKS:
Instead of just SELECT, ShapedQL operates in four stages native to recommendation systems:
RETRIEVE: Fetch candidates via Hybrid Search (Keywords + Vectors) or Collaborative Filtering.
FILTER: Apply hard constraints (e.g., "inventory > 0").
SCORE: Rank results using real-time models (e.g., p(click) or p(relevance)).
REORDER: Apply diversity logic so your Agent/User doesn't see 10 nearly identical results.
THE SYNTAX: Here is what a RAG query looks like. This replaces about 500 lines of standard Python/LangChain code:
SELECT item_id, description, price
FROM
-- Retrieval: Hybrid search across multiple indexes
search_flights("$param.user_prompt", "$param.context"),
search_hotels("$param.user_prompt", "$param.context")
WHERE -- Filtering: Hard business constraints
price <= "$param.budget" AND is_available("$param.dates")
ORDER BY -- Scoring: Real-time reranking (Personalization + Relevance)
0.5 * preference_score(user, item) +
0.3 * relevance_score(item, "$param.user_prompt")
LIMIT 20

If you don't like SQL, you can also use our Python and TypeScript SDKs. I'd love to know what you think of the syntax and the abstraction layer!
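For a sense of what those ~500 lines collapse from, here's a minimal sketch of the hand-rolled pipeline in plain Python. Every helper in it (hybrid_search, preference_score, relevance_score, is_available) is a stand-in you'd have to build and host yourself, not part of our actual SDK:

def hybrid_search(index, prompt, context):
    return []  # stand-in: keyword (BM25) + vector search against `index`

def preference_score(user, item):
    return 0.0  # stand-in: personalization model, e.g. p(click)

def relevance_score(item, prompt):
    return 0.0  # stand-in: cross-encoder relevance model

def is_available(item, dates):
    return True  # stand-in: inventory/date check

def rank_travel_results(user, prompt, context, budget, dates, k=20):
    # RETRIEVE: hybrid search across two indexes
    candidates = hybrid_search("flights", prompt, context) \
               + hybrid_search("hotels", prompt, context)
    # FILTER: hard business constraints
    candidates = [c for c in candidates
                  if c["price"] <= budget and is_available(c, dates)]
    # SCORE: weighted blend of personalization and relevance
    def score(item):
        return (0.5 * preference_score(user, item)
                + 0.3 * relevance_score(item, prompt))
    # REORDER: a diversity pass would go here as yet another step
    return sorted(candidates, key=score, reverse=True)[:k]

Even this toy version hides the hard parts: each stub is its own index, model, or service in production.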
Comments
Comment by data_ders 4 hours ago
conceptual questions:
1) why did you pick SQL? to increase the Total Addressable Userbase with the thinking that a SQL API means more people can use it than those who know Python or TypeScript?
2) What isn't or will never be supported by this relational model? what are the constraints? Clickhouse comes to mind w/ its intentionally imposed limitations on JOINs
3) databases are historically the stickiest products, but even today SQL dialects are sticky because of how closely tied they are to the query engine. why do you think users will adopt not only a new dialect but a new engine? Especially given that the major DWH vendors have been relentlessly competing to add AI search vector functionality into their products?
4) mindsdb comes to mind as something similar that's been in the market for a while but I don't hear it come up often. what makes you different?
playground feedback: 1) why are there no examples that: a) use `JOIN` (that `,` is unhinged syntax imho for an implicit join) b) don't use `*` (it's cool that there's actual numbers!)
2) i kinda get why the search results default to a UI, but as a SQL person I first wanted to know what columns exist. I was happy to see "raw table" was available but it took me a while to find it. Might be worth having the raw table and UI output visible at the same time, with clear instructions on what columns the query requires to populate the UI.
Comment by tullie 4 hours ago
1) So we do actually have Python and TypeScript APIs; it's just that the console web experience is SQL-only, as that feels best for that kind of experience. The most important thing though is that it's declarative. This helps keep things relatively simple despite all the configuration complexity, and it's also the best for LLMs/agents as they can iterate on the syntax without doc context.
2) Yeah exactly, JOINs are something we can't do at the moment, and I'm honestly not sure of the exact solution there. Under the hood most of Shaped's offline data is built around Clickhouse, and we do want to build a more standard SQL interface just so you can do ad-hoc, analytical queries. We're currently trying to work out whether we should integrate it more directly with ShapedQL or just keep it as a separate interface (e.g. a ShapedQL tab vs a Clickhouse SQL tab).
3) We didn't really want to create a new SQL dialect, or really a new database. The problem is that none of the current databases are well suited for search and recommendations, where you need extremely low latency, scalability, and fault tolerance, but also the ability to query based on a user or session context. One of the big things here is that because Shaped stores the user interactions alongside the item catalog, we can encode real-time vectors based on those interactions all within one embedding query service. I don't think that's possible with any other database.
4) I haven't looked into mindsdb too much, but this is a good reminder for me to deep dive into it later today. From a quick pass, my guess is the biggest difference is that we're built specifically for real-time search, recommendations and RAG, which means latency and the ability to integrate click-through-rate models become vital.
Thanks so much for the playground feedback; I have some follow-up questions but I'm going to PM you, if that's okay. Agreed on being able to see which columns exist.
Comment by JacobiX 4 hours ago
On Instagram this is a good thing, but here the example is hotel and flight search, where a more deterministic result is preferable.
In the retrieve → filter stage, using predicate pushdown may be more performant: first filter using hard constraints, then apply hybrid search?
Comment by tullie 4 hours ago
All of the retrievers do support pre-filtering; you just add the WHERE clause within the retriever function. We're working on more query optimization to make this automatic as well.
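Rough illustration of the difference, as a hedged Python sketch (both helpers are placeholders, not our internals):

def ann_search(query_vec, k, where=None):
    return []  # stand-in: vector index lookup, optionally constrained by `where`

def passes_constraints(hit, where):
    return True  # stand-in: hard business rules (price, availability, ...)

def post_filter_search(query_vec, where, k):
    # retrieve first, filter after: wasted work when many candidates fail
    hits = ann_search(query_vec, k=10 * k)
    return [h for h in hits if passes_constraints(h, where)][:k]

def pre_filter_search(query_vec, where, k):
    # push the constraints into the index so ANN only scores eligible items
    return ann_search(query_vec, k=k, where=where)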
Comment by froh42 2 hours ago
By exposing my database to services somewhere else in the network. Oh and somewhere else is the US.
Fat chance in hell I can get anyone in my company to look at that or even think about legally applying it to some serious data. (I'm in the EU. Yes, a lot of people and companies use US services. Currently it looks like NONE of these can legally do so.)
It looks interesting, but it needs an on-premise solution.
Comment by refset 5 hours ago
Comment by tullie 3 hours ago
Someone shared these with me the other day, and we're now inspired to add more remote LLM calls directly into ShapedQL: https://github.com/asg017/sqlite-rembed https://github.com/asg017/sqlite-lembed
Comment by alexpadula 5 hours ago
Curious, what relational database do you use, @refset? Is the code open source? Is the engine from scratch? What general dialect does it support?
Cheers!
Comment by refset 4 hours ago
XTDB is perhaps not directly relevant to the topic at hand, but I am a firm believer that ML workflows can benefit from robust temporal modelling.
Comment by hrimfaxi 5 hours ago
> We’ll whenever feasible ask for your consent before using your Personal information for a purpose that isn’t covered in this Privacy Policy.
Comment by tullie 4 hours ago
Thanks for the feedback on the privacy policy; let me see if we can get that changed. For what it's worth, we don't share personal information with anyone; this is likely just overly defensive legal writing on our part.
Comment by thorax 6 hours ago
Regarding the rest, it seems like a reasonable approach at first tinker.
Comment by tullie 3 hours ago
Comment by pickleballcourt 5 hours ago
Comment by tullie 4 hours ago
E.g. imagine trying to build a feed with pgvector: you need to build all of the vector encoding logic for your catalog, then you need to build user embeddings and the models to represent them, and then have a service that, at query time, encodes user embeddings from interactions, does a lookup on pgvector, and returns the nearest-neighbor items. Then you also need to think about fine-tuning reranking models, diversity algorithms, and the cold-start problem of serving new items to users. Shaped and ShapedQL bundle all of that logic into a single service that does it all in a low-latency and fault-tolerant way.
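To make just the query-time half of that concrete, here's a rough sketch. The schema, table name, and model stubs are hypothetical; it assumes psycopg 3 and a pgvector-enabled items table:

import psycopg  # assumes psycopg 3 and a pgvector-enabled Postgres

DIM = 384  # hypothetical embedding size

def encode_item(interaction):
    return [0.0] * DIM  # stand-in: your catalog embedding model

def encode_user(recent_interactions):
    # stand-in: naive mean-pool of recent item vectors; a real system
    # trains a dedicated model for this
    vecs = [encode_item(i) for i in recent_interactions] or [[0.0] * DIM]
    return [sum(xs) / len(vecs) for xs in zip(*vecs)]

def recommend(conn, recent_interactions, k=10):
    vec = "[" + ",".join(map(str, encode_user(recent_interactions))) + "]"
    # '<=>' is pgvector's cosine-distance operator
    rows = conn.execute(
        "SELECT id FROM items ORDER BY embedding <=> %s::vector LIMIT %s",
        (vec, k),
    ).fetchall()
    # ...and reranking, diversity, and cold-start are still on you
    return [r[0] for r in rows]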
Comment by mritchie712 5 hours ago
> This replaces about 500 lines of standard Python
isn't really a selling point when an LLM can do it in a few seconds. I think you'd be better off pitching simpler infra and better performance (if that's true).
i.e. why should I use this instead of turbopuffer? The answer of "write a little less code" is not compelling.
Comment by tullie 3 hours ago
To put it in the perspective of LLMs: they perform much better when you can paste the full context into a short context window. I've personally found they just don't miss things as much, so the number of tokens does matter, even if it's less important than for a human.
On the turbopuffer comment, just btw: we're not exactly a vector store; we're more like a vector store + feature store + machine learning inference service. So we do the encoding on our side, and bundle the model fine-tuning, etc.
Comment by airstrike 4 hours ago
> isn't really a selling point when an LLM can do it in a few seconds.
this is not my area of expertise, but doesn't that still assume the LLM will get it done right?
Comment by verdverm 4 hours ago
This idea that it no longer matters because AI can spam out code is a concerning trend.
Comment by jiwidi 5 hours ago