Show HN: Ctx – a /resume that works across Claude Code and Codex
Posted by dchu17 4 days ago
ctx is a local SQLite-backed skill for Claude Code and Codex that stores context as a persistent workstream that can be continued across agent sessions. Each workstream can contain multiple sessions, notes, decisions, todos, and resume packs. It essentially functions as a /resume that can work across coding agents.
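The post doesn't show ctx's actual schema, but the data model it describes (workstreams containing sessions, notes, decisions, todos, and resume packs, with branching) can be sketched roughly like this — table and column names here are illustrative guesses, not ctx's real layout:

```python
import sqlite3

# Hypothetical schema sketch -- ctx's real tables may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS workstreams (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    parent_id INTEGER REFERENCES workstreams(id)  -- supports branching
);
CREATE TABLE IF NOT EXISTS sessions (
    id              INTEGER PRIMARY KEY,
    workstream_id   INTEGER NOT NULL REFERENCES workstreams(id),
    agent           TEXT NOT NULL,     -- e.g. 'claude-code' or 'codex'
    transcript_path TEXT               -- pinned raw session log
);
CREATE TABLE IF NOT EXISTS items (
    id            INTEGER PRIMARY KEY,
    workstream_id INTEGER NOT NULL REFERENCES workstreams(id),
    kind          TEXT NOT NULL
        CHECK (kind IN ('note', 'decision', 'todo', 'resume_pack')),
    body          TEXT NOT NULL
);
"""

# In-memory DB for illustration; ctx presumably persists to a file.
db = sqlite3.connect(":memory:")
db.executescript(SCHEMA)
db.execute("INSERT INTO workstreams (name) VALUES (?)", ("auth-refactor",))
db.commit()
```

Because everything lives in one SQLite file, search across previous workstreams is just a query, and a branch is a new row pointing at its parent.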
Here is a video of how it works: https://www.loom.com/share/5e558204885e4264a34d2cf6bd488117
I initially built ctx because I wanted to start a workstream in Claude and continue it from Codex. Since then, I've added a few quality-of-life improvements, including the ability to search across previous workstreams, manually delete parts of the context, and branch off existing workstreams. I've started using ctx instead of the native '/resume' in Claude/Codex because I often have many sessions going at once, and with the lists those apps currently give you, it's not always obvious which one is the right one to pick back up. ctx gives me a much clearer way to organize and return to the sessions that actually matter.
Installation is a single line after you clone the repo: ./setup.sh, which adds the skill to both Claude Code and Codex. After that, you can use ctx directly in your agent as a skill with '/ctx [command]' in Claude and 'ctx [command]' in Codex.
A few things it does:
- Resume an existing workstream from either tool
- Pull existing context into a new workstream
- Keep stable transcript binding, so once a workstream is linked to a Claude or Codex conversation, it keeps following that exact session instead of drifting to whichever transcript file is newest
- Search for relevant workstreams
- Branch from existing context to explore different tasks in parallel
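The "stable transcript binding" point above is the subtle one: naive resume logic grabs the newest transcript file in the agent's log directory, which drifts when several sessions run at once. One way to pin a binding — a sketch of the idea, not ctx's actual implementation (function names here are invented) — is to store the file path plus a fingerprint of its first line, which stays stable for an append-only log:

```python
import hashlib
from pathlib import Path

def _fingerprint(p: Path) -> str:
    # Hash only the first line: transcript logs are append-only, so the
    # first line is a stable identity even as new turns are written.
    with p.open("rb") as f:
        return hashlib.sha256(f.readline()).hexdigest()

def bind_transcript(path: str) -> dict:
    """Pin a workstream to one specific transcript file."""
    p = Path(path)
    return {"path": str(p), "fingerprint": _fingerprint(p)}

def resolve_transcript(binding: dict) -> Path:
    """Follow the stored binding instead of 'newest file in the dir'."""
    p = Path(binding["path"])
    if _fingerprint(p) != binding["fingerprint"]:
        raise ValueError("transcript no longer matches binding; re-link it")
    return p
```

The fingerprint guards against the path being recycled for a different conversation, while appended turns leave the binding valid.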
It’s intentionally local-first: SQLite, no API keys, and no hosted backend. I built it mainly for myself, but thought it would be cool to share with the HN community.
Comments
Comment by realdimas 3 days ago
> Changing thinking mode mid-conversation will increase latency and may reduce quality. For best results, set this at the start of a session.
Neither OpenAI nor Anthropic exposes raw thinking tokens anymore.
Claude Code redacts thinking by default (you can opt in to get Haiku-produced summaries at best), and OpenAI returns encrypted reasoning items.
Either way, first-party CLIs hold opaque thinking blobs that can't be manipulated or ported between providers without dropping them. So cross-agent resume carries an inherent performance penalty: you keep the (visible) transcript but lose the reasoning.
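realdimas's point can be made concrete: a cross-agent resume has to drop anything the target provider can't accept. A sketch, assuming a generic message-list transcript format — the 'type' values are placeholders, not either vendor's actual schema:

```python
# Illustrative only: these type names are invented, not the real
# Claude Code or Codex transcript schemas.
PORTABLE_TYPES = {"user", "assistant", "tool_call", "tool_result"}

def strip_opaque_items(transcript: list[dict]) -> list[dict]:
    """Keep only messages another provider can replay; encrypted or
    redacted reasoning blobs are provider-bound and must be dropped."""
    return [m for m in transcript if m.get("type") in PORTABLE_TYPES]

history = [
    {"type": "user", "text": "refactor auth"},
    {"type": "reasoning", "data": "<encrypted blob>"},  # opaque: dropped
    {"type": "assistant", "text": "Done, see diff."},
]
portable = strip_opaque_items(history)
```

The visible turns survive the port; the reasoning items do not, which is the performance penalty being described.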
Comment by shmoogy 2 days ago
I know it's wasteful, but I often have a surplus of tokens and not enough time, so it's a trade-off I've been fine with.
Comment by nextaccountic 3 days ago
But yeah, after the price hikes, it's inevitable that people will run open source harnesses
Comment by ghm2180 3 days ago
How does ctx "normalize" things across providers in the context window (e.g. tool/MCP calls, sub-agent results)?
Comment by dchu17 3 days ago
The way this works is that it stores workstreams and session state in a local SQLite DB, and links each ctx session to the exact local Claude Code and/or Codex raw session log it came from (also stored locally).
What do you mean by prompt caching?
Comment by Wowfunhappy 3 days ago
Obviously, your tool does not provide this. But I think GP is undervaluing the UX advantages of having your conversation history.
Comment by ycombinatornews 3 days ago
Unless the goal is to move from one provider to another and preserve all context 1:1. And I can’t seem to find a decent reason why you would want everything and not the TLDR + resulting work.