Show HN: Open-source customizable AI voice dictation built on Pipecat

Posted by kstonekuan 22 hours ago


Tambourine is an open-source, fully customizable voice dictation system that lets you control the STT/ASR models, LLM formatting, and prompts used to insert clean text into any app.

I have been building this on the side for a few weeks. What motivated it was wanting a customizable version of Wispr Flow where I could fully control the models, formatting, and behavior of the system, rather than relying on a black box.

Tambourine is built directly on top of Pipecat and relies on its modular voice agent framework. The back end is a local Python server that uses Pipecat to stitch together STT and LLM models into a single pipeline. This modularity is what makes it easy to swap providers, experiment with different setups, and maintain fine-grained control over the voice AI.
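
To give a feel for that modularity, here is a minimal sketch of the kind of STT + LLM pipeline Pipecat lets you assemble. This is illustrative rather than Tambourine's actual code, and the service classes and import paths follow recent Pipecat docs, so they may differ across Pipecat versions:

```python
# Illustrative sketch of a Pipecat STT -> LLM pipeline, not Tambourine's actual
# code. Import paths follow recent Pipecat docs and may vary across versions.
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.deepgram.stt import DeepgramSTTService
from pipecat.services.openai.llm import OpenAILLMService

async def run_dictation_pipeline(transport):
    stt = DeepgramSTTService(api_key="...")      # any Pipecat STT service works here
    llm = OpenAILLMService(model="gpt-4o-mini")  # swappable for a local model, etc.

    # The context aggregator turns transcription frames into LLM requests.
    context = OpenAILLMContext(
        messages=[{"role": "system", "content": "Clean up the transcript."}]
    )
    aggregator = llm.create_context_aggregator(context)

    pipeline = Pipeline([
        transport.input(),       # WebRTC audio in from the desktop app
        stt,                     # speech -> raw transcript frames
        aggregator.user(),       # transcript frames -> LLM context
        llm,                     # raw transcript -> cleaned, formatted text
        aggregator.assistant(),  # keep the LLM's output in context
    ])

    await PipelineRunner().run(PipelineTask(pipeline))
```

Swapping a provider then amounts to swapping the service object in that list, which is what makes experimenting with different setups cheap.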

I shared an early version with friends and recently presented it at my local Claude Code meetup. The response was overwhelmingly positive, and I was encouraged to share it more widely.

The desktop app is built with Tauri. The front end is written in TypeScript, while the Tauri layer uses Rust to handle low-level system integration. This enables registering global hotkeys, managing audio devices, and reliably inputting text at the cursor on both Windows and macOS.

At a high level, Tambourine gives you a universal voice interface across your OS. You press a global hotkey, speak, and formatted text is typed directly at your cursor. It works across emails, documents, chat apps, code editors, and terminals.

Under the hood, audio is streamed from the TypeScript front end to the Python server via WebRTC. The server runs real-time transcription with a configurable STT provider, then passes the transcript through an LLM that removes filler words, adds punctuation, and applies custom formatting rules and a personal dictionary. STT and LLM providers, as well as prompts, can be switched without restarting the app.
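
As an illustration of that formatting stage, a system prompt along these lines could encode the filler-word removal, custom rules, and personal dictionary. The function and names here are hypothetical, not Tambourine's actual code:

```python
# Hypothetical sketch of how the formatting stage's system prompt might be
# assembled. Names here are illustrative, not Tambourine's actual code.
def build_formatting_prompt(custom_rules: list[str], dictionary: dict[str, str]) -> str:
    rules = "\n".join(f"- {rule}" for rule in custom_rules)
    vocab = "\n".join(f"- '{heard}' should be written as '{written}'"
                      for heard, written in dictionary.items())
    return (
        "You clean up raw speech transcripts for dictation.\n"
        "Remove filler words (um, uh, like), add punctuation and capitalization,\n"
        "and return only the cleaned text.\n"
        f"Formatting rules:\n{rules}\n"
        f"Personal dictionary (fix likely mis-transcriptions):\n{vocab}"
    )

prompt = build_formatting_prompt(
    custom_rules=["Write numbers as digits", "Keep technical terms verbatim"],
    dictionary={"pipe cat": "Pipecat", "whisper flow": "Wispr Flow"},
)
```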

The project is still under active development. I am working through edge cases and refining the UX, and there will likely be breaking changes, but most core functionality already works well and has become part of my daily workflow.

I would really appreciate feedback, especially from anyone interested in the future of voice as an interface.

Comments

Comment by bryanwhl 2 hours ago

Does this work on macOS?

Comment by kstonekuan 2 hours ago

Yup, the desktop app is built with Tauri, which is cross-platform, and I have personally tested it on macOS and Windows.

Comment by grayhatter 17 hours ago

I don't think I'd call anything that only works with a proprietary, internet-hosted LLM (one you need an account to use) open source.

This is less voice dictation software and much more a shim for [popular LLM provider].

Comment by kstonekuan 13 hours ago

Hey, sorry if the examples given were not representative. Because this is built on Pipecat, you can very easily swap to a local LLM if you prefer, and the project is already set up to let you do that via environment variables.

The integration took a long time to build out: setting up the WebRTC connection, getting voice dictation working seamlessly from anywhere, and inputting text into any app. That's why I want to share it as open source.

Comment by popalchemist 13 hours ago

The critiques about local inference are valid if you're billing this as an open-source alternative to existing cloud-based solutions.

Comment by kstonekuan 13 hours ago

Thanks for the feedback; I probably should have been clearer in my original post and in the README as well. Local inference is already supported via Pipecat: you can use Ollama or any custom OpenAI-compatible endpoint. Local STT is also supported via Whisper, which Pipecat will download and manage for you.
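
For example, a minimal sketch of swapping in Pipecat's local Whisper STT (class and import path per Pipecat's docs; they may differ by version):

```python
# Sketch: Pipecat's local Whisper STT service. Runs on-device; model weights
# are downloaded on first use. Import path may differ across Pipecat versions.
from pipecat.services.whisper.stt import WhisperSTTService, Model

stt = WhisperSTTService(model=Model.DISTIL_MEDIUM_EN)  # no API key or network calls
```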

Comment by popalchemist 5 hours ago

Rad. Put that front and center in the README.

Comment by kstonekuan 2 hours ago

Updated!

Comment by lrvick 17 hours ago

Is there a way to do this with a local LLM, without any internet access needed?

Comment by kstonekuan 13 hours ago

Yes, Pipecat already supports that natively, so this can be done easily with Ollama. I have also built that into the environment variables with `OLLAMA_BASE_URL`.

About Ollama in Pipecat: https://docs.pipecat.ai/server/services/llm/ollama

Also, check out any other provider they support; each can be onboarded in a few lines of code.
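
As a sketch, the swap looks roughly like this (`OLLAMA_MODEL` is a hypothetical variable name for illustration; the class name and default base URL follow the linked Pipecat docs and may differ by version):

```python
# Sketch: selecting a local Ollama LLM via environment variables.
import os

from pipecat.services.ollama.llm import OLLamaLLMService

llm = OLLamaLLMService(
    model=os.environ.get("OLLAMA_MODEL", "llama3.1"),  # hypothetical variable
    base_url=os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434/v1"),
)
```

No internet access is needed once the Ollama model has been pulled locally.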