Show HN: Open-source customizable AI voice dictation built on Pipecat
Posted by kstonekuan 22 hours ago
Tambourine is an open-source, fully customizable voice dictation system that gives you control over the STT/ASR models, the LLM formatting, and the prompts used to insert clean text into any app.
I have been building this on the side for a few weeks. The motivation was wanting a customizable version of Wispr Flow, one where I could fully control the models, formatting, and behavior of the system rather than relying on a black box.
Tambourine is built directly on top of Pipecat's modular voice agent framework. The back end is a local Python server that uses Pipecat to stitch STT and LLM services into a single pipeline. This modularity is what makes it easy to swap providers, experiment with different setups, and keep fine-grained control over the voice AI.
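For a sense of what such a pipeline looks like, here is a minimal sketch of a Pipecat server that chains STT into an LLM. The provider choices (Deepgram, OpenAI), the Daily transport standing in for the WebRTC connection, the prompt text, and the room URL are my assumptions for illustration, and the import paths vary across Pipecat versions; this is not Tambourine's actual server code.

    import asyncio

    from pipecat.pipeline.pipeline import Pipeline
    from pipecat.pipeline.runner import PipelineRunner
    from pipecat.pipeline.task import PipelineTask
    from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
    from pipecat.services.deepgram.stt import DeepgramSTTService
    from pipecat.services.openai.llm import OpenAILLMService
    from pipecat.transports.services.daily import DailyParams, DailyTransport

    async def main():
        # Transport carries audio from the desktop app. Daily is a stand-in
        # here for whatever WebRTC transport the project actually configures.
        transport = DailyTransport(
            "https://example.daily.co/dictation",  # hypothetical room URL
            None,
            "dictation-bot",
            DailyParams(audio_in_enabled=True),
        )

        stt = DeepgramSTTService(api_key="DEEPGRAM_API_KEY")  # streaming transcription
        llm = OpenAILLMService(api_key="OPENAI_API_KEY", model="gpt-4o-mini")

        # The system prompt is where formatting rules and the personal
        # dictionary would live; this one is illustrative.
        context = OpenAILLMContext(
            [{"role": "system",
              "content": "Clean up the dictated transcript: remove filler "
                         "words, add punctuation, and keep the speaker's "
                         "wording."}]
        )
        aggregator = llm.create_context_aggregator(context)

        pipeline = Pipeline([
            transport.input(),       # audio frames in over WebRTC
            stt,                     # audio -> transcript frames
            aggregator.user(),       # transcript -> LLM context
            llm,                     # context -> cleaned, formatted text
            aggregator.assistant(),  # a real server would also return text to the client
        ])

        await PipelineRunner().run(PipelineTask(pipeline))

    asyncio.run(main())

Swapping providers then comes down to constructing a different service object and placing it at the same position in the list.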
I shared an early version with friends and recently presented it at my local Claude Code meetup. The response was overwhelmingly positive, and I was encouraged to share it more widely.
The desktop app is built with Tauri. The front end is written in TypeScript, while the Tauri layer uses Rust for low-level system integration: registering global hotkeys, managing audio devices, and reliably typing text at the cursor on both Windows and macOS.
At a high level, Tambourine gives you a universal voice interface across your OS. You press a global hotkey, speak, and formatted text is typed directly at your cursor. It works across emails, documents, chat apps, code editors, and terminals.
Under the hood, audio is streamed from the TypeScript front end to the Python server via WebRTC. The server runs real-time transcription with a configurable STT provider, then passes the transcript through an LLM that removes filler words, adds punctuation, and applies custom formatting rules and a personal dictionary. STT and LLM providers, as well as prompts, can be switched without restarting the app.
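To make the formatting step concrete, here is a hedged sketch of how a cleanup prompt with custom rules and a personal dictionary might be assembled. The rule text, the dictionary structure, and the function name are illustrative assumptions, not the project's actual defaults.

    # Hypothetical prompt builder; the rules and dictionary format are
    # illustrative, not Tambourine's actual defaults.
    PERSONAL_DICTIONARY = {
        "pipecat": "Pipecat",  # spoken form -> preferred written form
        "wisper flow": "Wispr Flow",
    }

    def build_formatting_prompt(rules: list[str], dictionary: dict[str, str]) -> str:
        """Build the system prompt the LLM uses to clean a raw transcript."""
        rule_lines = "\n".join(f"- {rule}" for rule in rules)
        dictionary_lines = "\n".join(
            f'- "{spoken}" should be written as "{written}"'
            for spoken, written in dictionary.items()
        )
        return (
            "You clean up dictated text. Remove filler words (um, uh, you "
            "know), add punctuation and capitalization, and keep the "
            "speaker's wording.\n"
            f"Custom rules:\n{rule_lines}\n"
            f"Personal dictionary:\n{dictionary_lines}"
        )

    prompt = build_formatting_prompt(
        ["Write numbers under ten as words", "Use the Oxford comma"],
        PERSONAL_DICTIONARY,
    )

Since the prompt is just data, it can be rebuilt and handed to the pipeline whenever settings change, which is what makes switching prompts without restarting the app straightforward.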
The project is still under active development. I am working through edge cases and refining the UX, and there will likely be breaking changes, but most core functionality already works well and has become part of my daily workflow.
I would really appreciate feedback, especially from anyone interested in the future of voice as an interface.
Comments
Comment by grayhatter 17 hours ago
This is less voice dictation software and much more a shim to [popular LLM provider].
Comment by kstonekuan 13 hours ago
Setting up the WebRTC connection, getting voice dictation working seamlessly from anywhere, and typing the result into any app took a long time to build out, and that's why I want to share this as open source.
Comment by lrvick 17 hours ago
Comment by kstonekuan 13 hours ago
About Ollama in Pipecat: https://docs.pipecat.ai/server/services/llm/ollama
Also, any provider they support can be onboarded in a few lines of code.
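To make "a few lines of code" concrete, here is a sketch of swapping in a local model via the Ollama service from the linked docs; the model name and base URL are illustrative, and the import path may differ between Pipecat versions.

    # Swap the cloud LLM for a local Ollama model. Import path and defaults
    # follow the linked Pipecat docs but may differ between versions.
    from pipecat.services.ollama.llm import OLLamaLLMService

    llm = OLLamaLLMService(
        model="llama3.1",                      # any model pulled with `ollama pull`
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    )
    # The rest of the pipeline is unchanged: this service drops in where the
    # previous LLM service sat.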