PulseScribe – Open-source voice-to-text for macOS with local AI

Posted by fabszilla 8 hours ago


Comments

Comment by fabszilla 8 hours ago

I built PulseScribe because I wanted Wispr Flow without the subscription.

Key features:

- ~300 ms latency with Deepgram WebSocket streaming
- Local mode with MLX/Lightning on Apple Silicon (M1/M2/M3)
- Context-aware LLM refinement (detects email, chat, code; rough sketch below)
- Just fixed for macOS 26 (Tahoe) this week
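For the context-aware part, here's a toy sketch of the general idea: look up the frontmost app's bundle ID via PyObjC/AppKit and map it to a refinement prompt. The bundle IDs, prompts, and function names below are illustrative, not the actual code.

    # Toy sketch: guess the writing context from the frontmost app and
    # pick a matching cleanup prompt for the LLM refinement step.
    from AppKit import NSWorkspace

    CONTEXT_PROMPTS = {
        "email": "Rewrite this dictation as a well-punctuated email.",
        "chat": "Clean this up as a short, casual chat message.",
        "code": "Format this dictation as a code comment or commit message.",
    }

    BUNDLE_HINTS = {  # illustrative mapping only
        "com.apple.mail": "email",
        "com.tinyspeck.slackmacgap": "chat",
        "com.microsoft.VSCode": "code",
        "com.apple.dt.Xcode": "code",
    }

    def detect_context() -> str:
        app = NSWorkspace.sharedWorkspace().frontmostApplication()
        bundle_id = app.bundleIdentifier() if app else ""
        return BUNDLE_HINTS.get(bundle_id, "chat")

    def refinement_prompt(transcript: str) -> str:
        return f"{CONTEXT_PROMPTS[detect_context()]}\n\nTranscript: {transcript}"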

Technical stack: Python, PyObjC for native macOS UI, MLX for local inference.
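Local mode in a nutshell (heavily simplified; this uses the mlx-whisper package as a stand-in, so the real library, model, and API differ):

    # Simplified stand-in for local inference on Apple Silicon.
    import mlx_whisper

    result = mlx_whisper.transcribe(
        "recording.wav",  # example clip; the app feeds captured audio instead
        path_or_hf_repo="mlx-community/whisper-large-v3-mlx",  # example MLX model
    )
    print(result["text"])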

The tricky part was getting <500ms end-to-end latency. We stream audio over WebSocket while recording, so results appear immediately when you stop.
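Roughly what the streaming path looks like (a simplified sketch using the websockets and sounddevice packages as stand-ins, not the actual code):

    import asyncio, json, os

    import sounddevice as sd   # stand-in capture library
    import websockets          # stand-in WebSocket client

    DG_URL = ("wss://api.deepgram.com/v1/listen"
              "?encoding=linear16&sample_rate=16000&channels=1&interim_results=true")

    async def send_audio(ws):
        # Push ~50 ms PCM chunks upstream while recording, so transcription
        # runs concurrently with capture instead of after it.
        with sd.RawInputStream(samplerate=16000, channels=1, dtype="int16",
                               blocksize=800) as mic:
            while True:
                chunk, _ = await asyncio.to_thread(mic.read, 800)
                await ws.send(bytes(chunk))

    async def recv_transcripts(ws):
        # Deepgram sends interim and final results as JSON messages.
        async for msg in ws:
            data = json.loads(msg)
            alt = data.get("channel", {}).get("alternatives", [{}])[0]
            if alt.get("transcript"):
                tag = "final" if data.get("is_final") else "interim"
                print(f"{tag}: {alt['transcript']}")

    async def main():
        headers = {"Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}"}
        # note: the header kwarg is extra_headers on older websockets versions
        async with websockets.connect(DG_URL, additional_headers=headers) as ws:
            await asyncio.gather(send_audio(ws), recv_transcripts(ws))

    asyncio.run(main())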

GitHub: https://github.com/KLIEBHAN/pulsescribe
Website: https://pulsescribe.me

Comment by StackTopherFlow 8 hours ago

Impressive latency numbers! I’ll definitely try replacing superwhisper with this.