August 8, 2025

TL;DR
Run multiple independent audio input streams and receive multiple real-time transcript output streams with no additional memory consumption. Multi-stream is necessary when transcribing both system audio (e.g. Google Meet audio from remote participants) and the system microphone (e.g. Google Meet audio from local participants) in the same meeting.
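To make the multi-stream use case concrete, here is a minimal sketch of why two independent streams matter for one meeting: each input source (system audio vs. microphone) yields its own transcript stream, which can then be interleaved by timestamp into a single meeting transcript. All names and shapes below are illustrative, not part of the actual Argmax Local Server API.

```typescript
// Illustrative segment shape: each transcript segment carries the stream it
// came from ("system" = remote participants, "mic" = local participants).
interface TranscriptSegment {
  stream: "system" | "mic";
  start: number; // seconds from meeting start
  text: string;
}

// Merge any number of per-stream transcripts into one meeting transcript,
// ordered by start time. This is app-side logic, not server behavior.
function mergeTranscripts(...streams: TranscriptSegment[][]): TranscriptSegment[] {
  return streams.flat().sort((a, b) => a.start - b.start);
}

const systemTranscript: TranscriptSegment[] = [
  { stream: "system", start: 1.2, text: "Thanks for joining." },
];
const micTranscript: TranscriptSegment[] = [
  { stream: "mic", start: 0.4, text: "Hello everyone." },
];

const meeting = mergeTranscripts(systemTranscript, micTranscript);
```

Because the streams stay independent end to end, the app always knows which side of the meeting each segment came from, even before speaker diarization is available.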
Argmax Local Server's WebSocket API is compatible with Deepgram's. The demo video above shows an Electron app migrating from Deepgram to Argmax Local Server with ZERO changes to app code: simply switch the inference host URL from the remote host (api.deepgram.com) to the local host (localhost).
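A minimal sketch of what "switching the host" means in practice: the WebSocket URL is identical except for the host (and the scheme, since localhost is typically unencrypted). Deepgram's live-streaming endpoint path is `/v1/listen`; the local port `8080` below is a hypothetical placeholder, not a documented Argmax Local Server default.

```typescript
// Build a Deepgram-style live-transcription WebSocket URL for a given host.
// The path and query parameters are unchanged between remote and local hosts;
// only the host (and ws vs. wss scheme) differs.
function listenUrl(host: string, params: Record<string, string> = {}): string {
  const scheme = host.startsWith("localhost") ? "ws" : "wss";
  const query = new URLSearchParams(params).toString();
  return `${scheme}://${host}/v1/listen${query ? "?" + query : ""}`;
}

// Remote (Deepgram) endpoint:
const remote = listenUrl("api.deepgram.com", { model: "nova-2" });
// Local (Argmax Local Server) endpoint; port 8080 is illustrative:
const local = listenUrl("localhost:8080", { model: "nova-2" });
```

Because only the host string changes, the migration can be a one-line configuration edit rather than a code change.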
Before Argmax Local Server hit the market, many apps such as Granola tried on-device inference and decided against it: CPU and GPU resource contention with other apps led to slowdowns and user complaints.

In our mission to make on-device inference the obvious architectural choice for audio model infrastructure, we solved this problem. See 1:31 in the video above for details on how.
Real-time speaker diarization will be added to Argmax Local Server (and Argmax SDK) in early 2026. Get on the waitlist to be the first to hear when it ships! For now, we recommend SpeakerKit via Argmax SDK for prerecorded audio.