Deepgram confidence exposed
# support
r
Deepgram exposes a `words` list with confidence levels on partial transcript creation. Is this information exposed anywhere via a `conversation-update` event or something like that? Sample Deepgram response:

```json
{
  "type": "Results",
  "channel_index": [0, 1],
  "duration": 1.98,
  "start": 5.99,
  "is_final": true,
  "speech_final": true,
  "channel": {
    "alternatives": [
      {
        "transcript": "Tell me more about this.",
        "confidence": 0.99964225,
        "words": [
          { "word": "tell", "start": 6.0699997, "end": 6.3499994, "confidence": 0.99782443, "punctuated_word": "Tell" },
          { "word": "me", "start": 6.3499994, "end": 6.6299996, "confidence": 0.9998324, "punctuated_word": "me" },
          { "word": "more", "start": 6.6299996, "end": 6.79, "confidence": 0.9995466, "punctuated_word": "more" },
          { "word": "about", "start": 6.79, "end": 7.0299997, "confidence": 0.99984455, "punctuated_word": "about" },
          { "word": "this", "start": 7.0299997, "end": 7.2699995, "confidence": 0.99964225, "punctuated_word": "this" }
        ]
      }
    ]
  },
  "metadata": {
    "request_id": "52cc0efe-fa77-4aa7-b79c-0dda09de2f14",
    "model_info": { "name": "2-general-nova", "version": "2024-01-18.26916", "arch": "nova-2" },
    "model_uuid": "c0d1a568-ce81-4fea-97e7-bd45cb1fdf3c"
  },
  "from_finalize": false
}
```
v
By default, Vapi's custom transcriber integration with Deepgram does not expose the per-word confidence list in `conversation-update` or `transcript` events; it only sends the transcript text and channel. If you want to access the `words` array (with confidence levels), you would need to modify your custom transcriber server to include this data in the response sent back to Vapi or your own backend. Example:
```js
// In your custom transcriber, send the full Deepgram alternative:
ws.send(JSON.stringify({
  type: "transcriber-response",
  transcription: text,
  channel,
  words: deepgramAlternative.words // <-- add this field
}));
```
This way, you can capture and use the word-level confidence data as needed.

Source: [Custom transcriber documentation](https://docs.vapi.ai/customization/custom-transcriber)
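As a minimal sketch of that change (assuming your transcriber server already parses Deepgram's `Results` messages; `buildTranscriberResponse` is a hypothetical helper name, not part of Vapi's API):

```javascript
// Hypothetical helper: turn a raw Deepgram "Results" message into the
// transcriber-response payload, keeping the per-word confidence list.
function buildTranscriberResponse(deepgramMsg, channel) {
  const alt = deepgramMsg.channel.alternatives[0];
  return {
    type: "transcriber-response",
    transcription: alt.transcript,
    channel,
    words: alt.words, // [{ word, start, end, confidence, punctuated_word }]
  };
}
```

Your send call then becomes `ws.send(JSON.stringify(buildTranscriberResponse(msg, channel)))`, and whatever receives the response can read confidence per word.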
r
@Vapi Support Bot will the custom transcriber create more latency?
v
Using a custom transcriber (like Deepgram via your own server) can introduce a small amount of extra latency compared to built-in providers, since audio must be streamed to your server and then to Deepgram before the transcript comes back. As long as your server and network are fast and support real-time streaming, this added latency should be minimal and should not significantly impact responsiveness.

Tip: deploy your custom transcriber close to Vapi's infrastructure and Deepgram for best results.

Source: [Custom transcriber documentation](https://docs.vapi.ai/customization/custom-transcriber)
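If you want to measure that latency yourself, one rough sketch (assuming your server tracks `streamStartMs`, the wall-clock time the first audio byte was sent; this bookkeeping is not something Vapi provides) is to compare each result's audio timestamps against its wall-clock arrival time, since Deepgram's `start` and `duration` fields are in seconds of audio:

```javascript
// Rough end-to-end latency estimate for one Deepgram result:
// wall-clock arrival time minus the moment the covered audio ended.
// streamStartMs (when the first audio byte was sent) is assumed to be
// tracked by your own server.
function transcriptLatencyMs(result, streamStartMs, arrivalMs) {
  const audioEndMs = (result.start + result.duration) * 1000; // seconds -> ms
  return arrivalMs - (streamStartMs + audioEndMs);
}
```

For the sample above (`start: 5.99`, `duration: 1.98`), a result arriving 8.3 s after streaming began would imply roughly 330 ms of end-to-end latency.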
r
ok
v
Message marked as helpful by @rowdy! 🎉