Tuans2 — 04/06/2025, 7:34 AM

Vapi Support Bot — 04/06/2025, 7:35 AM
1. Configure serverMessages to include "transcript"
Update your assistant's serverMessages
array to include "transcript":
```json
{
  "serverMessages": [
    "transcript",
    "end-of-call-report"
  ]
}
```
According to the [Vapi documentation](https://docs.vapi.ai/api-reference/calls/create), this configuration will send real-time transcriptions to your server via webhooks.
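As a minimal sketch of assembling that payload in code (the `serverUrl`, assistant ID, and the exact field names under `assistantOverrides` are assumptions to be checked against the create-call reference linked above):

```python
import json

# Hypothetical webhook endpoint -- replace with your server's public URL.
SERVER_URL = "https://example.com/vapi/webhook"

def build_call_payload(assistant_id: str) -> dict:
    """Build a create-call payload that turns on transcript webhooks."""
    return {
        "assistantId": assistant_id,
        "assistantOverrides": {
            # Where Vapi should POST webhook events.
            "serverUrl": SERVER_URL,
            # Which event types to push to serverUrl.
            "serverMessages": ["transcript", "end-of-call-report"],
        },
    }

payload = build_call_payload("my-assistant-id")
print(json.dumps(payload, indent=2))
```

You would then POST this payload to the create-call endpoint with your Vapi API key in the `Authorization` header.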
2. Receive transcript data via webhooks
Your server will receive real-time transcripts in this format:
```json
{
  "message": {
    "type": "transcript",
    "transcript": "User's speech text",
    "role": "user",
    "transcriptType": "final"
  }
}
```
As mentioned in [Vapi's Discord channel](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1341440757189578782), this data includes the speaker's role and whether it's a partial or final transcript.
3. Use ClientMessageTranscript event (alternative approach)
According to [another Discord thread](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1260258706667343892), you can also use the ClientMessageTranscript
event and check whether transcriptType
is "final" to get the final transcript of both user and assistant. This can help filter out the repetitive partial transcripts.
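The filtering described above can be sketched as a small accumulator that keeps only final transcripts (the event shape is assumed to match the webhook payload shown earlier):

```python
class TranscriptCollector:
    """Collect only final transcripts, dropping the repetitive partials."""

    def __init__(self):
        self.lines = []

    def on_message(self, message: dict) -> None:
        if message.get("type") != "transcript":
            return
        if message.get("transcriptType") == "final":
            self.lines.append((message["role"], message["transcript"]))

    def conversation(self) -> str:
        return "\n".join(f"{role}: {text}" for role, text in self.lines)

collector = TranscriptCollector()
for event in [
    {"type": "transcript", "role": "user", "transcriptType": "partial", "transcript": "Hel"},
    {"type": "transcript", "role": "user", "transcriptType": "final", "transcript": "Hello"},
    {"type": "transcript", "role": "assistant", "transcriptType": "final", "transcript": "Hi! How can I help?"},
]:
    collector.on_message(event)
print(collector.conversation())  # user: Hello\nassistant: Hi! How can I help?
```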
4. Choose a transcription provider
Vapi supports various transcription providers with real-time capabilities:
- AssemblyAI: Offers sub-second latency for speech-to-text conversion
- Talkscriber: Provides real-time conversion of speech to text
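The transcription provider is selected in the assistant configuration; as a hedged sketch, it might look like the fragment below, where the exact provider key ("assembly-ai") is an assumption to be verified against the current API reference:

```json
{
  "transcriber": {
    "provider": "assembly-ai"
  }
}
```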
Kings Big — 04/06/2025, 8:27 AM