**Abhiram** (05/26/2025, 5:16 AM)
**Vapi Support Bot** (05/26/2025, 5:17 AM)
**Vapi Support Bot** (05/26/2025, 5:18 AM)
**Abhiram** (05/26/2025, 5:29 AM)
**Abhiram** (05/26/2025, 5:30 AM)
**Vapi Support Bot** (05/26/2025, 5:31 AM)
**Vapi Support Bot** (05/26/2025, 5:31 AM)
**Vapi Support Bot** (05/26/2025, 5:31 AM)
**Vapi Support Bot** (05/26/2025, 5:31 AM)
**Abhiram** (05/26/2025, 7:45 AM)
**Abhiram** (05/26/2025, 7:45 AM)
**Vapi Support Bot** (05/26/2025, 7:46 AM)
You can get live transcripts by connecting to the listenUrl WebSocket for each call and processing the audio stream with your own transcription service.
Here's a quick example using Node.js and Deepgram:
```js
const WebSocket = require('ws');
const ws = new WebSocket('wss://your-listen-url');
ws.on('message', (data, isBinary) => {
  if (isBinary) {
    // Forward PCM audio to your STT provider for live transcription
  }
});
```
Set up your own transcriber to process the audio and emit live transcript text as needed.
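To make the Deepgram half of the example concrete, here's a hedged sketch of forwarding those PCM frames to Deepgram's live-transcription WebSocket. The listenUrl placeholder, encoding, and sample rate are assumptions you'd match to the audio format your call actually streams:

```js
const WebSocket = require('ws');

// The call's listenUrl, returned by Vapi when the call starts.
const vapiWs = new WebSocket('wss://your-listen-url');

// Deepgram live-transcription socket; encoding and sample rate here are
// assumptions -- match them to the PCM format Vapi actually streams.
const dgWs = new WebSocket(
  'wss://api.deepgram.com/v1/listen?encoding=linear16&sample_rate=16000',
  { headers: { Authorization: 'Token YOUR_DEEPGRAM_API_KEY' } }
);

vapiWs.on('message', (data, isBinary) => {
  // Binary frames are raw PCM audio; forward them once Deepgram is ready.
  if (isBinary && dgWs.readyState === WebSocket.OPEN) dgWs.send(data);
});

dgWs.on('message', (msg) => {
  // Deepgram "Results" messages carry the transcript text.
  const result = JSON.parse(msg.toString());
  const text = result.channel?.alternatives?.[0]?.transcript;
  if (text) console.log('Live transcript:', text);
});
```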
[See Vapi docs for details](https://docs.vapi.ai/calls/call-features#call-listen-feature)
Sources:
- [Call Features – Call Listen Feature](https://docs.vapi.ai/calls/call-features#call-listen-feature)
- [Custom Transcriber Guide](https://docs.vapi.ai/customization/custom-transcriber)

**Vapi Support Bot** (05/26/2025, 7:46 AM)
You can also use the CustomTranscriber option in your assistant configuration (e.g., assistant.transcriber) and provide your server URL. Vapi will then send real-time audio to your server, allowing you to generate and access live transcripts during the call.
Example:
```json
{
  "assistant": {
    "transcriber": {
      "type": "CustomTranscriber",
      "server": { "url": "https://your-transcriber-server.com" }
    }
  }
}
```
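For illustration, a minimal custom transcriber server might look like the sketch below in Node.js. The message shapes (binary PCM in, a "transcriber-response" JSON out) are assumptions, so verify them against the Custom Transcriber Guide before relying on this:

```js
const { WebSocketServer } = require('ws');

// Vapi connects here (the URL configured above) and streams call audio.
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  // Send a transcript line back to Vapi. The "transcriber-response"
  // shape is an assumption -- confirm it in the Custom Transcriber Guide.
  const sendTranscript = (text) =>
    ws.send(JSON.stringify({ type: 'transcriber-response', transcription: text, channel: 'customer' }));

  ws.on('message', (data, isBinary) => {
    if (isBinary) {
      // Raw PCM call audio: feed this into your STT engine, then call
      // sendTranscript() whenever the engine emits text.
    } else {
      // Text frames carry call metadata as JSON.
      console.log('metadata:', JSON.parse(data.toString()));
    }
  });
});
```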
For more details, see the [Vapi changelog](https://docs.vapi.ai/changelog/2024/10/19).
Source:
- [Vapi Changelog – October 19, 2024](https://docs.vapi.ai/changelog/2024/10/19)

**Vapi Support Bot** (05/26/2025, 7:46 AM)
**Kings_big💫** (05/26/2025, 8:31 AM)
**Vapi Support Bot** (05/26/2025, 8:32 AM)
**Vapi Support Bot** (05/26/2025, 8:32 AM)
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
Test the integration to see if emotion fields are present in the transcript data you receive.
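One hedged way to run that test: fetch the finished call from the Vapi API and scan the payload for any emotion-like keys, since the docs don't say where such fields would live. This sketch assumes the standard GET https://api.vapi.ai/call/:id endpoint and Node 18+ for the global fetch:

```js
// VAPI_API_KEY is your private Vapi API key.
async function checkForEmotionFields(callId) {
  const res = await fetch(`https://api.vapi.ai/call/${callId}`, {
    headers: { Authorization: `Bearer ${process.env.VAPI_API_KEY}` },
  });
  const call = await res.json();

  // Walk the whole payload and collect any key that looks emotion-related.
  const hits = [];
  (function walk(node, path) {
    if (node && typeof node === 'object') {
      for (const [key, value] of Object.entries(node)) {
        if (/emotion/i.test(key)) hits.push(`${path}.${key}`);
        walk(value, `${path}.${key}`);
      }
    }
  })(call, 'call');

  console.log(hits.length ? `Possible emotion fields: ${hits.join(', ')}` : 'No emotion fields found');
}

checkForEmotionFields('YOUR_CALL_ID');
```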
[See Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber) | [See Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia)
Sources:
- [How Vapi Works – Emotion Detection](https://docs.vapi.ai/how-vapi-works)
- [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber)
- [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)

**Abhiram** (05/26/2025, 9:23 AM)
**Vapi Support Bot** (05/26/2025, 9:24 AM)
**Vapi Support Bot** (05/26/2025, 9:24 AM)
**Abhiram** (05/26/2025, 9:26 AM)
**Abhiram** (05/26/2025, 9:27 AM)
**Abhiram** (05/26/2025, 9:27 AM)
**Vapi Support Bot** (05/26/2025, 9:28 AM)
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
After setup, test your integration to see if emotion fields are included in the transcript output, as this is not guaranteed by Vapi's documentation.
[See Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber) | [See Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia)
Sources:
- [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber)
- [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)

**Vapi Support Bot** (05/26/2025, 9:28 AM)
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
After setup, test your integration to see if emotion fields are included in the transcript data you receive.
[See Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber) | [See Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia)
Sources:
- [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber)
- [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)

**Vapi Support Bot** (05/26/2025, 9:28 AM)
**Vapi Support Bot** (05/26/2025, 9:29 AM)
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
Test your integration to see if emotion fields appear in the transcript data after the call ends.
See: [Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber) | [Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia)
Sources:
- [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber)
- [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)
- [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)

**Vapi Support Bot** (05/26/2025, 9:29 AM)
```json
// Standard call analysis is available post-call, regardless of transcriber
{
  "call.analysis": {
    "summary": "...",
    "structuredData": { ... },
    "successEvaluation": { ... }
  }
}
```
[See Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
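For example, a minimal sketch for reading those analysis fields after a call ends (again assuming GET https://api.vapi.ai/call/:id and Node 18+ for fetch):

```js
// Fetch a finished call and log its post-call analysis.
async function getCallAnalysis(callId) {
  const res = await fetch(`https://api.vapi.ai/call/${callId}`, {
    headers: { Authorization: `Bearer ${process.env.VAPI_API_KEY}` },
  });
  const call = await res.json();
  const { summary, structuredData, successEvaluation } = call.analysis ?? {};
  console.log({ summary, structuredData, successEvaluation });
}
```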
Sources:
- [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
- [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber)
- [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)

**Vapi Support Bot** (05/26/2025, 9:29 AM)
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
Call analysis data (summary, structured data, success evaluation) is available after the call ends, but emotion fields are not documented as part of the output.
[See Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
Sources:
- [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
- [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber)
- [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)

**Abhiram** (05/26/2025, 9:59 AM)
**Vapi Support Bot** (05/26/2025, 10:01 AM)
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
After setup, test your integration to see if emotion fields appear in the transcript data you receive, as this is not officially documented by Vapi.
[Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber) | [Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia)
Sources:
- [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber)
- [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)
- [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)

**Vapi Support Bot** (05/26/2025, 10:02 AM)
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber",
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
After the call ends, you can access standard call analysis data, but emotion fields in transcripts are not officially guaranteed—test your integration to verify their presence.
See: [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber)
Sources:
- [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber)
- [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)
- [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)

**Abhiram** (05/26/2025, 10:07 AM)
**Abhiram** (05/26/2025, 10:07 AM)
**Vapi Support Bot** (05/26/2025, 10:07 AM)
**Vapi Support Bot** (05/26/2025, 10:08 AM)
**Vapi Support Bot** (05/26/2025, 10:08 AM)
**Vapi Support Bot** (05/26/2025, 10:08 AM)
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber",
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
For now, you may want to contact Vapi support directly or monitor the [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) for updates on emotion data availability.
Sources:
- [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber)
- [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)

**Kings_big💫** (05/26/2025, 10:18 AM)
**Vapi Support Bot** (05/26/2025, 10:19 AM)
**Vapi Support Bot** (05/26/2025, 10:19 AM)
**Abhiram** (05/26/2025, 3:46 PM)
**Vapi Support Bot** (05/26/2025, 3:46 PM)
**Praveen Nair** (05/26/2025, 11:17 PM)
**Abhiram** (05/27/2025, 5:10 AM)
**Abhiram** (05/27/2025, 5:10 AM)
**Abhiram** (05/27/2025, 5:11 AM)
**Vapi Support Bot** (05/27/2025, 5:11 AM)
**Vapi Support Bot** (05/27/2025, 5:12 AM)
**Vapi Support Bot** (05/27/2025, 5:12 AM)
```markdown
[Response Handling]
- If the user sounds frustrated or upset, respond with extra empathy and offer to help resolve their issue quickly.
- If the user sounds happy or satisfied, thank them warmly and ask if there's anything else you can assist with.
```
Add this logic to your assistant's prompt to simulate emotion-aware responses, even if emotion data is not directly available in the transcript.
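As an illustrative sketch, you could bake those rules into the assistant's system prompt when creating it via the API. The endpoint, model choice, and messages schema here are assumptions to adapt to your own setup:

```js
// Emotion-aware response rules from above, baked into the system prompt.
const responseHandling = [
  '[Response Handling]',
  '- If the user sounds frustrated or upset, respond with extra empathy and offer to help resolve their issue quickly.',
  "- If the user sounds happy or satisfied, thank them warmly and ask if there's anything else you can assist with.",
].join('\n');

// Assumed endpoint and schema: POST https://api.vapi.ai/assistant with a
// model.messages system prompt; provider/model values are illustrative.
async function createEmotionAwareAssistant() {
  const res = await fetch('https://api.vapi.ai/assistant', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.VAPI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: 'Emotion-aware support assistant',
      model: {
        provider: 'openai',
        model: 'gpt-4o',
        messages: [
          { role: 'system', content: `You are a friendly support assistant.\n\n${responseHandling}` },
        ],
      },
    }),
  });
  console.log(await res.json());
}
```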
For more details, see the [Vapi Prompting Guide](https://docs.vapi.ai/prompting-guide).
Source:
- [Vapi Prompting Guide](https://docs.vapi.ai/prompting-guide)

**Vapi Support Bot** (05/27/2025, 5:12 AM)
**Praveen Nair** (05/27/2025, 9:53 PM)