Vapi voice emotion detection
# support
v
Vapi's emotion detection model analyzes the user's voice in real time and uses this information to adjust the assistant's responses. However, the documentation does not confirm that the detected emotion is directly available as a field in the transcript or API response for you to access. If you need to retrieve the user's emotion explicitly, you may need to contact Vapi support or check for future updates to the API documentation for this feature. Source: - [How Vapi Works – Emotion Detection](https://docs.vapi.ai/how-vapi-works)
Vapi's emotion detection analyzes the user's speech in real time and uses this information to adjust the assistant's responses. However, based on the documentation, there is no indication that the detected emotion is directly exposed in the transcript or as a separate API field for you to access. The emotion data appears to be used internally by Vapi to influence conversation flow, not for external consumption by developers at this time. [See documentation](https://docs.vapi.ai/how-vapi-works) Source: - [How Vapi works](https://docs.vapi.ai/how-vapi-works)
a
Is there any chance of getting those out?
or else can you provide the feature?
v
Currently, Vapi does not expose the user's detected emotion as a field in transcripts or API responses for developers. The emotion data is used internally to adjust assistant behavior. If you need access to this feature, the best next step is to [contact Vapi support](mailto:support@vapi.ai) and request it or ask about future availability. [See documentation](https://docs.vapi.ai/how-vapi-works) Source: - [How Vapi Works – Emotion Detection](https://docs.vapi.ai/how-vapi-works)
Currently, **Vapi does not expose the user's detected emotion as a field in the transcript or API response**—the emotion data is used internally to adjust assistant behavior. If you need access to this information, the best next step is to contact Vapi support and request this feature or ask about their roadmap for exposing emotion data to developers. Contact support: - [Vapi Support](https://docs.vapi.ai/support) For updates, you can also monitor the [Vapi documentation](https://docs.vapi.ai/how-vapi-works) for any changes regarding emotion detection fields. Sources: - [How Vapi Works – Emotion Detection](https://docs.vapi.ai/how-vapi-works) - [Vapi Support](https://docs.vapi.ai/support)
Currently, **Vapi does not expose user emotion data directly in transcripts or API responses**—it is used internally to adjust assistant behavior. If you would like this feature, you can submit a feature request on the [Vapi public roadmap](https://roadmap.vapi.ai/feature-requests) or contact [Vapi support](mailto:support@vapi.ai) to express your interest. This helps the team prioritize new features based on user demand. Sources: - [How Vapi works – Emotion Detection](https://docs.vapi.ai/how-vapi-works) - [Vapi Support & Feature Requests](https://docs.vapi.ai/support) - [Vapi Public Roadmap](https://roadmap.vapi.ai/feature-requests)
Currently, **Vapi does not expose user emotion data in transcripts or API responses**—the emotion detection is used internally to adjust assistant behavior. If you would like this feature, you can submit a request on Vapi's [public roadmap board](https://roadmap.vapi.ai/feature-requests), where user feedback helps shape future updates. This is the best way to advocate for new features or enhancements. Sources: - [How Vapi Works – Emotion Detection](https://docs.vapi.ai/how-vapi-works) - [Vapi Support – Feature Requests and Bug Reports](https://docs.vapi.ai/support#feature-requests-and-bug-reports)
a
Is there any chance to get the live transcript?
Not after the call.
v
Yes, you can get live transcripts from Vapi calls by connecting to the `listenUrl` WebSocket for each call and processing the audio stream with your own transcription service.
Here's a quick example using Node.js and Deepgram:
```js
const WebSocket = require('ws');

// Connect to the call's listen WebSocket (replace with the listen URL Vapi returns for the call)
const ws = new WebSocket('wss://your-listen-url');

ws.on('message', (data, isBinary) => {
  if (isBinary) {
    // Forward PCM audio to your STT provider for live transcription
  }
});
```
Set up your own transcriber to process the audio and emit live transcript text as needed. [See Vapi docs for details](https://docs.vapi.ai/calls/call-features#call-listen-feature) Sources: - [Call Features – Call Listen Feature](https://docs.vapi.ai/calls/call-features#call-listen-feature) - [Custom Transcriber Guide](https://docs.vapi.ai/customization/custom-transcriber)
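To complete that setup, here is a minimal sketch of the second half: piping the binary audio frames into Deepgram's live transcription with the `@deepgram/sdk` v3 client. The audio encoding (16 kHz, 16-bit linear PCM) and the `nova-2` model choice are assumptions, not taken from the Vapi docs cited above, so confirm the stream format in the call-listen documentation before relying on it.
```js
const WebSocket = require('ws');
const { createClient, LiveTranscriptionEvents } = require('@deepgram/sdk');

// Deepgram live STT connection (model and encoding are assumptions — match them
// to the actual PCM format of the Vapi listen stream).
const deepgram = createClient(process.env.DEEPGRAM_API_KEY);
const dg = deepgram.listen.live({
  model: 'nova-2',
  encoding: 'linear16',
  sample_rate: 16000,
});

dg.on(LiveTranscriptionEvents.Transcript, (event) => {
  const text = event.channel.alternatives[0].transcript;
  if (text) console.log('live transcript:', text);
});

dg.on(LiveTranscriptionEvents.Open, () => {
  // Only start piping call audio once Deepgram is ready to receive it
  const vapiWs = new WebSocket('wss://your-listen-url'); // the call's listen URL
  vapiWs.on('message', (data, isBinary) => {
    if (isBinary) dg.send(data); // raw PCM chunks from the call
  });
});
```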
Yes, you can get a live transcript by integrating your own custom transcription service with Vapi. To do this, set the `CustomTranscriber` option in your assistant configuration (e.g., `assistant.transcriber`) and provide your server URL. Vapi will then send real-time audio to your server, allowing you to generate and access live transcripts during the call. Example:
```json
{
  "assistant": {
    "transcriber": {
      "type": "CustomTranscriber",
      "server": { "url": "https://your-transcriber-server.com" }
    }
  }
}
```
For more details, see the [Vapi changelog](https://docs.vapi.ai/changelog/2024/10/19). Source: - [Vapi Changelog – October 19, 2024](https://docs.vapi.ai/changelog/2024/10/19)
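For completeness, here is a rough sketch of what the server side of a custom transcriber could look like, using the `ws` package. The reply message shape (`transcriber-response`, `transcription`) is a hypothetical placeholder, not taken from the docs cited here; the exact schema Vapi expects is defined in the [Custom Transcriber Guide](https://docs.vapi.ai/customization/custom-transcriber), so check it before building on this.
```js
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  ws.on('message', (data, isBinary) => {
    if (!isBinary) return; // non-binary frames may carry metadata rather than audio
    // data = raw call audio from Vapi; run it through your own STT engine
    runYourSttEngine(data, (text) => {
      // Hypothetical reply shape — replace with the schema from the Custom Transcriber Guide
      ws.send(JSON.stringify({ type: 'transcriber-response', transcription: text }));
    });
  });
});

// Placeholder for whatever STT pipeline you run (Deepgram, Whisper, etc.)
function runYourSttEngine(audioChunk, onText) {
  /* ... */
}
```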
Currently, Vapi does not document a way to access live transcripts during an active call—transcripts are only available after the call ends via the dashboard or API. If you need real-time transcript access, the best next step is to submit a feature request on the [Vapi public roadmap](https://roadmap.vapi.ai/feature-requests) or contact [Vapi support](https://docs.vapi.ai/support) to express your interest in this capability. Sources: - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis) - [Vapi Support](https://docs.vapi.ai/support) - [Vapi Public Roadmap](https://roadmap.vapi.ai/feature-requests)
k
Yes, Vapi AI can detect user emotions through transcripts by using transcribers like Talkscriber or Gladia. Just ensure they're properly configured, and you can access the detected emotions in the call analysis data.
v
Vapi does not currently provide user emotion data as a field in transcripts or API responses. The emotion detection feature is used internally to adjust assistant behavior, but this information isn't accessible to developers through the transcript or API. If you need live transcripts during a call, you can set up your own custom transcription service by connecting to the call's audio stream, which allows you to generate and access live transcripts in real time. While some users mention that certain transcribers like Talkscriber or Gladia might enable emotion detection through transcripts, there is no official confirmation that this data is accessible within Vapi's call analysis. Sources: - [How Vapi Works – Emotion Detection](https://docs.vapi.ai/how-vapi-works) - [Call Features – Call Listen Feature](https://docs.vapi.ai/calls/call-features#call-listen-feature) - [Custom Transcriber Guide](https://docs.vapi.ai/customization/custom-transcriber) - [Vapi Changelog – October 19, 2024](https://docs.vapi.ai/changelog/2024/10/19)
To access live transcripts (and potentially emotion data) during a Vapi call, configure your assistant to use a supported transcriber like Talkscriber or Gladia, which may include emotion fields in their transcript output. Here's an example configuration for a custom transcriber:
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
Test the integration to see if emotion fields are present in the transcript data you receive. [See Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber) | [See Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia) Sources: - [How Vapi Works – Emotion Detection](https://docs.vapi.ai/how-vapi-works) - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)
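One low-effort way to run that test is to log every raw payload your server receives from Vapi and inspect the keys for anything emotion-related. The endpoint below is a generic sketch; the route path and port are placeholders, and the server/webhook wiring itself is assumed rather than taken from the docs cited in this thread.
```js
const express = require('express');

const app = express();
app.use(express.json());

// Dump every payload Vapi sends so you can check for emotion/sentiment keys
app.post('/vapi/webhook', (req, res) => {
  console.log(JSON.stringify(req.body, null, 2));
  res.sendStatus(200);
});

app.listen(3000, () => console.log('listening for Vapi payloads on :3000'));
```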
a
@Kings_big💫 where will the call analysis data be? At the end of the call?
v
Call analysis data in Vapi is available after the call ends. You can access the analysis (summary, structured data, and success evaluation) in the call instance dashboard or retrieve it via the API: - Dashboard: View the results attached to the call record. - API: Fetch analysis data programmatically after call completion. Live call analysis is not available; results are only generated post-call. [See Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis) Source: - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
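As a sketch of the API option, the snippet below fetches a completed call and reads its `analysis` object (`summary`, `structuredData`, `successEvaluation`). It assumes the `GET https://api.vapi.ai/call/:id` endpoint and a Node 18+ runtime with global `fetch`; the call ID and API key are placeholders, and the exact endpoint and field names should be verified against the current API reference.
```js
const VAPI_API_KEY = process.env.VAPI_API_KEY;

async function getCallAnalysis(callId) {
  const res = await fetch(`https://api.vapi.ai/call/${callId}`, {
    headers: { Authorization: `Bearer ${VAPI_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Vapi API error: ${res.status}`);
  const call = await res.json();
  // analysis is only populated after the call has ended
  return call.analysis; // { summary, structuredData, successEvaluation }
}

getCallAnalysis('YOUR_CALL_ID').then((analysis) => console.log(analysis));
```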
**Call analysis data in Vapi is available after the call ends**—you can view it in the call instance dashboard or retrieve it via the API. The analysis includes a summary, structured data, and a success evaluation, but does not include user emotion data. For more details, see the [Vapi Call Analysis documentation](https://docs.vapi.ai/assistants/call-analysis). Source: - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
a
Should I configure Talkscriber or Gladia as the transcriber?
@Kings_big💫 can you clarify this for me?
And will I get them at the end of the call?
v
Yes, you should configure either Talkscriber or Gladia as your transcriber to access live transcripts during Vapi calls. Use this configuration example:
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
After setup, test your integration to see if emotion fields are included in the transcript output, as this is not guaranteed by Vapi's documentation. [See Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber) | [See Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia) Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)
Yes, you should configure either Talkscriber or Gladia as your transcriber to access live transcripts and potentially emotion detection features. Use this configuration in your assistant setup:
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
After setup, test your integration to see if emotion fields are included in the transcript data you receive. [See Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber) | [See Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia) Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)
To access live transcripts (and potentially emotion data) in Vapi, you should configure your assistant to use either Talkscriber or Gladia as the transcriber. Both providers support emotion detection, but you will need to test your integration to confirm if emotion fields appear in the transcript output you receive ([Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber), [Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia)). Clarifying question: Are you looking to access emotion data in real-time during the call, or is post-call analysis sufficient for your use case? Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)
Yes, you should configure Talkscriber or Gladia as your transcriber if you want to access advanced transcript features (like emotion detection) in Vapi. Use this configuration example:
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
Test your integration to see if emotion fields appear in the transcript data after the call ends. See: [Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber) | [Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia) Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia) - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
You do not need to configure Talkscriber or Gladia as your transcriber to receive standard Vapi call analysis data (summary, structured data, and success evaluation)—this data is always available after the call ends via the dashboard or API. If you want to experiment with emotion detection in transcripts, you may try configuring Talkscriber or Gladia, but there is no official documentation confirming that emotion data will appear in Vapi's call analysis output. Example:
```json
// Standard call analysis is available post-call, regardless of transcriber
{
  "call.analysis": {
    "summary": "...",
    "structuredData": { ... },
    "successEvaluation": { ... }
  }
}
```
[See Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis) Sources: - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis) - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)
Yes, you need to configure Talkscriber or Gladia as your transcriber if you want to use their features, but Vapi's call analysis does not officially include user emotion data—even with these providers. To set a custom transcriber, use a configuration like:
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
Call analysis data (summary, structured data, success evaluation) is available after the call ends, but emotion fields are not documented as part of the output. [See Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis) Sources: - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis) - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia)
a
@Kings_big💫 I think emotion detection is not working well
v
To enable emotion detection features, configure your assistant to use either Talkscriber or Gladia as the transcriber. Use this example in your setup:
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber", // or "gladia"
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
After setup, test your integration to see if emotion fields appear in the transcript data you receive, as this is not officially documented by Vapi. [Talkscriber docs](https://docs.vapi.ai/providers/transcriber/talkscriber) | [Gladia docs](https://docs.vapi.ai/providers/transcriber/gladia) Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia) - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
To configure Talkscriber as your transcriber in Vapi, use the following setup in your assistant configuration:
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber",
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
After the call ends, you can access standard call analysis data, but emotion fields in transcripts are not officially guaranteed—test your integration to verify their presence. See: [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia) - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
a
Even using Talkscriber, I am unable to get the emotion results.
@Kings_big💫
v
It looks like you've set up Talkscriber as your transcriber, but you're still not seeing emotion detection results in your transcripts. Based on the information available, emotion fields are not officially guaranteed to appear in Vapi transcripts, even when using Talkscriber or Gladia. While these providers may support emotion detection, Vapi's documentation does not confirm that this data will always be included in the output. This means that not receiving emotion results is consistent with the current platform limitations. If you have any other questions or need help with a different aspect of your setup, feel free to ask! Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia) - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
Currently, Vapi does not officially guarantee that emotion detection results from Talkscriber will appear in your transcript or call analysis data—even if Talkscriber is configured as the transcriber. If you have already set up Talkscriber and are not seeing emotion fields, this is expected based on the available documentation. For further troubleshooting or to request this feature, consider reaching out to [Vapi Support](https://docs.vapi.ai/support) or monitoring the [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) for updates. Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
Thank you for sharing your experience. While Talkscriber advertises emotion detection, Vapi's documentation does not guarantee that emotion fields will appear in transcripts, even when using Talkscriber as the transcriber. This means your current results (no emotion data) are consistent with the official documentation. Clarifying question: Are you seeing any additional metadata or fields in your transcript output, or is it strictly plain text with no extra information at all? This will help determine if any emotion-related data is being passed through at all. Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Gladia documentation](https://docs.vapi.ai/providers/transcriber/gladia) - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
Currently, Vapi does not officially guarantee that emotion detection results from Talkscriber will appear in your transcript or call analysis output—even if Talkscriber is configured as your transcriber. If you have already set up Talkscriber as shown below and still do not see emotion fields, this is expected based on the documentation:
```json
{
  "assistant": {
    "transcriber": {
      "provider": "talkscriber",
      "apiKey": "YOUR_API_KEY"
    }
  }
}
```
For now, you may want to contact Vapi support directly or monitor the [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) for updates on emotion data availability. Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
k
Can you share your call ID?
v
Thank you for your feedback. Based on the current documentation, emotion detection results are not officially guaranteed to appear in Vapi transcripts, even when using Talkscriber as your transcriber. To help us understand your situation better: Are you seeing any additional metadata or fields in your transcript output, or is it strictly plain text with no extra information at all? This will help determine if any emotion-related data is being passed through at all. For further updates, you can also monitor the [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) or reach out to [Vapi Support](https://docs.vapi.ai/support). Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis) - [Vapi Support](https://docs.vapi.ai/support)
You're correct—emotion detection results are not currently guaranteed to appear in Vapi transcripts, even when using Talkscriber as your transcriber. If you're not seeing any emotion fields in your output, that's consistent with the current platform behavior. If you have any more details about your transcript output or other questions, feel free to share! Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
a
this is the call id 0020219b-9e60-4e28-9193-d7f8d92983d0
v
You're correct—emotion detection results are not currently guaranteed to appear in Vapi transcripts, even when using Talkscriber as your transcriber. If you're not seeing any emotion fields in your output, that's consistent with how the platform currently works. If you have any more details about your transcript output or other questions, feel free to share!
p
You can do it via prompting. Just tell the assistant how to behave when the tone is X or Y.
a
Yes.
I need the emotion results.
And I am using Talkscriber.
v
You're correct—emotion detection results are not currently included in Vapi transcripts or call analysis outputs, even when using Talkscriber as your transcriber. If you're not seeing any emotion fields in your output, that's consistent with how the platform currently works. If you want the assistant to respond differently based on tone, you can guide its behavior through prompting, such as specifying how it should react to different user tones.
Currently, Vapi does not expose emotion detection results from Talkscriber in transcripts or API responses—even if Talkscriber is configured as your transcriber. If you need explicit emotion results, the best next step is to monitor the [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) or contact [Vapi Support](https://docs.vapi.ai/support) for updates on this feature. No workaround is documented at this time for extracting emotion fields directly from Vapi transcripts. Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Vapi Support](https://docs.vapi.ai/support)
Here's a simple example of how you can use prompting to guide your assistant's behavior based on user tone:
```markdown
[Response Handling]
- If the user sounds frustrated or upset, respond with extra empathy and offer to help resolve their issue quickly.
- If the user sounds happy or satisfied, thank them warmly and ask if there's anything else you can assist with.
```
Add this logic to your assistant's prompt to simulate emotion-aware responses, even if emotion data is not directly available in the transcript. For more details, see the [Vapi Prompting Guide](https://docs.vapi.ai/prompting-guide). Source: - [Vapi Prompting Guide](https://docs.vapi.ai/prompting-guide)
Currently, Vapi does not provide emotion detection results from Talkscriber in transcripts or API outputs—even if you have Talkscriber configured. If you need emotion data, the best next step is to monitor the [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) for future updates or contact Vapi support for feature requests. No code or configuration change will expose emotion results at this time. Sources: - [Talkscriber documentation](https://docs.vapi.ai/providers/transcriber/talkscriber) - [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis)
p
For emotion results, write a prompt for the summary describing how the person expressed their emotions during the call and what emotions the customer was feeling during the call.
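A hedged sketch of that suggestion: set the assistant's `analysisPlan.summaryPrompt` (described in the [call analysis docs](https://docs.vapi.ai/assistants/call-analysis)) so the post-call summary describes the caller's emotions. The `PATCH /assistant/:id` request below and the exact field names are assumptions to verify against the current API reference; the assistant ID and API key are placeholders.
```js
const VAPI_API_KEY = process.env.VAPI_API_KEY;
const assistantId = 'YOUR_ASSISTANT_ID';

// Summary prompt that asks explicitly for the caller's emotions
const summaryPrompt =
  'Summarize the call. Describe how the customer expressed their emotions ' +
  '(tone, word choice, pacing) and list the emotions they showed during the call ' +
  '(e.g., frustrated, relieved, satisfied).';

async function setEmotionSummaryPrompt() {
  const res = await fetch(`https://api.vapi.ai/assistant/${assistantId}`, {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${VAPI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ analysisPlan: { summaryPrompt } }),
  });
  if (!res.ok) throw new Error(`Vapi API error: ${res.status}`);
  return res.json();
}

setEmotionSummaryPrompt().then(() => console.log('summary prompt updated'));
```
Note that the resulting summary still contains no structured emotion field; it only describes emotions in prose, which is the workaround being suggested here.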