Where can I find the content an assistant has gene...
# support
h
For example, if an assistant is interrupted while speaking, where can I find what it would have said if it hadn't been interrupted?
@User please answer
v
You can listen for the `message` event in the Vapi Web SDK to receive assistant messages (such as transcripts or function calls) as they are generated. While this may include the text the assistant is about to speak, it is not guaranteed to always provide the entire message before it is spoken or interrupted. Here's a code snippet:
```js
vapi.on("message", (message) => {
  console.log("Assistant generated message:", message);
});
```
This lets you capture assistant outputs in real time, but retrieving the exact unspoken remainder after an interruption is not explicitly documented.
Sources:
- [Vapi Web SDK documentation](https://docs.vapi.ai/sdk/web#message)
- [Assistant hooks: handling interruptions](https://docs.vapi.ai/assistants/assistant-hooks)
v
You can find the LLM output in the call logs.
h
i want it before it speaks, during the call
the transcript includes what it has already spoken; i want what it will speak beforehand, i.e. the response generated by the LLM before it is spoken
v
To capture what the assistant is about to say before it speaks during a call, listen for the `message` event in the Vapi Web SDK:
```js
vapi.on("message", (message) => {
  console.log("Assistant generated message:", message);
});
```
This lets you access the assistant's output in real time, but it may not always provide the full message before it is spoken or interrupted. [See documentation](https://docs.vapi.ai/sdk/web#message)
Sources:
- [Vapi Web SDK documentation](https://docs.vapi.ai/sdk/web#message)
- [Assistant hooks: handling interruptions](https://docs.vapi.ai/assistants/assistant-hooks)
h
have a look
v
You can listen for the `message` event in the Vapi Web SDK to capture assistant outputs as they are generated, which may include what the assistant is about to speak.
However, it is not guaranteed that you will always receive the entire message before it is spoken or interrupted, and retrieving the exact unspoken remainder after an interruption is not explicitly documented ([Vapi Web SDK documentation](https://docs.vapi.ai/sdk/web#message)). To help further, could you clarify whether you are using the Web SDK, or are you looking for a solution in a different environment (e.g., server-side or mobile)?
Sources:
- [Vapi Web SDK documentation](https://docs.vapi.ai/sdk/web#message)
- [Assistant hooks: handling interruptions](https://docs.vapi.ai/assistants/assistant-hooks)
v
Hey, you can access the model output through the `model-output` server event, which is sent to you as soon as the model generates output, right before it is sent to the voice provider. To enable this event, go to the Vapi dashboard, open the assistant of your choice, select Advanced, and scroll down to the Server Messages section. Select the `model-output` server message, fill in your server URL if you haven't already, and you will then receive it on your server. Let me know if you require further help.
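As a sketch of the receiving side, assuming the webhook body carries a `message` object with a `type` field (as Vapi server messages generally do; the exact field names, and the `model-output` type string, should be checked against the payloads your server actually receives):

```javascript
// Pull the model output out of a Vapi server-message webhook payload.
// Assumed shape: { message: { type: "model-output", output: ... } } --
// verify against real payloads before relying on these field names.
function extractModelOutput(body) {
  const msg = body && body.message;
  if (msg && msg.type === "model-output") {
    return msg.output;
  }
  return null; // not a model-output event (e.g. a transcript message)
}

// Hypothetical Express wiring (adjust the route to your setup):
// const express = require("express");
// const app = express();
// app.use(express.json());
// app.post("/vapi/webhook", (req, res) => {
//   const output = extractModelOutput(req.body);
//   if (output !== null) console.log("Model output:", output);
//   res.sendStatus(200);
// });
```

Keeping the extraction in a small pure function makes it easy to ignore the other server messages (transcripts, status updates) that arrive on the same URL.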
h
how can we configure model output to include structured output? as in the response that is already coming + whatever fields we want
v
To include structured data fields in your assistant's output, customize the `analysisPlan` with a `structuredDataPrompt` and a `structuredDataSchema`. For example:
```json
{
  "analysisPlan": {
    "structuredDataPrompt": "Extract the user's name and intent from the call.",
    "structuredDataSchema": {
      "type": "object",
      "properties": {
        "userName": { "type": "string" },
        "intent": { "type": "string" }
      },
      "required": ["userName", "intent"]
    }
  }
}
```
This will add your specified fields to the structured output, accessible via `call.analysis.structuredData` after the call analysis completes. [See full docs](https://docs.vapi.ai/assistants/call-analysis#structured-data-schema)
Source:
- [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis#structured-data-schema)
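As a sketch of reading those fields back out once analysis has run, assuming the call object returned by the API has the `analysis.structuredData` shape described above (the fetch below is hypothetical usage, not a tested endpoint call):

```javascript
// Read the structured fields out of a call object, defaulting to null
// if analysis has not completed or no structured data was produced.
function getStructuredData(call) {
  return (call && call.analysis && call.analysis.structuredData) || null;
}

// Hypothetical usage after the call ends and analysis completes:
// const res = await fetch(`https://api.vapi.ai/call/${callId}`, {
//   headers: { Authorization: `Bearer ${VAPI_API_KEY}` },
// });
// const structured = getStructuredData(await res.json());
// if (structured) console.log(structured.userName, structured.intent);
```

Note that this is post-call data: the structured fields only exist once analysis finishes, which matters for the real-time question below.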
h
nah, i am looking for this to be defined for model output, where one property vapi must fix, which it has to speak, and the other must be defined by developers according to our need.
v
Currently, Vapi lets you capture the assistant's generated message before it is spoken by listening to the `message` event in the Web SDK or by using the model output server event. However, there is no documented way to enforce a real-time model output format with one fixed property for speech and another fully customizable property for developer use.
Could you clarify whether your main goal is to access additional structured data alongside the speech output in real time during the call, or are you looking for this structure only in post-call analysis?
Sources:
- [Vapi Web SDK documentation](https://docs.vapi.ai/sdk/web#message)
- [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis#structured-data-schema)
h
main goal is to access additional structured data alongside the model output in real time during the call. for example, if i want it to know that the answer talks about india: `{"response": "india is a great country", "india": "true"}` where `response` will be spoken by the voice.
v
Currently, Vapi does not support enforcing a real-time output format with both a fixed speech property and additional developer-defined fields during the call. You can capture the assistant's output as it is generated using the `message` event, but customizing the structure as you described is not documented ([Vapi Web SDK documentation](https://docs.vapi.ai/sdk/web#message)). Would you like suggestions for possible workarounds, or are you specifically looking for official support for this feature?
Sources:
- [Vapi Web SDK documentation](https://docs.vapi.ai/sdk/web#message)
- [Vapi Call Analysis Documentation](https://docs.vapi.ai/assistants/call-analysis#structured-data-schema)
v
Hey, you cannot get this structured data, as it doesn't exist. This is not something that is configurable through turn-taking models or models in general, so you have to make it work on your side.
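One way to "make it work on your side", sketched under two assumptions: (1) you prompt the model to always reply with a JSON object like `{"response": "...", "india": "true"}`, and (2) you sit between the model and the voice pipeline (for example via a custom LLM endpoint), since otherwise the voice would read the raw JSON aloud. This is not an official Vapi feature, and the model may not always comply, so a plain-text fallback is included:

```javascript
// Workaround sketch: split a model reply that was prompted to be JSON
// such as {"response": "...", "india": "true"} into the text to speak
// and the extra developer-defined fields. Falls back to treating the
// whole output as the spoken response when parsing fails.
function splitStructuredReply(rawOutput) {
  try {
    const parsed = JSON.parse(rawOutput);
    if (parsed && typeof parsed.response === "string") {
      const { response, ...extraFields } = parsed;
      return { response, extraFields };
    }
  } catch (e) {
    // Not valid JSON; fall through to the plain-text fallback.
  }
  return { response: rawOutput, extraFields: {} };
}
```

The `response` string is what you forward for speech; `extraFields` carries whatever flags (like `india`) you asked the model to attach.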