Lucio
03/24/2025, 10:02 AM
```json
{
  "id": "chatcmpl-BEYW9n5na5tA6539KWpUH9DwkSRIx",
  "choices": [
    {
      "delta": {
        "content": "Hello",
        "function_call": null,
        "refusal": null,
        "role": null,
        "tool_calls": null
      },
      "finish_reason": null,
      "index": 0,
      "logprobs": null
    }
  ],
  "created": 1742808977,
  "model": "gpt-4o-mini-2024-07-18",
  "object": "chat.completion.chunk",
  "service_tier": "default",
  "system_fingerprint": "fp_b8bc95a0ac",
  "usage": null
}
```
As a next step, I would like to integrate VAPI with an AI agent built using LangGraph. The agent is hosted on LangGraph Platform.
The idea is that VAPI contacts my "/assistant/chat/completions" endpoint on my server. My server processes the request by interacting with the LangGraph agent using the LangGraph SDK.
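Roughly, a minimal sketch of the endpoint I have in mind (assuming FastAPI and the langgraph_sdk Python client; the deployment URL and assistant ID are placeholders, not a working integration):
```python
# Minimal sketch: forward VAPI's OpenAI-style request to a LangGraph
# Platform agent via the LangGraph SDK.
# "DEPLOYMENT_URL" and "agent" are placeholders for a real deployment.
from fastapi import FastAPI, Request
from langgraph_sdk import get_client

app = FastAPI()
client = get_client(url="DEPLOYMENT_URL")  # LangGraph Platform deployment URL

@app.post("/assistant/chat/completions")
async def chat_completions(request: Request):
    body = await request.json()  # OpenAI-style request coming from VAPI
    thread = await client.threads.create()
    # Run the hosted agent to completion with the incoming messages.
    result = await client.runs.wait(
        thread["thread_id"],
        "agent",  # placeholder assistant/graph ID
        input={"messages": body["messages"]},
    )
    return result  # <-- what format does VAPI expect here?
```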
It is unclear to me what format VAPI expects as a response. Do I need to reformat the LangGraph response to follow the OpenAI format, or is there a standard format that VAPI supports that can be applied to different types of integrations?
Vapi Support Bot
03/24/2025, 10:04 AM
I couldn't find specific documentation for the /assistant/chat/completions endpoint you mentioned, but I can help with what we know about VAPI's response format expectations.
Based on the information I found, VAPI is designed to be flexible and can integrate with various custom LLMs. When integrating a custom LLM with VAPI, the response needs to be formatted according to VAPI's expected structure.
From your example, you've already successfully integrated VAPI with a custom LLM using the OpenAI format. This is a good sign, as [VAPI documentation](https://docs.vapi.ai/server-url/events) indicates that VAPI fully supports OpenAI's function calling API.
When formatting responses for VAPI:
1. For chat completions, the response should follow the OpenAI chat-completion format and be sent back as a JSON object, as you've already done successfully. According to the [VAPI documentation](https://docs.vapi.ai/customization/custom-llm/using-your-server), the response from your custom LLM is received and parsed according to VAPI's expected structure.
2. For tool calls, VAPI expects a specific JSON format (see the handler sketch below):
```json
{
  "results": [
    {
      "toolCallId": "X",
      "result": "Y"
    }
  ]
}
```
as mentioned in the [VAPI documentation](https://docs.vapi.ai/tools/custom-tools).
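For example, a minimal handler returning that shape might look like this (a sketch only; the route and the path into the incoming payload are assumptions, so verify them against the custom-tools request schema):
```python
# Sketch of a custom-tool webhook returning VAPI's "results" shape.
# The payload path ("message" -> "toolCalls") is an assumption; check
# the custom-tools docs for the exact request schema.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/tools/lookup")  # hypothetical tool route
async def handle_tool_call(request: Request):
    payload = await request.json()
    tool_call = payload["message"]["toolCalls"][0]  # assumed location
    answer = "Y"  # whatever your tool actually computes
    # VAPI matches each result to its call via toolCallId.
    return {"results": [{"toolCallId": tool_call["id"], "result": answer}]}
```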
Since you've already successfully integrated with the OpenAI format, and VAPI supports OpenAI's function calling API, reformatting the LangGraph response to follow the OpenAI format (as you showed in your example) is likely the safest approach; a sketch of that mapping follows below.
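For illustration, a minimal sketch of that mapping, assuming your server has already extracted the agent's reply text from the LangGraph run result (the function names here are illustrative, not a documented VAPI or LangGraph API):
```python
import json
import time

def to_openai_chunk(text: str, model: str = "langgraph-agent") -> dict:
    # Wrap agent text in an OpenAI chat.completion.chunk envelope,
    # mirroring the example chunk at the top of this thread.
    return {
        "id": "chatcmpl-langgraph",  # any unique ID works
        "object": "chat.completion.chunk",
        "created": int(time.time()),
        "model": model,
        "choices": [{
            "index": 0,
            "delta": {"content": text, "role": "assistant"},
            "finish_reason": None,
        }],
    }

def sse_line(chunk: dict) -> str:
    # Server-sent-events framing used by OpenAI-style streaming endpoints.
    return f"data: {json.dumps(chunk)}\n\n"
```
With FastAPI, these lines could be streamed back through a StreamingResponse with media type text/event-stream, ending with a final data: [DONE] marker as in OpenAI's streaming protocol.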
Would you like me to help with any specific aspects of the response formatting or the integration process?
Sources:
- [VAPI documentation](https://docs.vapi.ai/server-url/events)
- [VAPI documentation](https://docs.vapi.ai/customization/custom-llm/using-your-server)
- [VAPI documentation](https://docs.vapi.ai/tools/custom-tools)
Vapi Support Bot
03/24/2025, 10:04 AM
It's important to ensure that your response format strictly adheres to VAPI's expected schema, as [small deviations in key names or data structure can cause parsing issues](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1336440478727667844).
One note of caution: I found information suggesting that [the custom LLM server approach might not work straightforwardly for connecting Langchain/LangGraph](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1341715498793963541), so you may need to explore alternative integration methods.
Is there anything specific about the response format you'd like me to clarify further?