DomnulZeu
03/31/2025, 10:56 AM
DomnulZeu
03/31/2025, 10:57 AM
Vapi Support Bot
03/31/2025, 10:59 AM
- There have been reported issues using `meta llama 3.3 70B` with the Vapi dashboard and Together AI, specifically with the model input field for custom models (see the config sketch after this list) [Discord conversation](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1342659054115356803).
- While several `llama` models are listed as available within the Vapi ecosystem (including `llama-3.3-70b-versatile`), there have been some issues with specific `llama` models [Discord conversation](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1334534504324403230).
- Vapi removed certain `llama3` models with "tool-use-preview" in their names from `GroqModel.model`, suggesting potential issues with tool use in those specific models [Vapi changelog](https://docs.vapi.ai/changelog/2024/12/30).
- There are known issues with Vapi-defined tools and the chat completion endpoint, where "Vapi built tools get removed from the chat completions endpoint" [Discord conversation](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1339871158975336498).
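As a rough illustration of the custom-model setup the first bullet refers to, the sketch below shows how an assistant's model block might point at a Together AI-hosted Llama 3.3 70B through an OpenAI-compatible endpoint. The provider value, `url`, and field names are assumptions based on common Vapi custom-LLM configurations, not a verified snippet from this thread; check them against the current Vapi docs.

```json
{
  "model": {
    "provider": "custom-llm",
    "url": "https://api.together.xyz/v1",
    "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
    "temperature": 0.4,
    "messages": [
      { "role": "system", "content": "You are a scheduling assistant." }
    ]
  }
}
```

If tool calls misbehave with a particular open-weight model, this is also the block where you would swap the provider/model pair, which is where the workaround later in this thread ends up.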
## API Endpoint/Server Setup Issues
Your API endpoint or server setup could be causing glitches:
Vapi Support Bot
03/31/2025, 10:59 AM
1. **Check Async Configuration**:
- Check whether the call is configured as `synchronous` instead of `async` [Discord conversation](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1342315282198298729).
- Set the `async` parameter to `true` within the configuration of the API call (see the sketch after this list) [Discord conversation](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1333880245681393756).
2. **Examine Logs and Errors**:
- Check Vapi logs to identify any errors in the automation flow [Vapi documentation](https://docs.vapi.ai/tools/GHL).
- Access logs through the Vapi Dashboard by navigating to the Logs or Reporting section [Discord conversation](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1334280135683084328).
3. **Ensure Correct Response Format**:
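To make the async point in item 1 concrete, here is a hedged sketch of a function tool definition with the async flag set. The field names (`async`, `function`, `server.url`) follow common Vapi tool configurations but are assumptions to verify against the API reference, and the webhook URL is a placeholder.

```json
{
  "type": "function",
  "async": true,
  "function": {
    "name": "checkMeetingSlots",
    "description": "Check available meeting slots for a project in a given time window.",
    "parameters": {
      "type": "object",
      "properties": {
        "projectId": { "type": "string" },
        "startTime": { "type": "string", "format": "date-time" },
        "endTime": { "type": "string", "format": "date-time" }
      },
      "required": ["projectId", "startTime", "endTime"]
    }
  },
  "server": {
    "url": "https://example.com/webhooks/check-meeting-slots"
  }
}
```

With `async` set to `true`, the call is handled without blocking the conversation on the server's response, which is the behavior the first item asks you to check for.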
Vapi Support Bot
03/31/2025, 10:59 AM
Vapi Support Bot
03/31/2025, 10:59 AM
DomnulZeu
03/31/2025, 11:55 AM
DomnulZeu
03/31/2025, 11:55 AM
DomnulZeu
03/31/2025, 11:55 AM
Kyle
03/31/2025, 2:44 PM
DomnulZeu
03/31/2025, 3:24 PM
DomnulZeu
03/31/2025, 7:02 PM
Kyle
03/31/2025, 9:47 PM"message": "Failed to generate tool_calls. Please adjust your prompt. See 'failed_generation' for more details. Exception: No valid function calls generated",
"type": "invalid_request_error",
"param": "",
"code": "tool_use_failed",
"failed_generation": "checkMeetingSlots(projectId=67c00337a34d504e2593ccbe, startTime=2025-04-01T11:00:00+03:00, endTime=2025-04-01T11:30:00+03:00)"
},
"code": "tool_use_failed",
"param": "",
"type": "invalid_request_error",
"message": "Failed to generate tool_calls. Please adjust your prompt. See 'failed_generation' for more details. Exception: No valid function calls generated",
The most immediate solution is to use a different model provider, such as OpenAI or Anthropic, that properly supports the expected JSON format for function/tool calls.
You can also report this issue to Cerebras, as it appears to be a limitation or bug in their implementation of function calling in the Llama-3.3-70b model.
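As a hedged illustration of that provider switch, the assistant's model block could simply be pointed at OpenAI (or Anthropic) instead; the field names and model name below are assumptions to confirm in the Vapi dashboard or API reference.

```json
{
  "model": {
    "provider": "openai",
    "model": "gpt-4o",
    "temperature": 0.4
  }
}
```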
Kyle
03/31/2025, 9:47 PM
mukul
04/01/2025, 4:23 AM
mukul
04/01/2025, 4:23 AM
mukul
04/01/2025, 6:45 AM
DomnulZeu
04/01/2025, 5:16 PM
Kyle
04/01/2025, 10:09 PM
DomnulZeu
04/01/2025, 10:38 PM
DomnulZeu
04/01/2025, 10:39 PM
DomnulZeu
04/01/2025, 10:39 PM
DomnulZeu
04/01/2025, 10:39 PM
DomnulZeu
04/01/2025, 10:39 PM
DomnulZeu
04/02/2025, 8:53 AM
DomnulZeu
04/02/2025, 9:23 AM
DomnulZeu
04/03/2025, 5:49 AM
DomnulZeu
04/03/2025, 12:42 PM
DomnulZeu
04/03/2025, 12:42 PM
DomnulZeu
04/04/2025, 11:04 AM
Kyle
04/06/2025, 12:40 PM
DomnulZeu
04/07/2025, 8:30 AM
Kyle
04/08/2025, 8:42 PM
Kyle
04/08/2025, 9:10 PM