PrinceUstad
03/23/2025, 9:39 PM

Vapi Support Bot
03/23/2025, 9:42 PM

You can create an assistant by sending a POST request to the `/assistant` endpoint:
```bash
curl https://api.vapi.ai/assistant \
  --request POST \
  --header 'Authorization: Bearer <your-token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "My Assistant",
    "model": {
      "model": "gpt-4o",
      "provider": "openai"
    },
    "voice": {
      "voiceId": "emma",
      "provider": "azure"
    },
    "firstMessage": "Hi, how can I help you today?",
    "firstMessageMode": "assistant-speaks-first"
  }'
```
### Connecting Pre-existing Tools to Assistants
To connect a pre-existing tool to your assistant, you need to update the assistant's configuration using a PATCH request:
```bash
curl -X PATCH https://api.vapi.ai/assistant/your-assistant-id \
  -H "Authorization: Bearer your-auth-token" \
  -H "Content-Type: application/json" \
  -d '{
    "model": {
      "provider": "openai",
      "model": "gpt-4o",
      "toolIds": [
        "your-tool-id-here"
      ]
    }
  }'
```
Make sure to include all previously used properties in the model object, such as the system prompt.
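Since the PATCH replaces the whole `model` object, one way to avoid dropping properties like the system prompt is to fetch the current assistant first and merge. A minimal sketch (the assistant ID, token, and tool ID are placeholders, and `withToolIds` is a hypothetical helper, not part of the Vapi API):

```javascript
// Sketch: PATCH replaces the entire `model` object, so merge the existing
// config with the new toolIds instead of sending toolIds alone.
function withToolIds(existingModel, toolIds) {
  // The spread keeps previously set properties (systemPrompt, temperature, ...)
  return { ...existingModel, toolIds };
}

// Usage against the API (placeholder ID and token):
//   const url = "https://api.vapi.ai/assistant/your-assistant-id";
//   const headers = { "Authorization": "Bearer your-auth-token",
//                     "Content-Type": "application/json" };
//   const current = await (await fetch(url, { headers })).json();
//   await fetch(url, {
//     method: "PATCH",
//     headers,
//     body: JSON.stringify({
//       model: withToolIds(current.model, ["your-tool-id-here"])
//     })
//   });
```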
### Configuring LLM Provider and Model
In your assistant configuration, specify the LLM provider and model in the `model` object:
```json
"model": {
  "provider": "openai",
  "model": "gpt-4o",
  "systemPrompt": "Your system prompt here"
}
```
You can use providers like "openai", "custom-llm", or other OpenAI-compatible endpoints. For custom LLMs, you'd include a URL:
```json
"model": {
  "provider": "custom-llm",
  "model": "gpt-4o",
  "url": "https://your-custom-llm-url.com"
}
```

Vapi Support Bot
03/23/2025, 9:42 PM

### Configuring Voice Provider for ElevenLabs
Since you're using ElevenLabs, here's how to configure the voice settings:
```json
"voice": {
  "provider": "11labs",
  "voiceId": "your-elevenlabs-voice-id",
  "model": "eleven_turbo_v2",
  "enableSsmlParsing": true,
  "inputReformattingEnabled": false,
  "inputPreprocessingEnabled": true
}
```
If you encounter a "couldn't find 11labs voice" error, you'll need to add the voice to the Vapi dashboard using your ElevenLabs API key at https://dashboard.vapi.ai/library/voice.
### Configuring Server URL
To configure the server URL, include a `server` object in your request:
```json
"server": {
  "url": "https://your-server-url.com",
  "secret": "your-secret-key",
  "timeoutSeconds": 20,
  "headers": {
    "custom-header": "header-value"
  }
}
```
You can set this at the assistant level (which overrides account-level settings) using a PATCH request to `https://api.vapi.ai/assistant/{assistant_id}`.
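As a sketch of that assistant-level override (the URL, secret, and assistant ID below are placeholders, and `buildServerPatch` is a hypothetical helper, not a Vapi API function):

```javascript
// Sketch: build the PATCH body that sets the server config for one
// assistant, overriding any account-level server settings.
function buildServerPatch(url, secret) {
  return JSON.stringify({
    server: {
      url,                // where Vapi sends webhook events
      secret,             // sent back so your server can verify the caller
      timeoutSeconds: 20  // how long Vapi waits for your server to reply
    }
  });
}

// Usage (placeholder ID and token):
//   await fetch("https://api.vapi.ai/assistant/your-assistant-id", {
//     method: "PATCH",
//     headers: { "Authorization": "Bearer your-auth-token",
//                "Content-Type": "application/json" },
//     body: buildServerPatch("https://your-server-url.com", "your-secret-key")
//   });
```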
### Configuring Available Settings from Your Images
For the settings I have information about:
1. **Stability**: Set to 0.5 in your voice configuration:
```json
"voice": {
  "provider": "11labs",
  "voiceId": "your-voice-id",
  "stability": 0.5
}
```
2. **Style Exaggeration**: A higher style value introduces more humanity but may lead to less predictable behavior:
```json
"voice": {
  "provider": "11labs",
  "voiceId": "your-voice-id",
  "style": 0
}
```
3. **Idle Messages**: Configure idle messages and timeout:
```json
"idleMessages": ["Your idle message here"],
"idleTimeoutSeconds": 5,
"maxIdleMessages": 1
```
Is there anything specific about these configurations you'd like me to explain in more detail?
Sources:
- [How to create assistants using the Vapi API](https://docs.vapi.ai/api-reference/assistants/create)

Shubham Bajaj
03/24/2025, 10:27 PM

To create an assistant, send a POST request to the `/assistant` endpoint with the appropriate configuration. Here's how you can do it:
```javascript
// Example code for creating an assistant
const response = await fetch("https://api.vapi.ai/assistant", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${YOUR_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    // Your assistant configuration goes here (see details below)
  })
});
const assistant = await response.json();
```
Shubham Bajaj
03/24/2025, 10:28 PM

### 1. LLM Provider and Model
```javascript
model: {
  provider: "openai", // Options: openai, anthropic, google, etc.
  model: "gpt-4o", // The specific model you want to use
  temperature: 0.5, // Controls randomness (0-2)
  // Optional: messages for system prompts
  messages: [
    {
      content: "Your system prompt here",
      role: "system",
    }
  ]
}
```
### 2. Voice Provider, Voice ID, and Model
```javascript
voice: {
  provider: "11labs", // Vapi's provider ID for ElevenLabs
  voiceId: "your_eleven_labs_voice_id",
  // Additional voice settings if needed
}
```
### 3. Transcriber Configuration
```javascript
transcriber: {
  provider: "deepgram", // Options: deepgram, assembly_ai, etc.
  model: "nova-3", // Specific model for transcription
  endpointing: 300, // Silence detection in milliseconds
  // Other transcriber settings
}
```
### 4. Server URL
```javascript
server: {
  url: "https://your-server-url.com/webhook",
  secret: "your_server_secret", // Optional: Used for authentication
  timeoutSeconds: 20 // Optional: Default is 20
}
```
### 5. Starting/Ending Messages
```javascript
name: "Your Assistant Name",
firstMessage: "Hello, how can I help you today?",
voicemailMessage: "Please leave a message.",
endCallMessage: "Thank you for calling. Goodbye."
```
Shubham Bajaj
03/24/2025, 10:28 PM

To connect pre-existing tools, use the `toolIds` parameter in the assistant's model configuration. This allows you to reference tools that you've already created.

First, create your tools using the `/tool` endpoint:
```javascript
// Example of creating a tool
const toolResponse = await fetch("https://api.vapi.ai/tool", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${YOUR_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    type: "function", // or other tool types
    function: {
      name: "your_function_name",
      description: "Description of what this function does",
      parameters: {
        type: "object",
        properties: {
          // Your function parameters
        },
        required: ["param1", "param2"]
      }
    },
    server: {
      url: "https://your-api-endpoint.com/function",
      // Optional configuration
    }
  })
});
const tool = await toolResponse.json();
const toolId = tool.id; // Save this ID to use in your assistant
```
Then, when creating or updating your assistant, reference these tool IDs:
```javascript
model: {
  provider: "openai",
  model: "gpt-4o",
  // Other model settings
  // Connect pre-existing tools using their IDs
  toolIds: ["tool-id-1", "tool-id-2", "tool-id-3"],
  // You can also define inline tools if needed
  tools: [
    {
      type: "function",
      function: {
        // Define inline tool
      }
    }
  ]
}
```
Shubham Bajaj
03/24/2025, 10:28 PM

Putting it all together, here's a complete example:
```javascript
const assistantConfig = {
  name: "Customer Support Assistant",
  model: {
    provider: "openai",
    model: "gpt-4o",
    temperature: 0.7,
    messages: [
      {
        content: "You are a helpful customer support assistant.",
        role: "system"
      }
    ],
    // Connect pre-existing tools
    toolIds: ["your-previously-created-tool-id"]
  },
  voice: {
    provider: "11labs",
    voiceId: "your-voice-id"
  },
  transcriber: {
    provider: "deepgram",
    model: "nova-3",
    endpointing: 300
  },
  server: {
    url: "https://your-webhook-server.com/api",
    secret: "your-secret-key"
  },
  firstMessage: "Hello, I'm your virtual assistant. How can I help you today?",
  voicemailMessage: "Sorry I missed you. Please leave a message after the tone.",
  endCallMessage: "Thank you for contacting us. Have a great day!"
};

// Create the assistant
const response = await fetch("https://api.vapi.ai/assistant", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${YOUR_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify(assistantConfig)
});
const assistant = await response.json();
console.log("Created assistant:", assistant);
```
## Troubleshooting Tips
If you're having issues with creating assistants or connecting tools:
1. Make sure your authorization token is correct and has the necessary permissions
2. Verify that any tool IDs you're referencing actually exist in your account
3. Check that your server URLs are accessible and properly configured
4. Ensure all required fields in the assistant configuration are provided
5. Review the error messages in the API response for specific guidance
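For tip 5, a small wrapper that surfaces the API's error body can save a lot of guesswork. A sketch, assuming error responses carry a useful text body (`createAssistant` is a hypothetical helper, not part of any Vapi SDK):

```javascript
// Sketch: throw with the API's own error detail instead of failing silently.
async function createAssistant(config, apiKey) {
  const response = await fetch("https://api.vapi.ai/assistant", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify(config)
  });
  if (!response.ok) {
    // The body usually names the invalid field or missing tool ID.
    const detail = await response.text();
    throw new Error(`Vapi API ${response.status}: ${detail}`);
  }
  return response.json();
}
```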
## Next Steps
To better understand what might be causing your specific issues, I'd recommend:
1. Try creating a minimal assistant first without tools to ensure the basic API call works
2. Add tools one by one to identify if a specific tool is causing issues
3. Check your server logs to see if there are any connection or authentication issues
4. Verify that your tool configurations are valid and properly formatted
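For step 1, a minimal configuration to sanity-check the basic API call might look like this (every value below is a placeholder):

```javascript
// Sketch: the smallest useful assistant config, with no tools attached.
// If this POST succeeds but your full config fails, the problem is in
// what you added, not in auth or the endpoint.
const minimalConfig = {
  name: "Minimal Test Assistant",
  model: {
    provider: "openai",
    model: "gpt-4o"
  },
  firstMessage: "Hello! This is a test."
};

// await fetch("https://api.vapi.ai/assistant", {
//   method: "POST",
//   headers: { "Authorization": `Bearer ${YOUR_API_KEY}`,
//              "Content-Type": "application/json" },
//   body: JSON.stringify(minimalConfig)
// });
```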
Would you like a more detailed example for any specific aspect of creating assistants or connecting tools?