Issue while connecting tools with the assistant us...
# support
p
Hi guys, I'm trying to create tools and assistants using the API. I was able to build the tools via the API, but I'm running into issues when creating assistants, and I also can't figure out how to connect the created assistant to a pre-existing tool. When creating an assistant via the API, I need the flexibility to set the following parameters:
- LLM provider and model
- Voice provider, voice ID, and model (we're using ElevenLabs for voice)
- Server URL
- All the settings shown in the attached pictures; in the API body we need to be able to change every option I marked
- And most importantly, how to connect a pre-existing tool to the newly created assistant

https://cdn.discordapp.com/attachments/1353483316958859284/1353483317239873676/image.png?ex=67e1d111&is=67e07f91&hm=c1a388d44e3c76f02fc6698e3fbbe9478956db78f100f2ff2e35d083c240f178&
https://cdn.discordapp.com/attachments/1353483316958859284/1353483317558771814/image.png?ex=67e1d111&is=67e07f91&hm=6ee203ae9ed0371b08a7188ad917b9dd3a75a83b82a6facb60547d0011ed7fab&
v
Hi there! 👋 I notice you're having some issues with creating assistants and connecting tools using the Vapi API. I can help with most of your questions, though I don't have complete information about all the settings in your attached images. Let me walk you through creating assistants, connecting tools, and configuring the various parameters you mentioned.

### Creating Assistants Using the Vapi API

To create an assistant, send a POST request to the `/assistant` endpoint:
```bash
curl https://api.vapi.ai/assistant \
  --request POST \
  --header 'Authorization: Bearer <your-token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "My Assistant",
    "model": {
      "model": "gpt-4o",
      "provider": "openai"
    },
    "voice": {
      "voiceId": "emma",
      "provider": "azure"
    },
    "firstMessage": "Hi, how can I help you today?",
    "firstMessageMode": "assistant-speaks-first"
  }'
```
### Connecting Pre-existing Tools to Assistants

To connect a pre-existing tool to your assistant, update the assistant's configuration with a PATCH request:
```bash
curl -X PATCH https://api.vapi.ai/assistant/your-assistant-id \
     -H "Authorization: Bearer your-auth-token" \
     -H "Content-Type: application/json" \
     -d '{
       "model": {
         "provider": "openai",
         "model": "gpt-4o",
         "toolIds": [
           "your-tool-id-here"
         ]
       }
     }'
```
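Since the PATCH overwrites the `model` object, a safe pattern is to fetch the current assistant, merge the new tool IDs into the existing model, and send the merged object back. Here's a minimal sketch in JavaScript (the merge helper is mine; the endpoints match the curl calls above, and error handling is omitted):

```javascript
// Merge new tool IDs into an assistant's existing model config,
// preserving properties like the system prompt. Pure helper so it
// can be tested and reused for any PATCH payload.
function mergeToolIds(model, toolIds) {
  const existing = model.toolIds || [];
  // De-duplicate while keeping the original order
  const merged = [...new Set([...existing, ...toolIds])];
  return { ...model, toolIds: merged };
}

// Sketch: read-modify-write against the /assistant endpoint used above
async function attachTools(assistantId, toolIds, token) {
  const base = "https://api.vapi.ai/assistant";
  const headers = {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
  const assistant = await (await fetch(`${base}/${assistantId}`, { headers })).json();
  const body = JSON.stringify({ model: mergeToolIds(assistant.model, toolIds) });
  const res = await fetch(`${base}/${assistantId}`, { method: "PATCH", headers, body });
  return res.json();
}
```

This way you never accidentally drop the system prompt or other model settings when attaching a tool.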
Make sure to include all previously used properties in the `model` object, such as the system prompt, since the PATCH replaces the object as a whole.

### Configuring LLM Provider and Model

In your assistant configuration, specify the LLM provider and model in the `model` object:
```json
"model": {
  "provider": "openai",
  "model": "gpt-4o",
  "systemPrompt": "Your system prompt here"
}
```
You can use providers like `openai`, `custom-llm`, or other OpenAI-compatible endpoints. For custom LLMs, include a `url`:

```json
"model": {
  "provider": "custom-llm",
  "model": "gpt-4o",
  "url": "https://custom-llm-url/chat/completions"
}
```
### Configuring Voice Provider for ElevenLabs

Since you're using ElevenLabs, here's how to configure the voice settings:

```json
"voice": {
  "provider": "11labs",
  "voiceId": "your-elevenlabs-voice-id",
  "model": "eleven_turbo_v2",
  "enableSsmlParsing": true,
  "inputReformattingEnabled": false,
  "inputPreprocessingEnabled": true
}
```

If you encounter a "couldn't find 11labs voice" error, add the voice to the Vapi dashboard using your ElevenLabs API key at https://dashboard.vapi.ai/library/voice.

### Configuring Server URL

To configure the server URL, include a `server` object in your request:
```json
"server": {
  "url": "https://your-server-url.com",
  "secret": "your-secret-key",
  "timeoutSeconds": 20,
  "headers": {
    "custom-header": "header-value"
  }
}
```
You can set this at the assistant level (which overrides account-level settings) using a PATCH request to `https://api.vapi.ai/assistant/{assistant_id}`.

### Configuring Available Settings from Your Images

For the settings I have information about:

1. **Stability**: Set to 0.5 in your voice configuration:
   ```json
   "voice": {
     "provider": "11labs",
     "voiceId": "your-voice-id",
     "stability": 0.5
   }
   ```
2. **Style Exaggeration**: A higher `style` value makes the voice more expressive and human-sounding, but can make its behavior less predictable:
   ```json
   "voice": {
     "provider": "11labs",
     "voiceId": "your-voice-id",
     "style": 0
   }
   ```
3. **Idle Messages**: Configure idle messages and timeout:
   ```json
   "idleMessages": ["Your idle message here"],
   "idleTimeoutSeconds": 5,
   "maxIdleMessages": 1
   ```
Is there anything specific about these configurations you'd like me to explain in more detail?

Sources:
- [How to create assistants using the Vapi API](https://docs.vapi.ai/api-reference/assistants/create)
- [How to connect pre-existing tools to newly created assistants using the Vapi API](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1333975440003563604)
- [How to configure LLM provider and model when creating assistants using the Vapi API](https://docs.vapi.ai/customization/provider-keys)
- [How to configure voice provider, voice ID, and model when creating assistants using the Vapi API](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1340031852819251384)
- [How to configure the server URL when creating assistants using the Vapi API](https://discord.com/channels/1211482211119796234/1211483291191083018/threads/1341692740454584331)
s
@PrinceUstad To create an assistant using the Vapi API, make a POST request to the `/assistant` endpoint with the appropriate configuration. Here's how you can do it:
```javascript
// Example code for creating an assistant
const response = await fetch("https://api.vapi.ai/assistant", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${YOUR_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    // Your assistant configuration goes here (see details below)
  })
});

const assistant = await response.json();
```

## Assistant Configuration Parameters

Based on your requirements, here are the key parameters you can configure when creating an assistant:

### 1. LLM Provider and Model
```javascript
model: {
  provider: "openai", // Options: openai, anthropic, google, etc.
  model: "gpt-4o", // The specific model you want to use
  temperature: 0.5, // Controls randomness (0-2)
  // Optional: messages for system prompts
  messages: [
    {
      content: "Your system prompt here",
      role: "system"
    }
  ]
}
```

### 2. Voice Provider, Voice ID, and Model
```javascript
voice: {
  provider: "11labs", // Vapi's provider ID for ElevenLabs
  voiceId: "your_eleven_labs_voice_id"
  // Additional voice settings if needed
}
```

### 3. Transcriber Configuration
```javascript
transcriber: {
  provider: "deepgram", // Options: deepgram, assembly-ai, etc.
  model: "nova-3", // Specific model for transcription
  endpointing: 300 // Silence detection in milliseconds
  // Other transcriber settings
}
```

### 4. Server URL
```javascript
server: {
  url: "https://your-server-url.com/webhook",
  secret: "your_server_secret", // Optional: Used for authentication
  timeoutSeconds: 20 // Optional: Default is 20
}
```

### 5. Starting/Ending Messages
```javascript
name: "Your Assistant Name",
firstMessage: "Hello, how can I help you today?",
voicemailMessage: "Please leave a message.",
endCallMessage: "Thank you for calling. Goodbye."
```

## Connecting Tools to Assistants

The key to connecting pre-existing tools to an assistant is the `toolIds` parameter in the assistant's model configuration. This lets you reference tools you've already created. First, create your tools using the `/tool` endpoint:
```javascript
// Example of creating a tool
const toolResponse = await fetch("https://api.vapi.ai/tool", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${YOUR_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    type: "function", // or other tool types
    function: {
      name: "your_function_name",
      description: "Description of what this function does",
      parameters: {
        type: "object",
        properties: {
          // Your function parameters
        },
        required: ["param1", "param2"]
      }
    },
    server: {
      url: "https://your-api-endpoint.com/function"
      // Optional configuration
    }
  })
});

const tool = await toolResponse.json();
const toolId = tool.id; // Save this ID to use in your assistant
```

Then, when creating or updating your assistant, reference these tool IDs:
```javascript
model: {
  provider: "openai",
  model: "gpt-4o",
  // Other model settings

  // Connect pre-existing tools using their IDs
  toolIds: ["tool-id-1", "tool-id-2", "tool-id-3"],

  // You can also define inline tools if needed
  tools: [
    {
      type: "function",
      function: {
        // Define inline tool
      }
    }
  ]
}
```

## Complete Example

Here's a comprehensive example that puts it all together:
```javascript
const assistantConfig = {
  name: "Customer Support Assistant",
  model: {
    provider: "openai",
    model: "gpt-4o",
    temperature: 0.7,
    messages: [
      {
        content: "You are a helpful customer support assistant.",
        role: "system"
      }
    ],
    // Connect pre-existing tools
    toolIds: ["your-previously-created-tool-id"]
  },
  voice: {
    provider: "11labs",
    voiceId: "your-voice-id"
  },
  transcriber: {
    provider: "deepgram",
    model: "nova-3",
    endpointing: 300
  },
  server: {
    url: "https://your-webhook-server.com/api",
    secret: "your-secret-key"
  },
  firstMessage: "Hello, I'm your virtual assistant. How can I help you today?",
  voicemailMessage: "Sorry I missed you. Please leave a message after the tone.",
  endCallMessage: "Thank you for contacting us. Have a great day!"
};

// Create the assistant
const response = await fetch("https://api.vapi.ai/assistant", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${YOUR_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify(assistantConfig)
});

const assistant = await response.json();
console.log("Created assistant:", assistant);
```

## Troubleshooting Tips

If you're having issues with creating assistants or connecting tools:

1. Make sure your authorization token is correct and has the necessary permissions
2. Verify that any tool IDs you're referencing actually exist in your account
3. Check that your server URLs are accessible and properly configured
4. Ensure all required fields in the assistant configuration are provided
5. Review the error messages in the API response for specific guidance

## Next Steps

To better understand what might be causing your specific issues, I'd recommend:

1. Creating a minimal assistant first, without tools, to confirm the basic API call works
2. Adding tools one by one to identify whether a specific tool causes the failure
3. Checking your server logs for connection or authentication issues
4. Verifying that your tool configurations are valid and properly formatted

Would you like a more detailed example for any specific aspect of creating assistants or connecting tools?
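As a concrete aid for the "required fields" tip, you can run a small pre-flight check on a config object before POSTing it. The checks below are a conservative sketch based on the examples in this thread, not an official schema:

```javascript
// Pre-flight sanity check for an assistant config before sending it.
// Returns a list of problems; an empty list means nothing obvious is wrong.
function validateAssistantConfig(config) {
  const problems = [];
  if (!config.model || !config.model.provider || !config.model.model) {
    problems.push("model.provider and model.model are required");
  }
  if (config.model && config.model.toolIds) {
    for (const id of config.model.toolIds) {
      if (typeof id !== "string" || id.trim() === "") {
        problems.push(`invalid tool id: ${JSON.stringify(id)}`);
      }
    }
  }
  if (config.voice && !config.voice.voiceId) {
    problems.push("voice.voiceId is required when a voice block is present");
  }
  if (config.server && !/^https:\/\//.test(config.server.url || "")) {
    problems.push("server.url should be an https URL");
  }
  return problems;
}
```

Call it right before the `fetch` and log the returned problems locally, instead of round-tripping to the API to discover each mistake one at a time.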