Credential endpoint
# support
l
What is the URL to make a POST request to add a key for my custom LLM? When I use https://api.vapi.ai/credential, I get a 400 Bad Request error saying `"message": [ "authenticationPlan must be an object" ]`
v
@lsilva It seems like you might be missing the `authenticationPlan` object in your POST request body. When making a POST request to add a key for your custom LLM, you need to include all the required fields in the JSON body of your request. The URL you're using, https://api.vapi.ai/credential, is reaching the endpoint (the 400 is a validation error, not a missing route), but you need to include the `authenticationPlan` object along with any other necessary data. Here's an example of what the JSON body should include:
```json
{
  "name": "YourKeyName",
  "authenticationPlan": {
    // ... specify the authentication plan details here ...
  }
  // ... include any other required fields ...
}
```
Make sure to replace the placeholder comments with actual values appropriate for your custom LLM setup. If you're unsure about the structure of the `authenticationPlan` object or other required fields, refer to the Vapi documentation or ask support for more info.
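To rule out client-side mistakes, the body above can be built and checked before it is ever sent. A minimal Python sketch, assuming a Vapi bearer token; `build_credential_request` is a hypothetical helper, not part of any SDK, and the field values are illustrative:

```python
import json
import urllib.request

VAPI_URL = "https://api.vapi.ai/credential"

def build_credential_request(vapi_token: str, body: dict) -> urllib.request.Request:
    """Build (but do not send) the POST request for the credential endpoint."""
    # The server rejects bodies where authenticationPlan is not an object,
    # so fail fast on the client side with the same check.
    if not isinstance(body.get("authenticationPlan"), dict):
        raise ValueError("authenticationPlan must be an object")
    return urllib.request.Request(
        VAPI_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {vapi_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_credential_request(
    "YOUR_VAPI_TOKEN",
    {"name": "YourKeyName", "authenticationPlan": {}},  # fill in plan details
)
# urllib.request.urlopen(req) would send it; omitted here.
```

The local `isinstance` check simply mirrors the server's "authenticationPlan must be an object" error, so a malformed body fails before the round trip.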
You can continue this conversation by mentioning me in the message.
Got it. Tagging @User for additional help.
l
@Vapi
v
@lsilva Please ask support for more info on this.
l
@Vapi What is the body of authenticationPlan
v
@lsilva The `authenticationPlan` object is part of the body you need to include when making a POST request to add a credential for your custom language model (LLM). While I don't have the exact schema for the `authenticationPlan` object, it typically contains the details about how the API will authenticate requests for your custom LLM. Here's a general structure you might use, but please consult the Vapi documentation or support for the exact fields required for your specific use case:
```json
{
  "name": "Your Custom LLM Credential Name",
  "authenticationPlan": {
    "type": "API_KEY", // or another type of authentication your LLM requires
    "apiKey": "your-api-key-here"
    // Additional fields depending on the authentication type
  }
  // Include other required fields as per the Vapi documentation
}
```
In this example, `type` could be an API key, OAuth token, or any other method your LLM uses for authentication. The `apiKey` field is a placeholder for your actual API key. If you need the exact structure and fields of the `authenticationPlan` object for Vapi, refer to the official documentation or reach out to their support team for assistance.
l
@Vapi don't I need to add other fields, like the provider? It's a key from OpenAI, btw
v
@lsilva Yes, if you are adding a key for a custom LLM from OpenAI, you will need to include additional fields that specify the provider and potentially other relevant information, such as the model you intend to use. Here's an example of what the body might look like, including the `authenticationPlan` and other necessary fields:
```json
{
  "name": "YourKeyName",
  "provider": "openai",
  "authenticationPlan": {
    "type": "API_KEY",
    "apiKey": "your-openai-api-key"
  },
  "model": "gpt-4o" // specify the model identifier if required
  // ... any other required fields ...
}
```
In this JSON body:
- `name` is a custom name for your credential.
- `provider` specifies the provider of the LLM, which in this case is OpenAI.
- `authenticationPlan` contains the details on how the API will authenticate requests, including the type of authentication and the actual API key.
- `model` specifies the particular model of the LLM you are using, such as "gpt-4o" for OpenAI's GPT-4 optimized model.

Please ensure that you replace `"your-openai-api-key"` with your actual OpenAI API key and that you check the documentation for any additional or optional fields that may be required for your specific use case. If you're unsure about the exact fields or values to include, it's best to consult the Vapi documentation or contact their support team for detailed guidance.
l
@Vapi what if my provider will be a custom LLM I will build on my server side using the OpenAI API?
v
@lsilva If you're building a custom LLM on your server side using the OpenAI API, and you want to integrate it with Vapi, you would use the Custom LLM provider option. In this case, you would specify the URL to your model in your API request, along with the necessary authentication details to access your server. Here's an example of what the body of your POST request might look like when adding a key for your custom LLM hosted on your server:
```json
{
  "name": "Your Custom LLM Credential Name",
  "provider": "custom",
  "authenticationPlan": {
    "type": "API_KEY", // or the type of authentication your server requires
    "apiKey": "your-server-api-key"
  },
  "modelUrl": "https://yourserver.com/path-to-your-model" // The endpoint where your custom LLM is accessible
  // ... any other required fields ...
}
```
In this JSON body:
- `name` is a custom name for your credential.
- `provider` is set to "custom" to indicate that it's a custom LLM provider.
- `authenticationPlan` contains the details on how the API will authenticate requests to your server.
- `modelUrl` is the endpoint URL where your custom LLM is hosted on your server.

Make sure to replace `"your-server-api-key"` with the actual API key or token required to authenticate requests to your server, and `"https://yourserver.com/path-to-your-model"` with the actual URL to your custom LLM. Remember to consult the Vapi documentation for the exact requirements and to ensure that you're including all necessary fields in your request. If you need further assistance, don't hesitate to reach out to Vapi support.
l
@Vapi what are these fields and how can I find them? `"message": [ "authenticationPlan.clientId must be a string", "authenticationPlan.clientSecret must be a string" ]`
v
@lsilva The error messages you're seeing indicate that the `authenticationPlan` object in your request is expected to contain `clientId` and `clientSecret` fields, both of which must be strings. These fields are typically used in OAuth authentication schemes, where:
- `clientId` is a public identifier for applications.
- `clientSecret` is a secret known only to the application and the authorization server.

Here's how you might structure your `authenticationPlan` with these fields:
```json
{
  "authenticationPlan": {
    "clientId": "your-client-id",
    "clientSecret": "your-client-secret"
    // ... other fields as required ...
  }
  // ... other parts of the request ...
}
```
To find these values:
1. **For OpenAI or other third-party services**: You would obtain the `clientId` and `clientSecret` from the service provider when you register your application with them. For OpenAI, you typically only need an API key, so if you're seeing this error, it might be due to a mismatch between the expected authentication method and the one you're trying to use.
2. **For your custom server**: If you've implemented OAuth on your server, you would generate these credentials yourself as part of setting up the OAuth service, and then provide them to clients that need to authenticate with your server.

If you're not using OAuth and are seeing this error, it's possible that you've chosen the wrong authentication type in your request. Make sure that the authentication type you specify matches the method your server or service provider uses. If you're unsure about what values to use or how to obtain them, refer to the documentation of the API you're trying to authenticate with or contact their support for guidance.
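The two messages above come straight from Vapi's validation, so you can reproduce the same checks client-side and fail before sending the request. A sketch; the server may of course apply more rules than these two:

```python
def validate_oauth2_plan(plan: dict) -> list:
    """Mirror the server's type checks for an OAuth2 authenticationPlan."""
    errors = []
    for field in ("clientId", "clientSecret"):
        # The API requires both fields to be strings; None or a missing
        # key produces the same error message the server returns.
        if not isinstance(plan.get(field), str):
            errors.append(f"authenticationPlan.{field} must be a string")
    return errors

print(validate_oauth2_plan({"clientId": "abc"}))
# → ['authenticationPlan.clientSecret must be a string']
```

An empty list means the plan would pass at least these two checks.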
l
@Vapi this is my body, is it ok? `{ "provider": "custom-llm", "apiKey": "my_key", "authenticationPlan": { "type": "oauth2", "url": "https://api.openai.com/v1/chat/completions" } }`
v
@lsilva The body of your request has a few issues that need to be addressed:
1. The `apiKey` should likely be inside the `authenticationPlan` object.
2. The `authenticationPlan` object is missing the `clientId` and `clientSecret` fields, which are required for OAuth2 authentication.
3. The `type` field within `authenticationPlan` should match the authentication method you're using. If you're using an API key from OpenAI, the type might not be `oauth2`; OpenAI typically uses API keys for authentication, not OAuth2.
4. The `url` field inside `authenticationPlan` is not typically part of OAuth2 authentication and appears to be the endpoint for OpenAI's API, which should not be included in the `authenticationPlan`.

Here's a revised version of your request body, assuming you're using an API key for authentication:
```json
{
  "provider": "custom-llm",
  "authenticationPlan": {
    "type": "API_KEY",
    "apiKey": "my_key"
  }
  // ... any other required fields ...
}
```
If you are indeed trying to use OAuth2 (which is less common for OpenAI's API), you would need to include `clientId` and `clientSecret`:
```json
{
  "provider": "custom-llm",
  "authenticationPlan": {
    "type": "oauth2",
    "clientId": "your-client-id",
    "clientSecret": "your-client-secret"
  }
  // ... any other required fields ...
}
```
Make sure to replace `"your-client-id"` and `"your-client-secret"` with the actual credentials provided by the OAuth2 service you're using. Please verify the authentication method required by your custom LLM or OpenAI and adjust the request body accordingly. If you're still unsure, consult the Vapi documentation or contact their support for further assistance.
l
```json
{
  "message": "OAuth2 validation failed: Request failed with status code 401",
  "error": "Bad Request",
  "statusCode": 400
}
```
@Vapi Now I can access the URL, but I'm using custom-llm and, just for testing, I'm using the URL from OpenAI
v
@lsilva If you're using the `custom-llm` provider for testing purposes and are directly using the OpenAI URL, it's important to ensure that your authentication details are correct. The 401 status code indicates that the request lacks valid authentication credentials for the target resource. Here's what you need to check:
1. **API Key**: Make sure that the API key you're using is correct and has the necessary permissions to access the OpenAI API.
2. **Endpoint**: If you're testing with OpenAI's URL, ensure that you're using the correct endpoint for the API call you're making.
3. **Headers**: When making requests to OpenAI, you need to include the `Authorization` header with your API key, in the format `Authorization: Bearer YOUR_API_KEY`.

Here's an example of how you might structure your request to OpenAI for testing:
```json
{
  "provider": "custom-llm",
  "authenticationPlan": {
    "type": "API_KEY",
    "apiKey": "my_openai_api_key"
  },
  "modelUrl": "https://api.openai.com/v1/chat/completions" // OpenAI's chat completions endpoint
}
```
In this example, replace `"my_openai_api_key"` with your actual OpenAI API key, and make sure the `modelUrl` is the correct endpoint for the model you're trying to use. If you're still encountering issues, double-check the following:
- The API key is valid and has not expired.
- You have not exceeded any usage limits associated with your API key.
- The request is formatted correctly according to the OpenAI API documentation.

If everything seems correct and you're still getting a 401 error, you may need to contact OpenAI support for further assistance. Remember that for production use, you'll need to switch from the OpenAI URL to your custom LLM endpoint and ensure that the authentication matches your server's requirements.
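One way to test the key in isolation is to build the exact request OpenAI expects, Bearer header included, and send it outside of Vapi. A standard-library sketch; `openai_chat_request` is a hypothetical helper and the key shown is a placeholder:

```python
import json
import urllib.request

def openai_chat_request(api_key: str, model: str, user_message: str) -> urllib.request.Request:
    """Build a chat completions request with the Bearer header OpenAI expects."""
    body = {"model": model, "messages": [{"role": "user", "content": user_message}]}
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = openai_chat_request("my_openai_api_key", "gpt-3.5-turbo", "ping")
# If urllib.request.urlopen(req) returns 401, the key itself is being
# rejected, which rules out the Vapi side of the integration.
```

If this direct call succeeds but the same key fails through Vapi, the problem is in the credential body rather than the key.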