lsilva
12/05/2024, 11:01 AM

Vapi
12/05/2024, 11:04 AM
You need to include the `authenticationPlan` object in your POST request body. When making a POST request to add a key for your custom LLM, make sure you include all of the required fields in the JSON body of your request.
The correct URL to make a POST request to add a key for your custom LLM is indeed https://api.vapi.ai/credentials, but you need to include the `authenticationPlan` object along with any other necessary data.
Here's an example of what the JSON body should include:
```json
{
  "name": "YourKeyName",
  "authenticationPlan": {
    // ... specify the authentication plan details here ...
  }
  // ... include any other required fields ...
}
```
Make sure to replace the placeholder comments with actual values appropriate for your custom LLM setup. If you're unsure about the structure of the `authenticationPlan` object or the other required fields, refer to the Vapi documentation or ask support for more information.
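To make that concrete, here's a minimal Python sketch of the request. Note the assumptions: that your Vapi API key is sent as a bearer token, and that the empty `authenticationPlan` is a placeholder whose real fields come from the Vapi docs:
```python
import os
import requests

# Assumption: the credentials endpoint authenticates with a Vapi bearer token.
VAPI_API_KEY = os.environ["VAPI_API_KEY"]

body = {
    "name": "YourKeyName",
    "authenticationPlan": {},  # placeholder; fill in per the Vapi docs for your auth type
}

resp = requests.post(
    "https://api.vapi.ai/credentials",
    json=body,
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    timeout=30,
)
print(resp.status_code, resp.json())
```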
Vapi
12/05/2024, 11:04 AM

Vapi
12/05/2024, 11:05 AM

lsilva
12/05/2024, 11:05 AM

Vapi
12/05/2024, 11:05 AM

lsilva
12/05/2024, 11:05 AM

lsilva
12/05/2024, 11:05 AM

Vapi
12/05/2024, 11:06 AM
The `authenticationPlan` object is part of the body you need to include when making a POST request to add a credential for your custom language model (LLM). While I don't have the exact schema for the `authenticationPlan` object, it typically contains the details of how the API will authenticate requests to your custom LLM.
Here's a general structure you might use, but please consult the Vapi documentation or support for the exact fields required for your specific use case:
```json
{
  "name": "Your Custom LLM Credential Name",
  "authenticationPlan": {
    "type": "API_KEY",  // or another type of authentication your LLM requires
    "apiKey": "your-api-key-here"
    // Additional fields depending on the authentication type
  }
  // Include other required fields as per the Vapi documentation
}
```
In this example, `type` indicates whether your LLM authenticates with an API key, an OAuth token, or some other method. The `apiKey` field is a placeholder for your actual API key.
If you need the exact structure and fields of the `authenticationPlan` object for Vapi, please refer to the official documentation or reach out to their support team for assistance.
lsilva
12/05/2024, 11:07 AM

Vapi
12/05/2024, 11:07 AM
Here's an example of the body for an OpenAI key, including the `authenticationPlan` and other necessary fields:
```json
{
  "name": "YourKeyName",
  "provider": "openai",
  "authenticationPlan": {
    "type": "API_KEY",
    "apiKey": "your-openai-api-key"
  },
  "model": "gpt-4o"  // specify the model identifier if required
  // ... any other required fields ...
}
```
In this JSON body:
- `name` is a custom name for your credential.
- `provider` specifies the provider of the LLM, which in this case is OpenAI.
- `authenticationPlan` contains the details of how the API will authenticate requests, including the type of authentication and the actual API key.
- `model` specifies the particular model you are using, such as "gpt-4o" for OpenAI's GPT-4o model.
Please ensure that you replace "your-openai-api-key" with your actual OpenAI API key, and check the documentation for any additional or optional fields required for your specific use case. If you're unsure about the exact fields or values to include, consult the Vapi documentation or contact their support team for detailed guidance.
lsilva
12/05/2024, 11:08 AM

Vapi
12/05/2024, 11:08 AM
If your custom LLM is hosted on your own server, the body would look something like this:
```json
{
  "name": "Your Custom LLM Credential Name",
  "provider": "custom",
  "authenticationPlan": {
    "type": "API_KEY",  // or the type of authentication your server requires
    "apiKey": "your-server-api-key"
  },
  "modelUrl": "https://yourserver.com/path-to-your-model"  // the endpoint where your custom LLM is accessible
  // ... any other required fields ...
}
```
In this JSON body:
- `name` is a custom name for your credential.
- `provider` is set to "custom" to indicate that it's a custom LLM provider.
- `authenticationPlan` contains the details of how the API will authenticate requests to your server.
- `modelUrl` is the endpoint URL where your custom LLM is hosted on your server.
Make sure to replace "your-server-api-key" with the actual API key or token required to authenticate requests to your server, and "https://yourserver.com/path-to-your-model" with the actual URL of your custom LLM.
Remember to consult the Vapi documentation for the exact requirements and to ensure that you're including all necessary fields in your request. If you need further assistance, don't hesitate to reach out to Vapi support.
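On the server side, the check is the mirror image: your endpoint should reject requests that don't carry the key you registered. As a rough sketch only (the route and the bearer-token scheme here are assumptions, not something Vapi prescribes), a Flask handler might look like:
```python
import os
from flask import Flask, request, jsonify

app = Flask(__name__)
EXPECTED_KEY = os.environ["MY_LLM_API_KEY"]  # the same key you register in the credential

@app.post("/path-to-your-model")  # hypothetical path matching the modelUrl above
def completions():
    # Reject any request whose Authorization header doesn't carry the expected key.
    auth = request.headers.get("Authorization", "")
    if auth != f"Bearer {EXPECTED_KEY}":
        return jsonify({"error": "invalid or missing API key"}), 401
    # ... run your model here and return an OpenAI-compatible response ...
    return jsonify({"choices": [{"message": {"role": "assistant", "content": "ok"}}]})
```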
lsilva
12/05/2024, 11:13 AM

Vapi
12/05/2024, 11:13 AM
That error means the `authenticationPlan` object in your request is expected to contain `clientId` and `clientSecret` fields, both of which must be strings. These fields are typically used in OAuth authentication schemes, where:
- `clientId` is a public identifier for the application.
- `clientSecret` is a secret known only to the application and the authorization server.
Here's how you might structure your `authenticationPlan` with these fields:
```json
{
  "authenticationPlan": {
    "clientId": "your-client-id",
    "clientSecret": "your-client-secret"
    // ... other fields as required ...
  }
  // ... other parts of the request ...
}
```
To find these values:
1. **For OpenAI or other third-party services**: You obtain the `clientId` and `clientSecret` from the service provider when you register your application with them. Note that OpenAI typically only requires an API key, so if you're seeing this error, it may be due to a mismatch between the expected authentication method and the one you're trying to use.
2. **For your custom server**: If you've implemented OAuth on your server, you generate these credentials yourself as part of setting up the OAuth service, and then provide them to clients that need to authenticate with your server.
If you're not using OAuth and are seeing this error, you may have chosen the wrong authentication type in your request. Make sure the authentication type you specify matches the method your server or service provider uses. If you're unsure what values to use or how to obtain them, refer to the documentation of the API you're authenticating with, or contact their support for guidance.
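To make the OAuth2 case concrete: with a client-credentials grant, the `clientId`/`clientSecret` pair is exchanged for a short-lived access token at the authorization server's token endpoint. A minimal Python sketch (the token URL below is hypothetical; real endpoints vary by provider):
```python
import requests

TOKEN_URL = "https://auth.example.com/oauth2/token"  # hypothetical token endpoint

resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "your-client-id",
        "client_secret": "your-client-secret",
    },
    timeout=30,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
# Subsequent API calls then send: Authorization: Bearer <access_token>
```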
lsilva
12/05/2024, 11:14 AM

lsilva
12/05/2024, 11:15 AM

Vapi
12/05/2024, 11:15 AM
There are a few issues with your request body:
1. The `apiKey` should likely be inside the `authenticationPlan` object.
2. The `authenticationPlan` object is missing the `clientId` and `clientSecret` fields, which are required for OAuth2 authentication.
3. The `type` field within `authenticationPlan` should match the authentication method you're using. If you're using an API key from OpenAI, the type should probably not be `oauth2`; OpenAI typically uses API keys for authentication, not OAuth2.
4. The `url` field inside `authenticationPlan` is not typically part of OAuth2 authentication; it appears to be the endpoint for OpenAI's API, which should not be included in the `authenticationPlan`.
Here's a revised version of your request body, assuming you're using an API key for authentication:
```json
{
  "provider": "custom-llm",
  "authenticationPlan": {
    "type": "API_KEY",
    "apiKey": "my_key"
  }
  // ... any other required fields ...
}
```
If you are indeed trying to use OAuth2 (which is less common for OpenAI's API), you would need to include `clientId` and `clientSecret`:
```json
{
  "provider": "custom-llm",
  "authenticationPlan": {
    "type": "oauth2",
    "clientId": "your-client-id",
    "clientSecret": "your-client-secret"
  }
  // ... any other required fields ...
}
```
Make sure to replace "your-client-id" and "your-client-secret" with the actual credentials provided by the OAuth2 service you're using.
Please verify the authentication method required by your custom LLM or OpenAI and adjust the request body accordingly. If you're still unsure, consult the Vapi documentation or contact their support for further assistance.
lsilva
12/05/2024, 11:27 AM
```json
{
  "message": "OAuth2 validation failed: Request failed with status code 401",
  "error": "Bad Request",
  "statusCode": 400
}
```
lsilva
12/05/2024, 11:30 AM

Vapi
12/05/2024, 11:30 AM
If you're using the `custom-llm` provider for testing purposes and pointing it directly at the OpenAI URL, it's important to ensure that your authentication details are correct. The 401 status code indicates that the request lacks valid authentication credentials for the target resource.
Here's what you need to check:
1. **API Key**: Make sure that the API key you're using is correct and has the necessary permissions to access the OpenAI API.
2. **Endpoint**: If you're testing with OpenAI's URL, ensure that you're using the correct endpoint for the API call you're making.
3. **Headers**: When making requests to OpenAI, you need to include the `Authorization` header with your API key, in the format `Authorization: Bearer YOUR_API_KEY`.
Here's an example of how you might structure your request to OpenAI for testing:
```json
{
  "provider": "custom-llm",
  "authenticationPlan": {
    "type": "API_KEY",
    "apiKey": "my_openai_api_key"
  },
  "modelUrl": "https://api.openai.com/v1/chat/completions"  // use the correct endpoint for the model you're accessing
}
```
In this example, replace "my_openai_api_key" with your actual OpenAI API key, and make sure `modelUrl` is the correct endpoint for the model you're trying to use.
If you're still encountering issues, double-check the following:
- The API key is valid and has not expired.
- You have not exceeded any usage limits associated with your API key.
- The request is formatted correctly according to the OpenAI API documentation.
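One quick way to isolate the problem is to test the key directly against OpenAI, outside of Vapi, with a minimal call to the chat completions endpoint:
```python
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]},
    timeout=30,
)
# 200 means the key itself is valid; 401 means the key is the problem.
print(resp.status_code)
```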
If everything seems correct and you're still getting a 401 error, you may need to contact OpenAI support for further assistance. Remember that for production use, you'll need to switch from the OpenAI URL to your custom LLM endpoint and ensure that the authentication matches your server's requirements.