Pigeon
04/29/2025, 11:34 AM
return {
    "access_token": access_token,
    "token_type": "bearer",
    "expires_in": expires_in_seconds,
}
I get requests to my /token endpoint before every server message.
Do you not cache the token?
Or am I not returning something for it to be cached?
https://cdn.discordapp.com/attachments/1366739411219251310/1366739411538284645/image.png?ex=68120ac4&is=6810b944&hm=a447e9cf25cb904c98950ca70b19c37311bfd29c7ee81793612d1565520c9cc0&
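(For context, a minimal sketch of a /token endpoint returning that shape. FastAPI and the TOKEN_TTL_SECONDS name are illustrative assumptions, not anything Vapi-specific; client authentication is omitted. The access_token / token_type / expires_in trio is the standard OAuth2 token response a caller would need in order to cache the token until it expires.)

# Minimal sketch of a client_credentials-style /token endpoint (FastAPI assumed).
# Grant validation is left out for brevity; the point is the response shape.
import secrets

from fastapi import FastAPI

app = FastAPI()

TOKEN_TTL_SECONDS = 3600  # hypothetical lifetime advertised to the caller


@app.post("/token")
def token():
    access_token = secrets.token_urlsafe(32)
    return {
        "access_token": access_token,
        "token_type": "bearer",
        "expires_in": TOKEN_TTL_SECONDS,
    }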
Vapi
04/29/2025, 7:37 PM

Pigeon
04/30/2025, 5:46 AM

Pigeon
04/30/2025, 7:42 AM
84d19ebb-4627-4a2f-8e53-2f4b0f6e286d
2c26c07e-0658-4aac-9009-641afee8cd35

Vapi
04/30/2025, 2:12 PM

Vapi
04/30/2025, 2:29 PM

Vapi
04/30/2025, 2:38 PM

Pigeon
04/30/2025, 2:39 PM

Pigeon
04/30/2025, 2:40 PM

Pigeon
04/30/2025, 2:43 PM

Vapi
05/01/2025, 8:47 AM

Vapi
05/01/2025, 3:19 PM
examples
id: d19a8c48-b3fd-430b-a472-f55621408df2
orgId: 6da17b03-ce91-460f-a854-478031f357b2

Pigeon
05/05/2025, 7:23 AM
CustomLLMCredential and WebhookCredential in https://api.vapi.ai/api
They do have id and orgId parameters, which I'd expect to be populated by VAPI when creating a Credential, because I get 400 when trying to update Assistant > credentials:
vapi.core.api_error.ApiError: status_code: 400, body: {'message': ['each value in credentials.property id should not exist'], 'error': 'Bad Request', 'statusCode': 400}
vapi.core.api_error.ApiError: status_code: 400, body: {'message': ['each value in credentials.property orgId should not exist'], 'error': 'Bad Request', 'statusCode': 400}
Probably because `CreateWebhookCredentialDTO`/`UpdateWebhookCredentialDTO` and `CreateCustomLLMCredentialDTO`/`UpdateCustomLLMCredentialDTO` don't have id and orgId parameters in them.

Pigeon
05/05/2025, 7:24 AM
llm_credentials_id and wh_credentials_id are passed in; orgId is taken from the dashboard:
from vapi import Vapi

# VAPI_API_KEY, ORG_ID, CLIENT_ID and CLIENT_SECRET are defined elsewhere in the script.
client = Vapi(token=VAPI_API_KEY)

async def update_local_assistant_auth(assistant_id, llm_credentials_id, wh_credentials_id):
    # Inline credentials on the assistant; including id/orgId is what triggers the 400s above.
    credentials = {
        "credentials": [
            {
                "provider": "custom-llm",
                "apiKey": "",
                "id": llm_credentials_id,
                "orgId": ORG_ID,
                "authenticationPlan": {
                    "type": "oauth2",
                    "url": "***",
                    "clientId": CLIENT_ID,
                    "clientSecret": CLIENT_SECRET,
                },
                "name": "local",
            },
            {
                "provider": "webhook",
                "id": wh_credentials_id,
                "orgId": ORG_ID,
                "authenticationPlan": {
                    "type": "oauth2",
                    "url": "****",
                    "clientId": CLIENT_ID,
                    "clientSecret": CLIENT_SECRET,
                },
                "name": "local",
            },
        ]
    }
    response = client.assistants.update(id=assistant_id, **credentials)
This works if id and orgId are removed.

Pigeon
05/05/2025, 7:25 AM
id and orgId in Update Assistant: https://docs.vapi.ai/api-reference/assistants/update#request.body.credentials
Once again, I would expect id and orgId to be populated by VAPI when creating a credential, because these only exist in the main schema for credentials, not in create or update. Otherwise, the create and/or update schemas probably need id/orgId.

Pigeon
05/05/2025, 12:46 PM

Vapi
05/05/2025, 8:37 PM
1\. **Separate Credential Entities**:
- When you use the create credential endpoint, VAPI generates the id field automatically
- The orgId comes from the authenticated request context
- These fields are managed by the backend
2\. **Inline Credentials in Assistants**:
- When defining credentials directly in an assistant configuration, these aren't stored as separate database entities
- They're used at runtime "as is" for each request
- For OAuth token caching to work, these inline credentials need both id and orgId fields
- Without these fields, the system can't persist tokens between calls (see the sketch after this list)
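(Not Vapi's implementation, just an illustrative sketch of the point above: a token cache needs a stable key, and the orgId plus credential id pair is that key, with expires_in bounding how long an entry stays valid. All names here are hypothetical.)

# Illustrative only: a token cache keyed by (orgId, credential id).
# Without both identifiers on the credential, there is no stable key
# to store or look up a previously fetched token under.
import time
from typing import Optional

_token_cache: dict[tuple[str, str], tuple[str, float]] = {}


def get_cached_token(org_id: str, credential_id: str) -> Optional[str]:
    entry = _token_cache.get((org_id, credential_id))
    if entry is None:
        return None
    token, expires_at = entry
    if time.time() >= expires_at:  # expired, force a refresh on the next call
        del _token_cache[(org_id, credential_id)]
        return None
    return token


def store_token(org_id: str, credential_id: str, token: str, expires_in: int) -> None:
    # Keep the token until shortly before the advertised expiry.
    _token_cache[(org_id, credential_id)] = (token, time.time() + expires_in - 30)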
The confusing part for you is that you get validation errors when trying to include these fields in assistant updates, because the API validation is designed to prevent users from setting system-managed fields. However, for inline credentials to work with token caching, these fields need to be present at runtime.
This is effectively an inconsistency in the system - let me check with the team.
For your use case with different credentials per assistant (for now only; I know about runtime credentials):
Create separate credentials via the dashboard/API first, then reference them by ID in the assistant.
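(A rough sketch of that suggested flow, assuming the create-credential endpoint is POST https://api.vapi.ai/credential as listed in the Swagger linked earlier; the exact path, payload fields, and how the returned id is referenced from the assistant are assumptions, not confirmed in this thread.)

# Sketch: create the credential as its own entity first, then reference the returned id.
import os

import requests

VAPI_API_KEY = os.environ["VAPI_API_KEY"]  # private API key

payload = {
    "provider": "custom-llm",
    "apiKey": "",
    "authenticationPlan": {
        "type": "oauth2",
        "url": "https://example.com/token",  # placeholder token endpoint
        "clientId": os.environ["CLIENT_ID"],
        "clientSecret": os.environ["CLIENT_SECRET"],
    },
    "name": "local",
}

resp = requests.post(
    "https://api.vapi.ai/credential",  # assumed path, see note above
    json=payload,
    headers={"Authorization": f"Bearer {VAPI_API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
credential_id = resp.json()["id"]  # id/orgId are generated server-side on creation
# credential_id can then be referenced from the assistant, as suggested above;
# the exact assistant field for that reference isn't named in this thread.
print(credential_id)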
Vapi
05/05/2025, 8:37 PM

Pigeon
05/06/2025, 5:44 AM
customLlm and serverMessages, so it still has the same URL for server messages and custom LLM, and I need different assistants for different environments.
Anyway, I'll be waiting for updates, thanks again!

Vapi
05/10/2025, 3:19 AM