Passing User (End User ID) for the Custom llm endp...
# support
v
Hey Team, I'm having an issue where OpenAI requests fail about 20% of the time because it thinks some words aren't safe. I'm trying to pass the user ID to OpenAI (custom LLM with the OpenAI URL), but when I include it in the model object, I get this error:
model.property user should not exist
I'm also trying to use `profanity_filter` on the `transcriber` (Deepgram) property to make sure no bad words get passed to the model, but that throws this error:
transcriber.property profanity_filter should not exist
Can you help me with this?
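For context: the OpenAI Chat Completions API itself does accept a top-level `user` field (an end-user ID OpenAI uses for abuse monitoring); the validation error above comes from the platform's model schema rejecting the extra property, not from OpenAI. A minimal sketch of the raw request being aimed for, using the openai Node SDK (the model name and end-user ID are placeholders):

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// `user` is a standard top-level parameter on the Chat Completions API,
// separate from the platform's model config that produced the error above.
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
  user: "end-user-1234", // placeholder end-user ID
});
console.log(completion.choices[0].message.content);
```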
v
@Vijay Hello! It seems like you're encountering issues with custom properties not being recognized by the systems you're working with. Let's address each issue separately:
1. For the OpenAI request issue, it's important to ensure that you're passing the user ID in a way that's compatible with the OpenAI API. Custom properties like `user` are not standard and would not be recognized. Instead, you should pass the user ID using the appropriate context or metadata fields provided by the API, if available. If you need to update your model configuration to remove the `user` property, please let me know, and I can assist you with that.
2. Regarding the `profanity_filter` on the `transcriber` property, it sounds like the transcriber service you're using (Deepgram) might not support a `profanity_filter` property directly. You may need to implement a separate profanity filtering step after you receive the transcription from Deepgram, before passing it to your model (see the sketch after this message).
If you need further assistance with updating your model configuration or implementing a separate profanity filtering step, please provide more details or ask for support, and I'll be glad to help!
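A minimal sketch of such a post-transcription filtering step; the word list, function name, and masking scheme are illustrative assumptions, not anything Vapi or Deepgram provides:

```typescript
// Placeholder word list; in practice you'd load a maintained profanity list.
const BLOCKED = new Set(["badword1", "badword2"]);

// Mask blocked words in a transcript before it is forwarded to the model.
function filterTranscript(text: string): string {
  return text
    .split(/\b/)
    .map((token) => (BLOCKED.has(token.toLowerCase()) ? "****" : token))
    .join("");
}

console.log(filterTranscript("Yes. And also, asiago, for duchess"));
```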
v
1. The metadata is no use; I have other fields in the metadata that I use internally, and passing the entire metadata object destructured into the request makes the OpenAI call fail. 2. Any help on this? If I have to place a filtering step before passing to the OpenAI API, that's not possible when using a custom LLM with the OpenAI endpoint. So do I have to implement a proxy to gain control over what I am sending to OpenAI?
@User any help?
j
What's the use case? Profanity shouldn't be an issue. If you leave a call ID, it will help the support team.
v
Yep, good call. Let me attach that here.
The one that failed: 74544dc7-55eb-4eee-93c2-03e44b92ec1f. It stopped right after the user message: "Yes. And also, asiago, for duchess"
s
```
🔵 19:25:56:516 Couldn't Complete Completion Request (#1, provider: custom-llm, model: gpt-4o-mini, region: undefined, credential: true) Error: {
  "error": {
    "message": "The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error.",
    "type": "server_error",
    "param": null,
    "code": null
  },
  "code": null,
  "param": null,
  "type": "server_error",
  "message": "The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error.",
  "stack": "Error: The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error.\n    at Stream.iterator (/app/node_modules/openai/streaming.js:39:35)\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async handleOpenAIStream (/app/dist/libs/core/src/util/openAIUtil.js:80:22)\n    at async CustomLLMRequest.request (/app/dist/libs/core/src/components/liveCallPipeline/streams/completions/requests/customLlmRequest.js:49:9)"
}
```
You have to check the logs on your end and try to debug on your own; can't help much here.
v
I am using a custom LLM but pointed toward the OpenAI endpoint.
I am guessing it's the profanity, as per the last user message.
Is there a way in Vapi to prevent or filter out the words coming from the transcriber?
Deepgram has an option on their end to fix this, but Vapi is not allowing me to pass that value. It's `profanity_filter` (see the sketch below).
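For reference, `profanity_filter` is a documented Deepgram query parameter. A minimal sketch of calling Deepgram directly with it, which is the behavior being asked for above; the file name and environment variable are placeholders:

```typescript
import { readFile } from "node:fs/promises";

// Send audio straight to Deepgram with profanity filtering enabled.
const audio = await readFile("call.wav"); // placeholder audio file
const response = await fetch(
  "https://api.deepgram.com/v1/listen?profanity_filter=true",
  {
    method: "POST",
    headers: {
      Authorization: `Token ${process.env.DEEPGRAM_API_KEY}`,
      "Content-Type": "audio/wav",
    },
    body: audio,
  },
);
console.log(await response.json());
```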
s
Hmm, some words you're generating are being blocked by OpenAI. As of now we don't allow using a filter parameter on the transcriber,
so you have to make sure such words are not generated.
v
So I think my last option is to route the OpenAI requests through a proxy endpoint on my server and filter them there? This could add latency, but it looks like the only option.
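A minimal sketch of that proxy, assuming a Node/Express server whose URL Vapi would be pointed at as the custom LLM endpoint; the route path, word list, and end-user ID are illustrative assumptions:

```typescript
import express from "express";
import { Readable } from "node:stream";

const app = express();
app.use(express.json());

const BLOCKED = /\b(badword1|badword2)\b/gi; // placeholder word list

// OpenAI-compatible endpoint: mask blocked words in the incoming messages,
// inject a `user` id, then forward the request to OpenAI unchanged.
app.post("/v1/chat/completions", async (req, res) => {
  const body = req.body;

  // Mask blocked words in every string message before it reaches OpenAI.
  body.messages = (body.messages ?? []).map((m: any) =>
    typeof m.content === "string"
      ? { ...m, content: m.content.replace(BLOCKED, "****") }
      : m,
  );
  body.user = "end-user-1234"; // hypothetical end-user ID

  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });

  // Mirror status and content type, then pipe the body through unchanged so
  // streaming (SSE) responses still work.
  res.status(upstream.status);
  res.set("Content-Type", upstream.headers.get("content-type") ?? "application/json");
  if (upstream.body) Readable.fromWeb(upstream.body as any).pipe(res);
  else res.end();
});

app.listen(3000);
```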
@User I am not sure now if it's related to the profane words. This call dropped too; let me know. The previous one was using gpt-4o-mini, but this one is a fine-tuned model.
db905ebc-4a61-4d7d-83ba-84aaa3cb9e53
Password for the dump shared over the DM.
You can see in the logs that the request failure comes from the LLM, not from Vapi.
v
Ok let me check
e
@Vijay how are you getting your fine-tuned OpenAI model to work? I swear I'm doing everything right, but when I call my phone, the assistant says the opening message and then, as soon as I respond, it ends the call. My ended-call reason is "pipeline error: custom LLM failed", but there's no other explanation. It works in the Vapi web UI, but not outside of it.
s
Create a support ticket, @Edash.