Problem with tool calling in a custom LLM
# support
Tanush: I am using a fine-tuned version of an LLM that returns function calls as XML. I parse it on my server and send it on in the OpenAI format. This is what shows up in the call logs. What should I do?
`[WARN] Model request failed (attempt #1, reason: (toolType not set))`
v: @Tanush It seems there is an issue with how the function calls are being formatted or sent from your server in the OpenAI format. To address this warning, make sure the function calls are properly formatted in the JSON payload and that `toolType` is set to indicate the type of tool you are using. Here is what you can do:
1. Review the XML-to-JSON conversion on your server to ensure the function calls are translated into the JSON structure expected by the OpenAI-format API (see the sketch below).
2. Make sure the `toolType` field is included in your JSON payload and set to the value that corresponds to the tool you are using.
3. If you are unsure about the correct format or the value to set for `toolType`, refer to the OpenAI API documentation for guidance on how to structure your requests.
If you have checked these points and the issue persists, or if you need further help with the specific implementation details, please ask support for more info.
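For reference, here is a minimal sketch of step 1, assuming the fine-tuned model emits tool calls as a `<tool_call name="...">` element with `<arg>` children. That XML shape, the `xml_to_openai_tool_call` helper, and the example values are assumptions for illustration, not the actual format used in this thread.

```python
# Minimal sketch: convert a hypothetical XML tool call emitted by the model
# into an OpenAI-style "tool_calls" entry for the assistant message.
import json
import uuid
import xml.etree.ElementTree as ET

def xml_to_openai_tool_call(xml_text: str) -> dict:
    # Assumed shape: <tool_call name="endCall"><arg name="reason" value="..."/></tool_call>
    root = ET.fromstring(xml_text)
    arguments = {arg.get("name"): arg.get("value") for arg in root.findall("arg")}
    return {
        "id": f"call_{uuid.uuid4().hex[:8]}",
        "type": "function",
        "function": {
            "name": root.get("name"),
            # OpenAI expects arguments as a JSON-encoded string, not a dict
            "arguments": json.dumps(arguments),
        },
    }

if __name__ == "__main__":
    xml = '<tool_call name="endCall"><arg name="reason" value="user_done"/></tool_call>'
    message = {"role": "assistant", "content": None,
               "tool_calls": [xml_to_openai_tool_call(xml)]}
    print(json.dumps(message, indent=2))
```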
Tanush: Still not solved @User
Shubham Bajaj: Can you share the call ID?
Tanush: @Shubham Bajaj 4e259fe6-bf95-4c7c-bf06-329dbce64f6d
Shubham Bajaj: It looks like you're trying to use the `endCall` tool and its type couldn't be matched. Instead of creating the function yourself, simply add this to your tools configuration:
```json
{
  "tools": [
    { "type": "endCall" }
  ]
}
```
Let me know how it goes.
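As a hedged illustration of the difference, the snippet below contrasts the built-in `endCall` entry from the reply above with a regular function-style tool definition in the same `tools` array. The surrounding payload shape and the `lookup_order` function are assumptions for illustration, not the platform's exact schema.

```python
# Illustrative only: a built-in tool type needs no function definition,
# while a custom tool is declared with type "function" plus a schema.
import json

tools = [
    # Built-in tool: only the type is required, so the tool type resolves cleanly.
    {"type": "endCall"},
    # Custom function tool (hypothetical example) for comparison.
    {
        "type": "function",
        "function": {
            "name": "lookup_order",
            "description": "Look up an order by its ID.",
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        },
    },
]

# Assumed payload shape: the tools array nested under a "tools" key.
print(json.dumps({"tools": tools}, indent=2))
```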