Chat Completions endpoint?
# support
In the AMA today, Jordan mentioned that you guys have a chat completion endpoint that will return, as text, the next completion. I don't see that in the API docs; is this available? It would be really helpful for us to see what the model will say next, for our evals.
@arash Yes, indeed! The chat completion endpoint that Jordan mentioned is available, and it's designed to return the next completion based on the input you provide. While it might not be immediately visible in the API docs you're looking at, I can certainly help you understand how to use it. If you're looking to implement this in your Python Flask application, you would typically send a POST request to the `/openai-sse/chat/completions` endpoint with the necessary parameters, such as the prompt and other settings. Here's a basic example of how you might set up a route in your Flask app to handle this:
```python
from flask import Flask, request, jsonify
import openai

app = Flask(__name__)

@app.route('/chat-completion', methods=['POST'])
def chat_completion():
    # Read the prompt (and any optional settings) from the incoming JSON body.
    data = request.json
    prompt = data.get('prompt')
    # Add other parameters like temperature, max_tokens, etc. as needed

    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        # Include other parameters here
    )
    return jsonify(response)
```
To use this endpoint, you would send a JSON payload with the prompt and any other relevant information to your Flask app's `/chat-completion` route. If you need more detailed instructions or have specific requirements, please let me know, and I'll be happy to assist further!
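For reference, a minimal sketch of what the client-side call might look like, assuming the Flask app above is running locally on the default port 5000 and accepts a JSON body with a `prompt` field (both are assumptions for illustration, not part of the Vapi API itself):

```python
import requests

# Hypothetical local URL for the Flask route sketched above; adjust host/port as needed.
url = "http://localhost:5000/chat-completion"

# The payload shape mirrors that route: a prompt plus any optional settings.
payload = {
    "prompt": "Summarize the last user turn in one sentence.",
    # "temperature": 0.7,
    # "max_tokens": 128,
}

resp = requests.post(url, json=payload)
resp.raise_for_status()
print(resp.json())
```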
You can continue this conversation by mentioning me in the message.
Got it. Tagging @User for additional help.
Thanks @A. Salem for helping @arash.
@arash can I close this ticket now, if you have received the API endpoint URL you were looking for?
Thanks guys! Yep go ahead and close it.
@Shubham Bajaj @A. Salem I tried the chat completion [API](https://staging-api.vapi.ai/api-extended#/extended/ChatController_chat) but I'm facing an issue with tool calling. When I call the chat API with a user message that triggers a tool call, I sometimes do not get the `assistant` message in the response containing the arguments for the tool call. The API just makes the call and returns the `tool_call` result. However, I should first get the `assistant` message with the arguments, so I can decide whether to call the API again to actually invoke the tool. Also, as mentioned, this behavior (not getting the `assistant` message with arguments) only happens sometimes; on other occasions I have received this message. The unpredictability of the API is undesirable. OpenAI tool calling for reference: https://platform.openai.com/docs/guides/function-calling#overview. From the diagram, the API is sometimes not returning step (2).
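To make the expected sequence concrete, here is a minimal sketch of the flow I'm describing, assuming an OpenAI-compatible request/response shape. The endpoint URL, auth header, tool name, and exact field names below are illustrative assumptions, not confirmed details of the Vapi chat API:

```python
import requests

# Assumed endpoint and auth for illustration only; the real path/headers may differ.
CHAT_URL = "https://staging-api.vapi.ai/openai-sse/chat/completions"
HEADERS = {"Authorization": "Bearer <API_KEY>"}

# Step 1: send the user message along with the tool definitions.
first = requests.post(CHAT_URL, headers=HEADERS, json={
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for the example
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }],
}).json()

# Step 2 (the part that is sometimes missing): an assistant message whose
# tool_calls carry the function name and arguments, so the client can decide
# whether to actually run the tool.
assistant_msg = first["choices"][0]["message"]
if assistant_msg.get("tool_calls"):
    call = assistant_msg["tool_calls"][0]
    tool_result = '{"temp_c": 18}'  # placeholder result from running get_weather

    # Steps 3/4: the client runs the tool itself, then sends the result back.
    requests.post(CHAT_URL, headers=HEADERS, json={
        "messages": [
            {"role": "user", "content": "What's the weather in Paris?"},
            assistant_msg,
            {"role": "tool", "tool_call_id": call["id"], "content": tool_result},
        ],
    })
```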
Also, is there any way for the API to return the next response in the Vapi schema instead of the OpenAI schema?
@udAIan can you create a new #1211483291191083018 ticket for this, and add the steps to reproduce it so I can take a look?