arash
01/25/2025, 2:29 AM

Vapi
01/25/2025, 2:31 AM/openai-sse/chat/completions
endpoint with the necessary parameters, such as the prompt and other settings.
Here's a basic example of how you might set up a route in your Flask app to handle this:
```python
from flask import Flask, request, jsonify
import openai  # pre-1.0 SDK style; assumes openai.api_key is configured elsewhere

app = Flask(__name__)

@app.route('/chat-completion', methods=['POST'])
def chat_completion():
    data = request.json
    prompt = data.get('prompt')
    # Add other parameters like temperature, max_tokens, etc. as needed
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        # Include other parameters here
    )
    return jsonify(response)
```
To use this endpoint, you would send a JSON payload with the prompt and any other relevant information to your Flask app's /chat-completion route.
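For example, here is a minimal sketch of such a request, assuming the Flask app above is running locally on port 5000 (the URL and prompt text are illustrative):

```python
import requests

# Hypothetical local call to the /chat-completion route defined above.
resp = requests.post(
    "http://localhost:5000/chat-completion",
    json={"prompt": "Write a haiku about the sea"},
)
print(resp.json())
```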
If you need more detailed instructions or have specific requirements, please let me know, and I'll be happy to assist further!

Vapi
01/25/2025, 2:31 AM

Vapi
01/25/2025, 2:32 AM

A. Salem
01/25/2025, 7:53 PM

Shubham Bajaj
01/25/2025, 9:46 PM

Shubham Bajaj
01/25/2025, 9:47 PM

arash
01/26/2025, 11:20 PM

udAIan
01/28/2025, 3:25 AM
I am not getting the assistant message in the response that carries the arguments for making the tool call. The API just makes the call and returns the tool_call result.
However, I should get the assistant message with the arguments first, so I can then decide whether to call the API again to invoke the tool call or not.
Also, as I mentioned, this behavior (not getting the assistant message with arguments) only happens sometimes; on some occasions I have received this message. The unpredictability of the API is undesirable.
OpenAI tool calling for reference: https://platform.openai.com/docs/guides/function-calling#overview. From the diagram, the API is sometimes not returning step (2).
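For reference, here is a minimal sketch of the flow described above using the OpenAI Python SDK (v1.x); the model name and the get_weather tool are hypothetical assumptions, used only to illustrate the assistant message with tool_calls arguments that should come back at step (2) of the diagram:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical tool schema, used only for illustration.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# Step (2): the assistant message should carry tool_calls with the function
# name and JSON arguments, so the caller can decide whether to execute the
# tool before sending the result back to the model.
message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
```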
udAIan
01/28/2025, 4:57 PM

Shubham Bajaj
01/29/2025, 7:18 AM