@Jean Reinhold
It's great to hear that you've been experimenting with different models to find the best fit for your transcription needs. VAPI currently supports Deepgram's Nova family of models and Talkscriber's Whisper model, but it sounds like you're interested in OpenAI's Whisper, which is not directly supported by VAPI.
However, you can retrieve the raw audio from your conversations and then use an external process to transcribe it with OpenAI's Whisper. Here's a general approach you could take:
1. **Capture Raw Audio**: Ensure that your system captures and stores the raw audio from the conversations.
2. **Classify Audio**: Use an audio classification tool or model to identify the important parts of the audio that contain cadastral information.
3. **Transcribe with Whisper**: Once you have the classified audio segments, you can use an external script or service to transcribe them with OpenAI's Whisper model.
4. **Integrate Transcriptions**: After obtaining the transcriptions, integrate the text back into your system for further processing or storage.
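The steps above could be sketched roughly as follows. Note that this is an illustration, not VAPI functionality: the recording URL is a placeholder you would need to obtain yourself, and the transcription step assumes the open-source `openai-whisper` package (`pip install openai-whisper`) is installed.

```python
"""Sketch: download a call recording, transcribe it with OpenAI's
Whisper, and merge the result into a single timestamped transcript.

Assumptions (not part of VAPI's API):
- you already have a URL pointing at the raw call audio,
- the `openai-whisper` package is installed locally.
"""
import urllib.request


def download_recording(url: str, dest: str = "call.wav") -> str:
    """Step 1: save the raw call audio to a local file."""
    urllib.request.urlretrieve(url, dest)
    return dest


def transcribe(path: str) -> list[dict]:
    """Step 3: transcribe a local audio file with Whisper.

    Returns Whisper's segment list; each segment has
    'start', 'end', and 'text' keys.
    """
    import whisper  # heavy dependency, imported lazily

    model = whisper.load_model("base")
    result = model.transcribe(path)
    return result["segments"]


def integrate(segments: list[dict]) -> str:
    """Step 4: merge timestamped segments into one transcript
    you can store or process further."""
    return "\n".join(
        f"[{s['start']:.1f}s] {s['text'].strip()}" for s in segments
    )


if __name__ == "__main__":
    audio = download_recording("https://example.com/recording.wav")
    print(integrate(transcribe(audio)))
```

Step 2 (classifying which segments contain the information you care about) is omitted here, since it depends entirely on your domain; you could filter `segments` by keyword or run a separate classifier over them before calling `integrate`.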
Unfortunately, VAPI does not have a built-in function to pass audio to OpenAI's Whisper, so the retrieval and transcription steps above would need to be implemented outside of VAPI.
If you need assistance with integrating these steps into your workflow or if you have any other questions, please feel free to ask for more support.