Damian
03/04/2025, 1:44 AM

Vapi Support Bot
03/04/2025, 1:46 AM

1. **Initiate the Call**: Start an outbound call through the Vapi API:

```bash
curl 'https://api.vapi.ai/call/phone' \
-H 'authorization: Bearer YOUR_API_KEY' \
-H 'content-type: application/json' \
--data-raw '{
"assistantId": "YOUR_ASSISTANT_ID",
"customer": {
"number": "+12345678913"
},
"phoneNumberId": "YOUR_PHONE_NUMBER_ID"
}'
```

2. **Enable Real-Time Streaming**: After initiating the call, you will receive a `listenUrl`. This URL is used to establish a WebSocket connection for real-time audio streaming.
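
As an illustration, here is a minimal sketch of pulling the `listenUrl` out of the call-creation response. The `monitor.listenUrl` field path is an assumption, so verify it against the current Vapi API reference:

```javascript
// Minimal sketch: create the call and read back the listenUrl.
// The response shape (call.monitor.listenUrl) is an assumption --
// confirm the exact field path in the Vapi API reference.
// Requires Node 18+ for the global fetch API.
const response = await fetch('https://api.vapi.ai/call/phone', {
  method: 'POST',
  headers: {
    'authorization': 'Bearer YOUR_API_KEY',
    'content-type': 'application/json'
  },
  body: JSON.stringify({
    assistantId: 'YOUR_ASSISTANT_ID',
    phoneNumberId: 'YOUR_PHONE_NUMBER_ID',
    customer: { number: '+12345678913' }
  })
});

const call = await response.json();
const listenUrl = call.monitor?.listenUrl; // assumed field path
```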
3. **Connect to the WebSocket**: Use the `listenUrl` to connect to the WebSocket and start receiving audio data. Here’s a simple example in JavaScript:

```javascript
const WebSocket = require('ws');

const ws = new WebSocket("wss://your.listen.url");

ws.on('open', () => {
  console.log('WebSocket connection established');
});

ws.on('message', (data) => {
  // Send the audio data to your gender-recognition API here
  sendToGenderRecognitionAPI(data);
});

ws.on('close', () => {
  console.log('WebSocket connection closed');
});

ws.on('error', (error) => {
  console.error('WebSocket error:', error);
});
```
4. **Process Audio Data**: In the `message` event of the WebSocket, you can send the received audio data to your gender-recognition API for processing.
5. **Handle Responses**: Make sure to handle any responses from the gender-recognition API as needed.
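
To make steps 4 and 5 concrete, here is one way `sendToGenderRecognitionAPI` could look. The endpoint URL and the `gender` response field are hypothetical placeholders for whichever recognition service you use:

```javascript
// Hypothetical helper used in the 'message' handler above. The endpoint
// URL and the `gender` response field are placeholders -- substitute
// your actual gender-recognition service and its response shape.
// Requires Node 18+ for the global fetch API.
async function sendToGenderRecognitionAPI(audioChunk) {
  const response = await fetch('https://your-recognition-service.example/analyze', {
    method: 'POST',
    headers: { 'content-type': 'application/octet-stream' },
    body: audioChunk
  });

  const result = await response.json();
  console.log('Detected gender:', result.gender); // placeholder field
  return result;
}
```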
### Additional Considerations:
- Ensure that your gender-recognition tool can accept real-time audio input and is compatible with the audio format being sent.
- Be mindful of compliance and privacy regulations when handling audio data.

Damian
03/04/2025, 1:48 AM

Kyle Brunker
03/05/2025, 5:25 PM

1. Set up your custom transcriber WebSocket server:

```javascript
import WebSocket from 'ws';
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
  console.log('New connection established');

  ws.on('message', async (data, isBinary) => {
    if (isBinary) {
      // Handle binary audio data
      const audioBuffer = Buffer.from(data);
      // Send to your gender recognition API
      const gender = await yourGenderRecognitionAPI(audioBuffer);
      // Send back the result
      ws.send(JSON.stringify({
        type: 'transcriber-response',
        transcription: `Speaker gender: ${gender}`,
        channel: 'customer'
      }));
    } else {
      // Handle the initial configuration message
      console.log('Received config:', data.toString());
    }
  });
});
```
2. Configure VAPI to use your custom transcriber:

```javascript
const callConfig = {
  transcriber: {
    provider: 'custom-transcriber',
    server: {
      url: 'ws://your-server:8080',
      headers: {
        'Authorization': 'Bearer your-auth-token'
      }
    }
  }
  // ... other call configuration
};
```
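
To wire this in, something like the following could create the call with the custom transcriber attached. The `assistantOverrides` placement is an assumption, so check the current Vapi API reference for where the transcriber block belongs:

```javascript
// Hedged sketch: create the call with the custom transcriber attached.
// The `assistantOverrides` placement is an assumption -- verify where
// the transcriber block belongs in the current Vapi API reference.
// Requires Node 18+ for the global fetch API.
async function createCall() {
  const response = await fetch('https://api.vapi.ai/call/phone', {
    method: 'POST',
    headers: {
      'authorization': 'Bearer YOUR_API_KEY',
      'content-type': 'application/json'
    },
    body: JSON.stringify({
      assistantId: 'YOUR_ASSISTANT_ID',
      phoneNumberId: 'YOUR_PHONE_NUMBER_ID',
      customer: { number: '+12345678913' },
      assistantOverrides: callConfig // transcriber config from above
    })
  });
  return response.json();
}
```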
Key points to note:
- VAPI sends audio in 16-bit PCM format
- Audio is sent as binary WebSocket frames
- The sample rate is specified in the initial configuration message
- You can send back results using the `transcriber-response` message type
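
For example, the connection handler above could be extended to capture the sample rate from that first non-binary message. The `sampleRate` field name is an assumption, so log the raw message to confirm the exact shape:

```javascript
// Extends the server sketch above: remember the sample rate announced
// in the initial config message, then treat binary frames as 16-bit PCM
// at that rate. The `sampleRate` field name is an assumption.
wss.on('connection', (ws) => {
  let sampleRate = null;

  ws.on('message', (data, isBinary) => {
    if (!isBinary) {
      const config = JSON.parse(data.toString());
      sampleRate = config.sampleRate ?? null; // assumed field name
      console.log('Negotiated sample rate:', sampleRate);
    } else if (sampleRate !== null) {
      // Binary frames are raw 16-bit PCM audio at `sampleRate`
      handleAudioFrame(data, sampleRate); // placeholder for your pipeline
    }
  });
});
```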
Let me know if you need any clarification or have questions about specific parts of the implementation!