At the start of every call, VoxCore fetches the bot configuration from VoxBridge. The config defines the system prompt, service providers, voice settings, tools, and post-call behavior.
Config fetch
```
GET {VOXBRIDGE_CONFIG_URL}/{bot_id}?caller_id={caller}&stream_id={call_id}
X-VoxCore-Secret: {secret}
```
VoxBridge returns 200 with the config JSON, 503 if the bot is outside active hours, or 404 if the bot doesn’t exist.
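As a sketch, the request above can be assembled like this. The function name and the use of Python's standard library are illustrative, not part of VoxCore; the caller would then issue the GET and handle 200 (config JSON), 503 (outside active hours), and 404 (unknown bot) as described.

```python
from urllib.parse import urlencode

def build_config_request(base_url: str, bot_id: str, caller: str,
                         call_id: str, secret: str):
    """Build the URL and headers for the VoxBridge config fetch.

    Hypothetical helper: mirrors the documented request shape only.
    """
    # Query parameters are percent-encoded (e.g. "+" in a caller ID).
    query = urlencode({"caller_id": caller, "stream_id": call_id})
    url = f"{base_url}/{bot_id}?{query}"
    headers = {"X-VoxCore-Secret": secret}
    return url, headers
```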
Schema reference
Top-level fields
| Field | Type | Default | Description |
|---|---|---|---|
| session_id | string | (required) | Unique session identifier |
| webhook_url | string | (required) | URL for results webhook |
| system_prompt | string | (required) | LLM system prompt |
| opening_message | string | (required) | First message bot speaks |
| timezone | string | "Asia/Kolkata" | Timezone for timestamps |
| min_words_interruption | int | 3 | Min words before customer can interrupt bot |
| max_call_duration_seconds | int | 600 | Max call length before auto-hangup |
| voicemail_message | string | "" | Message to speak when voicemail is detected |
| pre_transfer_message | string | "" | Message to speak before call transfer |
| tools | list[object] | [] | OpenAI-format tool definitions |
| transfer_numbers | dict | {} | Map of name → phone number for transfers |
| post_call_analysis_prompt | string | null | Prompt for post-call LLM analysis |
| qc_prompt | string | null | Prompt for QC analysis |
| auto_dispositions | dict | null | Per-bot disposition overrides |
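A minimal top-level config with illustrative values for the required fields (the session ID, URL, and prompt text are placeholders; the stt, llm, tts, vad, re_engagement, and minio blocks documented below nest alongside these fields):

```json
{
  "session_id": "sess-20240101-001",
  "webhook_url": "https://example.com/voxcore/results",
  "system_prompt": "You are a helpful voice assistant for Acme Support.",
  "opening_message": "Hi, thanks for calling Acme. How can I help?",
  "timezone": "Asia/Kolkata",
  "max_call_duration_seconds": 600
}
```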
STT config (stt)
| Field | Type | Default | Description |
|---|---|---|---|
| provider | string | (required) | "deepgram" or "soniox" |
| api_key | string | "" | Provider API key |
| model | string | "stt-rt-v4" | Model identifier |
| language | string | "en" | Language code |
| extra | dict | {} | Provider-specific: endpointing, smart_format, punctuate, language_hints |
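For example, an stt block for soniox using the documented defaults might look like this (the API key and language hints are placeholders):

```json
"stt": {
  "provider": "soniox",
  "api_key": "<provider-api-key>",
  "model": "stt-rt-v4",
  "language": "en",
  "extra": { "language_hints": ["en", "hi"] }
}
```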
LLM config (llm)
| Field | Type | Default | Description |
|---|---|---|---|
| provider | string | (required) | "openai", "google", or "google_vertex" |
| api_key | string | "" | Provider API key |
| model | string | (required) | Model identifier (e.g., "gemini-2.5-flash") |
| temperature | float | 0.7 | Sampling temperature |
| max_tokens | int | 256 | Max output tokens per turn |
| extra | dict | {} | Provider-specific: top_p, top_k, frequency_penalty, presence_penalty, project_id (Vertex), location (Vertex) |
Google Vertex AI requires project_id in llm.extra. Calls fail at startup without it. location defaults to us-east4 if omitted.
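For instance, an llm block targeting Vertex could look like this (the project ID is a placeholder; the other values are the documented defaults):

```json
"llm": {
  "provider": "google_vertex",
  "model": "gemini-2.5-flash",
  "temperature": 0.7,
  "max_tokens": 256,
  "extra": {
    "project_id": "<gcp-project-id>",
    "location": "us-east4"
  }
}
```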
TTS config (tts)
| Field | Type | Default | Description |
|---|---|---|---|
| provider | string | (required) | "elevenlabs", "sarvam", or "tarang" |
| api_key | string | "" | Provider API key |
| voice_id | string | (required) | Voice identifier |
| model | string | null | Model identifier (e.g., "eleven_flash_v2_5") |
| language | string | "en" | Language code |
| extra | dict | {} | Provider-specific: speed, stability, similarity_boost, pitch, loudness, pace |
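An example tts block for elevenlabs (the API key, voice ID, and extra values are placeholders chosen for illustration):

```json
"tts": {
  "provider": "elevenlabs",
  "api_key": "<provider-api-key>",
  "voice_id": "<voice-id>",
  "model": "eleven_flash_v2_5",
  "language": "en",
  "extra": { "speed": 1.0, "stability": 0.5 }
}
```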
VAD config (vad)
| Field | Type | Default | Description |
|---|---|---|---|
| confidence | float | 0.8 | VAD confidence threshold |
| start_secs | float | 0.3 | Seconds of speech to trigger start |
| stop_secs | float | 0.6 | Seconds of silence to trigger stop |
| min_volume | float | 0.6 | Minimum volume threshold |
Re-engagement config (re_engagement)
| Field | Type | Default | Description |
|---|---|---|---|
| messages | list[string] | (required) | Prompts to speak when customer is silent |
| gap_seconds | int or list[int] | 5 | Seconds before re-engagement. List = [first, subsequent]. |
| max_retries | int | 2 | Max re-engagement attempts before RNR hangup |
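With gap_seconds as a list, the first prompt fires after the first value of silence and later prompts after the second. A sketch with illustrative message text and timings:

```json
"re_engagement": {
  "messages": ["Are you still there?", "Hello? I can wait if you need a moment."],
  "gap_seconds": [5, 8],
  "max_retries": 2
}
```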
MinIO config (minio)
| Field | Type | Default | Description |
|---|---|---|---|
| endpoint | string | "localhost:9000" | S3-compatible endpoint |
| access_key | string | "minioadmin" | Access key |
| secret_key | string | "minioadmin" | Secret key |
| bucket | string | "recordings" | Bucket name |
| secure | bool | false | Use HTTPS |
When minio is null in the config, the worker falls back to the MinIO settings from its environment variables. Set it explicitly in the bot config to use a different storage destination per bot.
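A per-bot override could look like this (endpoint, bucket, and credentials are placeholders):

```json
"minio": {
  "endpoint": "storage.example.com:9000",
  "access_key": "<access-key>",
  "secret_key": "<secret-key>",
  "bucket": "bot-recordings",
  "secure": true
}
```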