# Text Chat

Send text messages to your AI character and handle streaming responses.
## Sending Text

Use `send_text()` to send a message after connecting:

```python
client.send_text("What is the weather like today?")
```
The method emits a `text` event over the WebSocket. The server processes the message through the AI pipeline and streams the response back via `bot_response` events.

`send_text()` raises `EstuaryError` with code `NOT_CONNECTED` if the client is not connected. Always call `connect()` first.
## Receiving Responses

Bot responses arrive as a stream of chunks. Each chunk is a `BotResponse` dataclass:

```python
async def on_response(response):
    if response.is_final:
        # Full response is ready
        print("Complete response:", response.text)
    else:
        # Partial chunk
        print(response.partial, end="", flush=True)

client.on("bot_response", on_response)
```
### BotResponse Fields

| Field | Type | Description |
|---|---|---|
| `text` | `str` | The full accumulated response text so far |
| `partial` | `str` | The text content of just this chunk |
| `is_final` | `bool` | `True` when the response is complete |
| `message_id` | `str` | Unique identifier for this response |
| `chunk_index` | `int` | Sequential index of this chunk (starts at 0) |
| `is_interjection` | `bool` | `True` if this is a proactive message, not a reply to user input |
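For illustration, the fields above map onto a dataclass shaped roughly like this (a sketch for reference; the SDK's actual definition may differ, and the sample values are made up):

```python
from dataclasses import dataclass

@dataclass
class BotResponse:
    text: str              # full accumulated response text so far
    partial: str           # text content of just this chunk
    is_final: bool         # True when the response is complete
    message_id: str        # unique identifier for this response
    chunk_index: int       # sequential index of this chunk (starts at 0)
    is_interjection: bool  # True for proactive messages, not replies

# A first partial chunk might look like this:
chunk = BotResponse(
    text="The weather",
    partial="The weather",
    is_final=False,
    message_id="msg-123",
    chunk_index=0,
    is_interjection=False,
)
print(chunk.partial, chunk.is_final)
```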
### Streaming Pattern

A typical response arrives as multiple events:

```text
bot_response { chunk_index: 0, partial: "The weather", is_final: False, text: "The weather" }
bot_response { chunk_index: 1, partial: " today is", is_final: False, text: "The weather today is" }
bot_response { chunk_index: 2, partial: " sunny and", is_final: False, text: "The weather today is sunny and" }
bot_response { chunk_index: 3, partial: " warm.", is_final: True, text: "The weather today is sunny and warm." }
```
The `text` field accumulates across chunks, so when `is_final` is `True` it contains the full response.
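The relationship between `partial` and `text` can be sketched with plain strings: the accumulated text after each chunk is simply the concatenation of every `partial` so far.

```python
# Simulated stream of partial chunks, matching the events above.
partials = ["The weather", " today is", " sunny and", " warm."]

accumulated = ""
for index, partial in enumerate(partials):
    accumulated += partial  # what the `text` field would contain
    is_final = index == len(partials) - 1
    print(f"chunk {index}: partial={partial!r} is_final={is_final}")

print(accumulated)  # "The weather today is sunny and warm."
```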
## Text-Only Mode

By default, sending text triggers both a text response (`bot_response`) and a voice response (`bot_voice`). To suppress the voice response and receive text only, pass `text_only=True`:

```python
# Text response only -- no TTS audio generated
client.send_text("Give me a summary of our conversation.", text_only=True)
```
This is useful for programmatic interactions where voice output is not needed, or to reduce latency and bandwidth.
## Interrupting a Response

You can interrupt an in-progress response with `interrupt()`. This tells the server to stop generating and clears any queued audio:

```python
# Interrupt the current response
client.interrupt()

# Optionally specify which message to interrupt
client.interrupt(response.message_id)
```
Listen for the server's confirmation:
```python
async def on_interrupt(data):
    print("Response interrupted:", data.message_id)

client.on("interrupt", on_interrupt)
```
## Interjections

Sometimes the character sends a message without being prompted -- for example, a greeting when you first connect, or a follow-up question. These are marked with `is_interjection`:

```python
async def on_response(response):
    if response.is_interjection and response.is_final:
        print("Character said (unprompted):", response.text)

client.on("bot_response", on_response)
```
## Send and Wait

For simple request-response flows, `send_text_and_wait()` sends a message and returns the final `BotResponse` directly -- no manual event listener needed:

```python
async def send_text_and_wait(
    text: str,
    *,
    text_only: bool = False,
    timeout: float = 20.0,
) -> BotResponse
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `text` | `str` | required | The message text |
| `text_only` | `bool` | `False` | If `True`, suppress the voice response |
| `timeout` | `float` | `20.0` | Maximum seconds to wait for a final response |
**Returns:** the final `BotResponse` (with `is_final=True`).

**Raises:** `asyncio.TimeoutError` if no final response arrives within `timeout`; `EstuaryError` with `NOT_CONNECTED` if the client is not connected.
```python
import asyncio

from estuary_sdk import EstuaryClient, EstuaryConfig

config = EstuaryConfig(
    server_url="https://api.estuary-ai.com",
    api_key="est_your_api_key",
    character_id="your-character-uuid",
    player_id="user-123",
)

async def main():
    async with EstuaryClient(config) as client:
        await client.connect()

        response = await client.send_text_and_wait("What is the capital of France?")
        print(response.text)  # "The capital of France is Paris."

        # Text-only (no TTS), longer timeout
        response = await client.send_text_and_wait(
            "Summarize our conversation.",
            text_only=True,
            timeout=30.0,
        )
        print(response.text)

asyncio.run(main())
```
The listener is registered before sending and cleaned up automatically, whether the call succeeds or times out. This replaces the manual pattern of registering a `bot_response` listener and waiting for `is_final`.
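For reference, that manual pattern can be sketched as follows. The snippet uses a minimal stand-in client (`FakeClient`, invented here so the example runs on its own); only the `on()`/`send_text()` surface mirrors the SDK:

```python
import asyncio
from dataclasses import dataclass

# Minimal stand-ins for the SDK types, for illustration only.
@dataclass
class Chunk:
    text: str
    is_final: bool

class FakeClient:
    """Fires two bot_response chunks when send_text is called."""
    def __init__(self):
        self._listeners = {}

    def on(self, event, handler):
        self._listeners[event] = handler

    def send_text(self, text):
        handler = self._listeners["bot_response"]
        loop = asyncio.get_running_loop()
        loop.create_task(handler(Chunk("Hello", False)))
        loop.create_task(handler(Chunk("Hello there.", True)))

async def send_and_wait_manually(client, text, timeout=20.0):
    """Register a listener before sending, then wait for the final chunk."""
    done = asyncio.get_running_loop().create_future()

    async def on_response(response):
        if response.is_final and not done.done():
            done.set_result(response)

    client.on("bot_response", on_response)
    client.send_text(text)
    return await asyncio.wait_for(done, timeout)

result = asyncio.run(send_and_wait_manually(FakeClient(), "hi"))
print(result.text)
```

Note that the listener is registered before `send_text()` is called, which avoids a race where the final chunk arrives before any handler exists.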
## Example: Chat Loop

Here is a complete example of an interactive text chat loop:
```python
import asyncio

from estuary_sdk import EstuaryClient, EstuaryConfig

config = EstuaryConfig(
    server_url="https://api.estuary-ai.com",
    api_key="est_your_api_key",
    character_id="your-character-uuid",
    player_id="user-123",
)

async def main():
    async with EstuaryClient(config) as client:
        async def on_response(response):
            if response.is_final:
                print(f"\nBot: {response.text}\n")

        client.on("bot_response", on_response)
        await client.connect()
        print("Connected! Type a message and press Enter.\n")

        # Run blocking input() in a thread so the event loop stays responsive.
        loop = asyncio.get_running_loop()
        while True:
            text = await loop.run_in_executor(None, input, "You: ")
            if text.strip():
                client.send_text(text.strip())

asyncio.run(main())
```
## Next Steps

- Voice (WebSocket) -- Add voice input and output
- Voice (LiveKit) -- Low-latency voice via WebRTC
- Memory & Knowledge Graph -- Query character memory