Installation and Setup

Learn how to install, set up, and run the Estuary Lens Studio SDK to integrate conversational AI characters directly within your Snap Spectacles projects.

Introduction

The Estuary Lens Studio SDK allows you to seamlessly integrate Estuary's advanced conversational AI characters into your Lens Studio projects. This guide walks you through the complete setup process to get your first AI-powered Lens running on Spectacles.


Prerequisites

Before you begin, ensure you have:

Requirement | Version/Details
--- | ---
Lens Studio | 5.15.3
Estuary Account | Sign up at app.estuary-ai.com
API Key | Generated from your Estuary dashboard
Character ID | The UUID of your AI character
Snap Spectacles | Required for testing

Installation Guide

Follow these steps to install and configure the Estuary SDK.

Download the SDK

Download the latest Estuary SDK package:

  • Option A: Download from GitHub releases
  • Option B: Use the .lspkg package file directly

The SDK package includes:

  • Core SDK source files (src/)
  • Example scripts (Examples/)
  • License and documentation

Create a New Lens Studio Project

  1. Open Lens Studio
  2. Click New Project
  3. Choose a template (recommended: Blank Project for Spectacles)
  4. Name your project and click Create

Spectacles Template

For the best experience, use a Spectacles-compatible template or ensure your project is configured for Spectacles deployment.

Import the SDK Package

  1. Navigate to Assets Panel → Right-click → Import
  2. Select the Estuary SDK folder or .lspkg file
  3. The SDK will be imported into your Assets

Your project structure should look like:

Assets/
├── estuary-lens-studio-sdk/
│   ├── src/
│   │   ├── Components/
│   │   │   ├── EstuaryActionManager.ts
│   │   │   ├── EstuaryCharacter.ts
│   │   │   ├── EstuaryCredentials.ts
│   │   │   ├── EstuaryManager.ts
│   │   │   ├── EstuaryMicrophone.ts
│   │   │   └── VisionIntentDetector.ts
│   │   ├── Core/
│   │   │   ├── EstuaryClient.ts
│   │   │   ├── EstuaryConfig.ts
│   │   │   └── EstuaryEvents.ts
│   │   ├── Models/
│   │   └── Utilities/
│   └── Examples/
│       ├── EstuaryVoiceConnection.ts
│       ├── EstuaryCamera.ts
│       └── ExampleCharacterActions.ts
└── ... (your other assets)

Import Required Dependencies

The Estuary SDK requires additional Lens Studio packages:

RemoteServiceGateway.lspkg

This package provides essential components:

  • MicrophoneRecorder - For capturing microphone audio
  • DynamicAudioOutput - For playing TTS audio responses

To import:

  1. Go to Asset Library in Lens Studio
  2. Search for "RemoteServiceGateway"
  3. Click Import to add it to your project

Project Setup

Setting Up Your Credentials

  1. Create a new SceneObject (e.g., name it "Estuary Credentials")
  2. Add the EstuaryCredentials script component
  3. Configure the following fields in the Inspector:

Field | Description
--- | ---
API Key | Your Estuary API key from the dashboard
Character ID | The UUID of your AI character
Debug Mode | Enable for detailed logging (recommended during development)
// The EstuaryCredentials component handles:
// - API key storage
// - Character ID configuration
// - Automatic User ID generation and persistence
// - Debug mode toggling
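The user ID persistence described above can be sketched in plain TypeScript. Everything below is a hypothetical illustration: `PersistentStore`, `CredentialsSketch`, the storage key, and the UUID helper are assumptions for the sketch, not the SDK's actual API.

```typescript
// Hypothetical sketch: store the API key and character ID, and generate
// a user ID once, persisting it so the same user is recognized across
// sessions. Illustrative only -- not the real EstuaryCredentials code.

interface PersistentStore {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

// Simple v4-style UUID generator (illustrative only).
function generateUuid(): string {
  return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, (c) => {
    const r = (Math.random() * 16) | 0;
    const v = c === "x" ? r : (r & 0x3) | 0x8;
    return v.toString(16);
  });
}

class CredentialsSketch {
  readonly userId: string;

  constructor(
    readonly apiKey: string,
    readonly characterId: string,
    store: PersistentStore,
    readonly debugMode = false,
  ) {
    // Reuse a previously generated user ID if one was persisted.
    const existing = store.get("estuary_user_id");
    this.userId = existing ?? generateUuid();
    if (!existing) store.set("estuary_user_id", this.userId);
  }
}

// Usage: the same store yields the same user ID on a second run.
const memory = new Map<string, string>();
const store: PersistentStore = {
  get: (k) => memory.get(k),
  set: (k, v) => { memory.set(k, v); },
};
const first = new CredentialsSketch("sk-demo", "character-uuid", store);
const second = new CredentialsSketch("sk-demo", "character-uuid", store);
console.log(first.userId === second.userId); // true
```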

Setting Up the Connection

  1. Create a new SceneObject (e.g., "Estuary Connection")
  2. Add the EstuaryVoiceConnection script component (from Examples/EstuaryVoiceConnection.ts)
  3. Configure the inputs in the Inspector:

Input | Description
--- | ---
Credentials Object | Drag your EstuaryCredentials SceneObject here
Internet Module | Connect the InternetModule from your scene
Microphone Recorder Object | SceneObject with the MicrophoneRecorder script
Dynamic Audio Output Object | SceneObject with the DynamicAudioOutput script
Enable Vision Intent Detection | Enable natural language camera activation (default: true)
Vision Confidence Threshold | Confidence threshold for the camera trigger (default: 0.7)
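The Vision Confidence Threshold input gates when speech activates the camera: an intent detector scores each utterance, and the camera triggers only when the score meets the threshold. The sketch below illustrates the idea in plain TypeScript; the keyword scorer is a toy stand-in (real intent detection would use a language model), and none of these names come from the SDK.

```typescript
// Hypothetical sketch of the confidence-threshold gate described above.

const VISION_CUES = ["look at", "what do you see", "can you see", "describe this"];

// Toy scorer: real detection would score intent with a model, not keywords.
function visionIntentConfidence(utterance: string): number {
  const text = utterance.toLowerCase();
  return VISION_CUES.some((cue) => text.includes(cue)) ? 0.9 : 0.1;
}

function shouldTriggerCamera(utterance: string, threshold = 0.7): boolean {
  return visionIntentConfidence(utterance) >= threshold;
}

console.log(shouldTriggerCamera("Hey, look at this painting")); // true
console.log(shouldTriggerCamera("Tell me a joke"));             // false
```

Raising the threshold makes camera activation more conservative; lowering it makes the character "look" more eagerly.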

Setting Up Audio Components

Microphone Input

  1. Create a SceneObject (e.g., "Microphone")
  2. Add the MicrophoneRecorder script from RemoteServiceGateway
  3. Add a Microphone Audio asset if required

Audio Output

  1. Create a SceneObject (e.g., "Audio Output")
  2. Add the DynamicAudioOutput script
  3. Add an Audio Component to the same object
  4. Create an Audio Track asset and assign it to DynamicAudioOutput
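DynamicAudioOutput plays TTS audio streamed from the server (the connection logs later in this guide show it configured at 16000 Hz). A minimal sketch of the underlying idea, queueing 16 kHz PCM frames and tracking how much audio is buffered, might look like this. The class is illustrative, not the package's implementation:

```typescript
// Illustrative sketch: buffer incoming 16 kHz PCM16 frames and report
// how much audio is queued for playback. Not the real DynamicAudioOutput.

class Pcm16Queue {
  private frames: Int16Array[] = [];

  constructor(readonly sampleRate = 16000) {}

  enqueue(frame: Int16Array): void {
    this.frames.push(frame);
  }

  // Total buffered samples across all queued frames.
  get bufferedSamples(): number {
    return this.frames.reduce((n, f) => n + f.length, 0);
  }

  // Seconds of audio currently queued.
  get bufferedSeconds(): number {
    return this.bufferedSamples / this.sampleRate;
  }

  dequeue(): Int16Array | undefined {
    return this.frames.shift();
  }
}

// Usage: two 160 ms frames (2560 samples each at 16 kHz) buffer 0.32 s.
const queue = new Pcm16Queue();
queue.enqueue(new Int16Array(2560));
queue.enqueue(new Int16Array(2560));
console.log(queue.bufferedSeconds); // 0.32
```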

Adding the InternetModule

  1. In the Scene Hierarchy, right-click and add a resource
  2. Search for Internet Module and add it
  3. Connect it to your EstuaryVoiceConnection component

Setting Up Camera (Optional)

To enable AI vision capabilities (camera capture for image analysis):

  1. Create a new SceneObject (e.g., "Estuary Camera")
  2. Add the EstuaryCamera script component (from Examples/EstuaryCamera.ts)
  3. Configure the inputs in the Inspector:

Input | Description
--- | ---
Debug Mode | Enable debug logging (default: true)
Capture Resolution | Image resolution in pixels (default: 512)
Enable Vision Acknowledgment | The AI speaks an acknowledgment before analyzing (default: true)

The camera integrates automatically with EstuaryVoiceConnection's vision intent detection.
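One common interpretation of a square Capture Resolution setting (default 512) is fitting the captured image within that bound while preserving aspect ratio. The helper below is an assumption about what the setting means, sketched for illustration only, and is not SDK code:

```typescript
// Illustrative sketch: fit an image within a square capture resolution
// (default 512 px) while preserving aspect ratio. The resize math is an
// assumption for illustration, not the EstuaryCamera implementation.

function fitToCaptureResolution(
  width: number,
  height: number,
  captureResolution = 512,
): { width: number; height: number } {
  const scale = captureResolution / Math.max(width, height);
  if (scale >= 1) return { width, height }; // already within bounds
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}

console.log(fitToCaptureResolution(1024, 768)); // { width: 512, height: 384 }
```

Smaller captures upload faster over the connection at the cost of visual detail available to the AI.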


Complete Scene Setup

Here's what your final scene hierarchy should look like:

Scene
├── Estuary Credentials [EstuaryCredentials script]
├── Estuary Connection [EstuaryVoiceConnection script]
├── Estuary Camera [EstuaryCamera script] (optional)
├── Microphone [MicrophoneRecorder script]
├── Audio Output [DynamicAudioOutput script + AudioComponent]
├── Internet Module [InternetModule resource]
└── ... (your other scene objects)

Testing Your Setup

In Lens Studio Desktop Preview

Simulator Limitation

Lens Studio does not support Acoustic Echo Cancellation (AEC), so characters may hear and respond to themselves. We highly recommend testing conversations with headphones for the best experience.

Simulator Limitation

The Camera Module only functions when deployed to Spectacles hardware due to Lens Studio API limitations.

  1. Enable Spectacles in your project
    In Lens Studio, open Project → Project Info and ensure Made For Spectacles is checked. This enables Spectacles connectivity in the preview flow.

On Spectacles Hardware

  1. Connect Spectacles to Lens Studio (choose one):

    • Wired (recommended): Plug Spectacles into your computer with a USB-C cable. In the Spectacles mobile app, go to Developer Settings → Lens Development and turn on Enable Wired Connectivity (one-time setting). Spectacles will connect automatically when plugged in.
    • Wireless: Sign into Lens Studio with the same Snap account paired to your Spectacles. Put Spectacles and your computer on the same Wi‑Fi network (no client/AP isolation). Keep Spectacles awake with the display on.
  2. Send the Lens to Spectacles
    In the Lens Studio toolbar, click Preview Lens. The first time, Lens Studio connects to Spectacles; after that, it sends your Lens. The Lens appears in the Draft folder on Spectacles (you can enable Send On Project Save in Preferences for faster iteration).

  3. Put on your Spectacles and start talking!

When successfully connected, you'll see logs like:

[EstuaryCredentials] Credentials configured successfully
[SimpleAutoConnect] MicrophoneRecorder configured successfully
[SimpleAutoConnect] DynamicAudioOutput configured (16000Hz)
[SimpleAutoConnect] Vision intent detection enabled
===========================================
Connected! Starting mic stream...
Session: abc123-def456-...
===========================================

Troubleshooting

Common Issues

Issue | Solution
--- | ---
"InternetModule is required" | Ensure an InternetModule is added to your scene and connected
"No EstuaryCredentials found" | Create and connect the EstuaryCredentials SceneObject
"MicrophoneRecorder not found" | Import RemoteServiceGateway.lspkg and add the MicrophoneRecorder component
Connection closes immediately | Check that your API key and Character ID are correct
No audio playback | Ensure DynamicAudioOutput has an AudioComponent and an AudioTrack

Debug Mode

Enable debug mode in EstuaryCredentials for verbose logging:

// In EstuaryCredentials Inspector:
debugMode: true

This will log:

  • Connection state changes
  • Audio streaming statistics
  • Bot responses and STT transcripts
  • Action triggers
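The debug toggle amounts to gating every log call on the flag. A minimal sketch of that pattern, with hypothetical names (the SDK's internal logger is not documented here), looks like this:

```typescript
// Hypothetical sketch of debug-gated logging: messages are emitted only
// when debugMode is on, mirroring the toggle described above.

type LogSink = (message: string) => void;

class DebugLogger {
  readonly lines: string[] = [];

  constructor(
    private debugMode: boolean,
    private sink: LogSink = () => { /* would call print() in Lens Studio */ },
  ) {}

  log(tag: string, message: string): void {
    if (!this.debugMode) return;
    const line = `[${tag}] ${message}`;
    this.lines.push(line);
    this.sink(line);
  }
}

// With debug on, messages are recorded in the SDK's log style...
const logger = new DebugLogger(true);
logger.log("EstuaryCredentials", "Credentials configured successfully");
console.log(logger.lines[0]); // "[EstuaryCredentials] Credentials configured successfully"

// ...with debug off, the same call is a no-op.
const silent = new DebugLogger(false);
silent.log("EstuaryCredentials", "hidden");
console.log(silent.lines.length); // 0
```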

Next Steps

Now that your SDK is installed and configured:

  1. Voice Connection Guide - Learn the details of voice input/output
  2. User Management - Understand User ID persistence
  3. Action System - Trigger scene actions from AI responses
  4. Camera Module - Enable AI vision capabilities
  5. API Reference - Detailed component documentation