Babelbeez Headless SDK Demo
Example of using `BabelbeezClient` from `@babelbeez/sdk` to run a browser-based Voice Agent (text + voice).
- When you call `connect()`, the browser will ask for microphone access.
- This demo shows hybrid usage: you can talk, type, or both. Text input uses `sendUserText()` on the same live session and may interrupt spoken responses when necessary (sketched below).
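A rough sketch of that hybrid flow. The options-object constructor shape, the placeholder ID, and the assumption that `connect()` returns a promise are not confirmed by this page; only the names `BabelbeezClient`, `publicChatbotId`, `connect()`, and `sendUserText()` are documented here.

```ts
import { BabelbeezClient } from '@babelbeez/sdk';

// Assumed constructor shape: the page only says the client is created with a
// Voice Agent publicChatbotId configured in the Babelbeez Dashboard.
const client = new BabelbeezClient({
  publicChatbotId: 'YOUR_PUBLIC_CHATBOT_ID', // placeholder, not a real ID
});

export async function startHybridSession(): Promise<void> {
  // connect() starts the live voice session; the browser prompts for
  // microphone access at this point. (That it returns a promise is assumed.)
  await client.connect();

  // Typed input travels over the same live session and, per the demo notes,
  // may interrupt a spoken response that is still playing.
  client.sendUserText('Hi! Can you hear me and read me?');
}
```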
What this page demonstrates
- Direct instantiation of `BabelbeezClient` from `@babelbeez/sdk` using a Voice Agent `publicChatbotId` configured in the Babelbeez Dashboard.
- Core events from the README in action: `buttonState`, `transcript`, `session:start`, `session:end`, plus `error` for failure states – all wired into a custom UI, without any embed loader or widget (see the event-wiring sketch after this list).
- Hybrid usage via `connect()` (microphone audio) and `sendUserText()` (typed messages) on the same live session.
- Handoff flow handling using `handoff:show`/`handoff:hide` events and the `handleHandoffSubmit()`/`handleHandoffCancel()` methods described in the SDK API.
- Browser-first behavior: WebRTC + microphone permissions are fully managed by the SDK; your UI only calls the documented methods and responds to documented events.
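A sketch of that event wiring, reusing the `client` from the earlier snippet. The EventEmitter-style `.on(event, handler)` API, the payload shapes, the DOM element IDs, and the handoff form fields are all assumptions; only the event names and handoff method names come from this page.

```ts
// Assumptions: an EventEmitter-style client.on(event, handler) API, the payload
// shapes, and the element IDs; only the event/method names are documented.
const statusEl = document.querySelector<HTMLElement>('#status')!;
const transcriptEl = document.querySelector<HTMLElement>('#transcript')!;
const handoffForm = document.querySelector<HTMLFormElement>('#handoff-form')!;

client.on('buttonState', (state: string) => {
  // Drive a custom talk button from the SDK's reported state.
  document.querySelector<HTMLButtonElement>('#talk')!.dataset.state = state;
});

client.on('transcript', (entry: { role: string; text: string }) => {
  // Append each user/agent line to a custom transcript view (shape assumed).
  const line = document.createElement('p');
  line.textContent = `${entry.role}: ${entry.text}`;
  transcriptEl.appendChild(line);
});

client.on('session:start', () => { statusEl.textContent = 'Connected'; });
client.on('session:end', () => { statusEl.textContent = 'Idle'; });
client.on('error', (err: unknown) => { statusEl.textContent = `Error: ${String(err)}`; });

// Handoff flow: show a custom form on handoff:show, hide it on handoff:hide,
// and report the outcome through the documented handlers.
client.on('handoff:show', () => { handoffForm.hidden = false; });
client.on('handoff:hide', () => { handoffForm.hidden = true; });

handoffForm.addEventListener('submit', (event) => {
  event.preventDefault();
  // Field names are illustrative; pass whatever your handoff form collects.
  const data = new FormData(handoffForm);
  client.handleHandoffSubmit({ name: data.get('name'), email: data.get('email') });
});

document.querySelector<HTMLButtonElement>('#handoff-cancel')!
  .addEventListener('click', () => client.handleHandoffCancel());
```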
In your own app, install `@babelbeez/sdk`, create a `BabelbeezClient` with your own `publicChatbotId`, and reuse this pattern of `connect()`, `disconnect()`, `sendUserText()`, and handoff handlers to build a fully custom UI.
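For example, a minimal sketch of tying that lifecycle to your own controls, after `npm install @babelbeez/sdk` and constructing `client` as above. The element IDs and the `beforeunload` cleanup are illustrative assumptions; only the method names are documented.

```ts
// Wire the documented lifecycle methods to custom controls (IDs are illustrative).
const startButton = document.querySelector<HTMLButtonElement>('#start-call')!;
const endButton = document.querySelector<HTMLButtonElement>('#end-call')!;
const textForm = document.querySelector<HTMLFormElement>('#text-form')!;

startButton.addEventListener('click', () => client.connect());
endButton.addEventListener('click', () => client.disconnect());

textForm.addEventListener('submit', (event) => {
  event.preventDefault();
  const input = textForm.querySelector<HTMLInputElement>('input[name="message"]')!;
  client.sendUserText(input.value);
  input.value = '';
});

// Best-effort cleanup so the session does not linger if the tab closes.
window.addEventListener('beforeunload', () => client.disconnect());
```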