Cast is an autopilot layer for post-sales revenue that scales retention and expansion without adding headcount.
No—Cast doesn’t replace systems of record. It uses your CSP/CRM (and other sources) as inputs, then writes engagement and outcomes back.
A CSP is where teams track health, playbooks, tasks, and internal workflows. Cast is not a CSP.
Cast reads from your CSP/CRM (and other sources) to generate and deliver customer-facing experiences—then writes engagement and outcomes back so systems of record stay current.
The goal is to avoid rip-and-replace while adding an autopilot layer that scales influence and coverage.
It turns internal signals into consistent stakeholder influence—at scale.
Most CX/CS orgs have data and playbooks. The limiting factor is execution: getting the right narrative to the right stakeholders (especially exec sponsors) at the right time, without creating more meetings and more manual work.
Cast fills the gap between “we know what’s happening” and “we moved the account.”
It operationalizes motions across onboarding, adoption, renewals, expansion, feedback, and support deflection—while preserving governance and escalation paths.
It reduces prep and reactive work, increases coverage, and creates better moments for human judgment.
CSMs: Automates recurring reviews, stakeholder updates, and Q&A so time shifts from slide-building to proactive risk/relationship work.
Onboarding: Drives milestone completion and time-to-value with consistent guidance, reminders, and “what’s next” interventions.
AMs: Packages value proof, benchmarking, and expansion signals into executive-ready narratives and next steps—without relying on constant manual orchestration.
It’s proactive and orchestrated—using approachable business reviews as a scalable growth engine, not just assisting someone inside a UI.
Copilots help a person do work faster (draft emails, summarize calls, propose next steps).
Chatbots respond when someone asks a question.
“Digital CS” often means generic campaigns and one-size messaging.
Cast Autopilot orchestrates a system that delivers approachable, customer-ready business reviews and guidance on a cadence—in-app for active users and in-inbox for executives and inactive users—so you can influence stakeholders who don’t log in.
It routes questions safely and escalates to specialized agents and humans when judgment is needed—turning post-sales into a repeatable growth engine without prompt engineering or manual orchestration.
Those are video creation tools. Cast is a data-driven customer communication system.
Synthesia/HeyGen: Great for generic avatar videos—but they render static pixels. If data changes, you re-render. They don’t natively read your CRM to generate charts live.
Loom: Great for asynchronous human recording—but requires a person to record every time.
Cast: Connects to live data to generate thousands of personalized experiences instantly. It visualizes data, applies logic (“if churn risk > high, show slide X”), supports two-way interaction (AMA), and can run without an avatar (identity-neutral mode) if preferred.
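For illustration, here is a minimal sketch of that kind of data-driven slide logic; the field names, thresholds, and slide IDs below are hypothetical, not Cast's actual schema or rules engine.

```python
# Illustrative only: field names, thresholds, and slide IDs are hypothetical,
# not Cast's actual schema or rules engine.
def select_slides(account: dict) -> list[str]:
    """Pick which slides to include based on live account data."""
    slides = ["executive_summary", "adoption_trend"]
    if account.get("churn_risk") == "high":
        slides.append("risk_mitigation_plan")          # "if churn risk is high, show slide X"
    if account.get("expansion_signal_score", 0) >= 70:
        slides.append("expansion_opportunities")
    if account.get("open_escalations", 0) > 0:
        slides.append("support_escalation_status")
    return slides

# Because the experience is regenerated from live data, the deck changes when the data changes.
print(select_slides({"churn_risk": "high", "expansion_signal_score": 82}))
```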
B2B teams that need more coverage and more executive/decision-maker influence, while growing accounts without proportionally adding headcount.
Cast is most valuable when one or more of the following are true:
When deployment constraints block it, customer-facing automation isn’t acceptable, or your operating model is a tiny set of fully staffed $10M+ accounts.
Deployment constraint: On-prem/private deployment is required and telemetry/data can’t leave the environment.
Cultural constraint: Leadership won’t allow automation to engage customers (regardless of guardrails).
Operating model mismatch: A small set of very large accounts already receive full senior coverage (senior CSM + senior AM) and stakeholder influence is already saturated.
Business model mismatch: Primarily non-recurring / bespoke projects with frequent re-negotiation and no repeatable lifecycle motion.
Governance constraint: Security/legal won’t approve the data access required to run customer-facing experiences.
Ownership constraint: Agent initiatives are owned by Product/Engineering, limiting CS/CX’s ability to buy and deploy a customer-facing system.
Prototype bias: The org prefers an internal prototype over a production-grade operating system—so it stays in “experiments” longer than it stays in outcomes.
Pricing is based on the volume of customer accounts you serve—not the number of internal seats.
Per-seat pricing punishes efficiency. Because Cast is designed to scale coverage without adding headcount, pricing is based on the number of customer accounts (companies) you serve.
This lets you grant access to your internal team (CSMs, AMs, executives, RevOps) without license penalties.
Reaching multiple contacts per account drives outcomes, so Cast includes multiple contacts per account in the base price—encouraging broad stakeholder reach rather than limiting it.
Use a future-proof architecture: stable data contracts, governed tool access, and model flexibility—so your operating model stays stable while the model layer evolves.
The practical way to avoid rework is to decouple your customer-facing motions (onboarding, business reviews, renewals, expansion, support) from the underlying model (which will keep changing).
That requires stable interfaces to data/tools, controlled policies, and consistent behavior—so you can swap or upgrade models without rewriting integrations or re-authoring everything.
Cast is built around open, composable building blocks designed for that future-proofing:
Net: you can evolve the model(s) and add built (in-house) and bought (vendor) agents over time without rebuilding core integrations, governance, or operating workflows.
Pre-trained (2.2M minutes) → Training (connect + Continuous Data Hygiene) → Post-training (brand + persona + context) → Generation (approachable business reviews + recommendations + frictionless actions).
Pre-trained
Cast starts with agents pre-trained on 2.2 million minutes of real B2B customer conversations, so interactions feel useful on day one.
Training
Cast connects to your systems of record and signals (CRM, CSP, warehouse, support, product usage, billing) through connectors and APIs—then runs Continuous Data Hygiene that transforms, validates, masks, and enriches data. This is not a one-time cleanup project. It’s an always-on pipeline that prevents messy inputs from becoming customer-facing mistakes—today and in the future.
When data is missing, Cast can still produce a credible story and next steps:
Post-training
Cast aligns the experience to your brand voice and to the audience consuming it (executives, admins, practitioners), adapting by product, segment, role, and lifecycle moment—so it feels authored and relevant, not templated.
Generation
Cast generates approachable business reviews, briefs, follow-ups, and guidance—and delivers them in-app for active users and in-inbox for executives and inactive users. Each experience includes recommendations and makes actions frictionless (book time, open a ticket, route to the right owner/agent, run a workflow, or escalate to a human when judgment is required). Engagement and outcomes can be written back to your CRM/CSP so the system of record stays current.
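As a conceptual sketch only, the four stages above can be read as a simple data flow; every function below is a hypothetical stub standing in for the stage it names, not Cast's API.

```python
# Conceptual sketch of the four-stage flow; each function is a hypothetical stub.
# (The pre-trained stage is the agents' starting behavior, not a per-run function.)

def connect_and_extract(account_id: str) -> dict:
    # Training: pull raw signals via connectors (CRM, CSP, warehouse, usage, billing).
    return {"account_id": account_id, "arr": 120_000, "email": "cfo@example.com"}

def continuous_data_hygiene(raw: dict) -> dict:
    # Training: transform, validate, mask, and enrich on an always-on basis.
    return {**raw, "email": "***masked***"}

def apply_brand_and_persona(clean: dict, audience: str) -> dict:
    # Post-training: adapt tone and emphasis to brand, role, segment, lifecycle moment.
    return {**clean, "audience": audience}

def generate_business_review(framed: dict) -> dict:
    # Generation: narrative + recommendations + frictionless actions (write-back not shown).
    return {"slides": ["summary", "adoption"], "next_actions": ["book_time"], **framed}

review = generate_business_review(
    apply_brand_and_persona(continuous_data_hygiene(connect_and_extract("acme")), "executive")
)
```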
An AI CSM creates a personalized presentation for decision-makers at every account (and for partners and internal teams), then presents it like a human would—explaining visually and handling real-time interruptions and questions.
Stakeholders don’t just receive a dashboard or a deck. They receive a guided narrative—what changed, what matters, and what to do next—built from their real account data.
During the experience, people can interrupt, ask questions, request clarification, or jump to a topic, and the presenter adapts in real time by pulling supporting content, explaining concepts visually, and summarizing the next best actions needed to move decisions forward.
No—it’s a live, interactive web experience (URL), not a static video file. It can be emailed or embedded in your app with minimal code.
A video file is outdated the moment it renders—it can’t update data, accept clicks, or answer questions. Cast generates a live presentation personalized for every contact at every account, partner, and executive—accessible from a persistent link.
When a stakeholder clicks the link:
Multiple specialized agents work together under a governed system—so each job runs reliably instead of one general bot doing everything.
Instead of one general assistant trying to do everything, Cast divides responsibilities across purpose-built agents for lifecycle guidance, renewal risk, expansion discovery, feedback collection, and support deflection.
The AI Presentation Agent coordinates these agents—and can route to the right human when judgment is required—so the motion is predictable, auditable, and measurable, not ad hoc.
Each handoff includes a summary + action request (with supporting context) so the next agent or human doesn’t have to start over.
In practice, a context-preserving handoff means the current agent packages a clear handoff bundle and hands it to the next agent (or a human). That bundle includes:
This prevents “start over” conversations, reduces customer repetition, avoids internal re-triage, and makes escalations faster and safer.
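For illustration, a handoff bundle can be modeled as a small data structure carrying the summary, the action request, and supporting context; the field names below are hypothetical.

```python
# Hypothetical shape of a context-preserving handoff bundle
# (summary + action request + supporting context, as described above).
from dataclasses import dataclass, field

@dataclass
class HandoffBundle:
    account_id: str
    summary: str                 # what has happened so far, in plain language
    action_request: str          # what the next agent or human is being asked to do
    supporting_context: dict = field(default_factory=dict)  # signals, links, prior answers

bundle = HandoffBundle(
    account_id="acme",
    summary="Customer asked about renewal pricing after a usage drop in Q3.",
    action_request="Confirm renewal terms and schedule a call this week.",
    supporting_context={"churn_risk": "medium", "last_review_url": "https://..."},
)
```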
Wherever stakeholders engage—in-app for active users (weekly or on-demand), in-inbox for executives (monthly), and in-inbox for CFO/Finance (quarterly).
Cast meets each stakeholder where they already operate—and on a cadence that matches how they consume information:
Goal: right channel, right depth, right cadence—without forcing another portal.
Not necessarily—unlike a CSP, CSMs don’t have to live in Cast to get value. CS Ops and analysts may log in for analytics.
Cast Autopilot runs customer-facing motions automatically and writes engagement/outcomes back to CRM/CSP.
Many CSMs can stay in their day-to-day tools, while a smaller set of admins/operators (often CS Ops or RevOps) log into Cast to manage configuration, governance, and analytics—including adoption and engagement trends, stakeholder reach, content performance, and the impact of business reviews on renewals and expansion.
No—many experiences are delivered in your app (nothing new) or via email without a new portal login.
Embedded in your app: customers use your existing authentication.
In-inbox delivery: executives and inactive stakeholders can consume briefings without adopting a new portal.
Customer Center: if you use an always-on hub (history, ROI, action plans), access can be authenticated and controlled based on your security and governance preferences.
Yes—especially for executive consumption via brief formats and mobile-friendly views.
Executives often consume updates on a phone between meetings. Mobile-friendly experiences, concise summaries, and clear next actions matter more than complex navigation.
Both—customers can type or talk, and the experience adapts to how each stakeholder prefers to engage.
Some stakeholders want fast typed questions; others prefer voice when reviewing live. In Cast usage patterns:
Voice interactions also tend to be multi-question conversations. Separately, Cast sees ~11.8× more questions when the agent can continue answering in-session versus transferring the user to a human—supporting the goal of resolving more in the moment while preserving escalation paths when judgment is required.
Sometimes—based on enterprise governance and customer preference.
Some orgs use SMS/chat for time-sensitive nudges; others restrict them. Channel support is often less about capability and more about policy, consent, and brand standards.
In Cast, a presentation is AI-presented and narrated; a Customer Center is a personalized microsite for self-serve reading and exploration.
Both are generated, personalized by account and recipient role, include visual slide content, and support Ask Me Anything (AMA).
Presentation (AI-presented): narrated like a presenter, interruptible, adapts live to questions.
Customer Center (personalized microsite): built for self-serve. It combines:
Customization is look/feel and tone (brand palette, logos, narrative style). Personalization is who sees what, when, and why.
Brand customization covers visual identity and narrative tone—logo placement, colors/palette, templates, typography/styling rules, and how the narrative sounds so the experience feels like your brand.
Content personalization controls relevance and logic—what metrics are emphasized, which recommendations appear, which stakeholders receive what, and cadence—based on segment, lifecycle stage, entitlements, and behavior.
The experience automatically updates as the customer journey changes; what stays stable is the governed logic that decides what to show, when to show it, and to whom—accessed via a permanent URL (think: always the latest version).
Cast also generates a fixed version—the URL tied to a specific generation campaign—as a point-in-time snapshot for auditability and alignment.
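A minimal sketch of the split, assuming hypothetical field names: brand customization stays constant, while content personalization varies the metrics, recommendations, and cadence per recipient.

```python
# Illustrative split between brand customization (look/feel/tone) and content
# personalization (who sees what, when, and why). Field names are hypothetical.
BRAND = {"palette": "#0A2540", "logo": "customer_logo.svg", "tone": "concise, executive"}

def personalize(recipient: dict, account: dict) -> dict:
    content = {"metrics": ["adoption", "roi"], "cadence": "monthly"}
    if recipient["role"] == "cfo":
        content.update(metrics=["roi", "cost_avoidance"], cadence="quarterly")
    if account["lifecycle_stage"] == "onboarding":
        content.update(metrics=["milestones", "time_to_value"], cadence="weekly")
    return {**BRAND, **content}   # same brand, different relevance and logic per recipient

print(personalize({"role": "cfo"}, {"lifecycle_stage": "live"}))
```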
Yes—CRMs are core sources for account, contact, renewal, and commercial context.
CRM data anchors the customer record: stakeholders, what they bought, renewal dates, and commercial history.
That context is essential both for targeting delivery and logging engagement back into the system the business already uses.
The systems you already run—CRM, CSP, data warehouse, product usage, support, billing, and VoC—connected via 60+ high-performance native connectors plus a universal REST API.
Most organizations already have the signals needed to improve renewal and expansion outcomes—they’re just spread across tools.
Cast connects the sources that matter for your motion, maps them to accounts/entitlements/stakeholders, and uses them to drive consistent customer-facing influence.
Cast supports:
Yes—for pilots or limited scope.
Many teams start with simpler inputs to prove value quickly, then graduate to live integrations.
Starting simple reduces time-to-first-value and helps validate which outputs stakeholders actually respond to.
Cast validates, masks, and corrects data before it becomes customer-facing using Continuous Data Hygiene (rules + AI).
Enterprises rarely have perfect “golden records.” The key requirement is that customer-facing experiences must not expose clearly incorrect or sensitive fields.
Cast applies Continuous Data Hygiene to validate and transform inputs, enforce masking rules, and suppress or flag questionable values so they don’t appear customer-facing without review.
The hygiene pipeline combines:
Net: you can start safely—even before a warehouse cleanup program is complete—because bad or sensitive data is corrected, masked, or withheld before it reaches customers.
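A minimal sketch of a single hygiene pass, assuming illustrative rules and field names: validate, mask, and withhold questionable values before anything becomes customer-facing.

```python
# Illustrative hygiene pass; rules and field names are hypothetical, not Cast's actual pipeline.
import re

MASK_FIELDS = {"email", "phone"}

def hygiene_pass(record: dict) -> dict:
    out = {}
    for key, value in record.items():
        if key in MASK_FIELDS:
            out[key] = "***"                               # masking rule
        elif key == "arr" and (value is None or value < 0):
            out[key] = None                                # suppress a clearly incorrect value for review
        elif isinstance(value, str) and re.fullmatch(r"\s*", value):
            out[key] = None                                # empty strings never reach the customer
        else:
            out[key] = value
    return out

print(hygiene_pass({"email": "cfo@acme.com", "arr": -5, "segment": "enterprise"}))
```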
No—partial data is enough to start. Missing data changes how the story is told, not whether the system is useful.
Missing data doesn’t mean you stop doing business or stop influencing outcomes.
Cast can still deliver a useful narrative by:
Net: you can move forward now, while making gaps visible and actionable instead of blocking progress.
Yes—Cast can use multiple datasets/instances with clear mapping and governance boundaries.
Large orgs often run multiple CRMs, regional datasets, and product lines (including multiple instances of the same system).
Cast can operate across them, but it requires deliberate identity mapping (accounts/contacts), entitlement definitions, and policy boundaries so each stakeholder only sees what they should—while still producing a coherent narrative across units where appropriate.
Yes—with strict allowlists and access controls.
Unstructured sources are valuable for support deflection and “how-to” guidance, but they must be governed.
The right approach is to allow only approved sources, apply role-based access, and ensure answers stay grounded—especially for anything that could become a liability.
Cast uses a confidence protocol tied to grounded sources, plus automatic escalation when confidence is low.
Preventing “made up” answers requires:
High-confidence answers are delivered directly. Medium-confidence answers include transparent caveats and an easy path to verify. Low-confidence cases escalate to a human with a full context package so nobody has to start over.
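A minimal sketch of that confidence protocol; the thresholds and return shapes below are illustrative, not Cast's actual values.

```python
# Illustrative confidence routing: answer, answer with caveats, or escalate with context.
def answer_or_escalate(question: str, answer: str, confidence: float, sources: list[str]) -> dict:
    if confidence >= 0.85 and sources:
        return {"mode": "answer", "text": answer, "sources": sources}
    if confidence >= 0.60 and sources:
        return {"mode": "answer_with_caveat",
                "text": answer + " (Please verify; the sources are linked below.)",
                "sources": sources}
    # Low confidence or no grounded source: escalate to a human with a full context package.
    return {"mode": "escalate_to_human",
            "context": {"question": question, "draft": answer, "sources": sources}}
```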
Policy controls, permissions, source allowlists, and auditability.
Guardrails include role-based access, data masking, source allowlists, prohibited-topic policies, and logging of what data was accessed and what was generated.
The goal is customer-facing automation that is observable and controllable—so governance, security, and customer trust are preserved.
Yes—CSMs can review high-stakes accounts, while Ops teams audit campaigns at scale.
Cast supports two review workflows:
Strategic (high-touch): A CSM can preview a specific presentation, edit narrative text, and override a data point if the system of record is outdated—so the “money slides” are perfect before delivery.
Scale (tech-touch): Reviewing 50,000 items one-by-one is impossible. Teams use a tabular “data grid” style view to spot-check logic, scan for anomalies (missing values, weird outliers), and approve campaigns in bulk.
Cast uses A2H smart routing: it selects the right human based on ownership + tiering + urgency—and includes a context bundle so nobody has to start over.
Routing uses your operating rules, including:
Each escalation includes a context package:
Net: A2H makes escalation governed and fast—minimizing re-triage while protecting the customer experience.
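A minimal sketch of that routing decision, assuming hypothetical tiers, thresholds, and owner fields.

```python
# Illustrative A2H routing: pick the right human from ownership + tiering + urgency,
# then attach a context bundle. Rules, tiers, and names are hypothetical.
def route_escalation(account: dict, issue: dict) -> dict:
    if issue["severity"] == "critical" or account["arr"] >= 1_000_000:
        owner = account.get("executive_sponsor", account["csm"])   # urgency/ARR overrides
    elif account["tier"] == "strategic":
        owner = account["csm"]                                     # named ownership
    else:
        owner = "cs-queue@yourco.example"                          # pooled coverage
    return {
        "route_to": owner,
        "context_bundle": {
            "summary": issue["summary"],
            "action_request": issue["ask"],
            "supporting_context": {"tier": account["tier"], "arr": account["arr"]},
        },
    }
```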
No—it’s designed to sound on-brand and authored, not templated.
Cast is built around “authored, not automated”:
Net: it reads like a well-prepared human wrote it—at scale—without turning your team into prompt engineers.
Not just an LLM wrapper—Cast uses hybrid agent design (rules-based + ML + LLM) so customer-facing work is reliable and governable.
Deterministic work matters (calculations, thresholds, routing, permissions, policy).
LLMs shine for language, summarization, synthesis, and interactive Q&A.
Cast combines approaches so outputs stay accurate, actions stay governed, and conversations stay natural.
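A minimal sketch of the hybrid split: deterministic code computes the numbers and policy, while a language model (here a placeholder callable, not a real API) only writes the prose.

```python
# Illustrative hybrid design: calculations and thresholds are deterministic,
# the LLM handles language only. The llm() argument is a placeholder callable.
def renewal_summary(account: dict, llm) -> str:
    # Deterministic: numbers and policy are computed, never guessed by the model.
    utilization = account["active_seats"] / account["licensed_seats"]
    at_risk = utilization < 0.6 and account["days_to_renewal"] < 90

    # LLM: turn the verified facts into natural, on-brand language.
    facts = (f"Utilization is {utilization:.0%}, renewal in {account['days_to_renewal']} days, "
             f"risk flag: {'yes' if at_risk else 'no'}.")
    return llm(f"Write two executive-ready sentences summarizing: {facts}")
```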
Cast is model-flexible and uses multiple providers. Deployments commonly use OpenAI (GPT) and/or Anthropic, and can also use Google (Gemini)—with separate providers for translation and voice as needed.
More:
https://school.cast.app/security-docs/ai-solution.html
https://school.cast.app/security-docs/subprocessors.html
Cast is designed so the operating model stays stable while model vendors evolve. Concretely:
Note: AI features are generally always on in production deployments (with rare exceptions for specific customers).
You can use your own OpenAI key and route through proxies/custom integrations. ElevenLabs also supports a base URL so it can point to an internal proxy or custom LLM endpoint.
Enterprises often require centralized control over model access and network egress. Cast supports:
https://school.cast.app/security-docs/ai-solution.html
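As a sketch of the bring-your-own-key / proxy pattern, the standard OpenAI Python SDK accepts a base_url, so requests can be routed through an internal gateway; the proxy hostname and key below are placeholders, and the Cast-specific configuration is covered in the security docs linked above.

```python
# Sketch only: the hostname and key are placeholders, not Cast configuration values.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ENTERPRISE_KEY",                        # your own key, managed centrally
    base_url="https://llm-proxy.internal.example.com/v1"  # egress routed through your proxy
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize Q3 adoption for Acme in two sentences."}],
)
print(resp.choices[0].message.content)
```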
No—Cast is designed so customer-facing outputs are generated without CSMs doing manual prompt work (or copying customer data into public chat tools).
If every CSM has to learn promptcraft, it won’t scale and it won’t be governable.
Cast is built so prompts, templates, and playbooks are generated and governed centrally, while CSMs focus on relationship and judgment.
This also reduces data-leak risk. Many AI tools rely on “data masking” (replacing names with tokens before sending prompts), but Cast’s own experiments show masked prompts can often be reverse-engineered from finite customer lists—so prompt-masking alone is not a safe operating model for customer-facing work.
https://cast.app/llm-data-masking-does-not-work
Engagement, action records, and feedback are written back to systems of record (typically CRM, optionally CSP) and are also available in Cast analytics—exportable via download and accessible via Cast Analytics API.
Engagement and outcomes are valuable across departments. Writing them back ensures Sales, Marketing, Success, Services, and RevOps share the same view of who engaged, what was delivered, what questions were asked, and what actions were taken—without creating another silo.
Cast also provides analytics for deeper reporting, with exports and API access for BI and workflows.
Yes—Cast provides identifiable analytics (who watched, how long, what they explored, and what they shared).
Because Cast uses unique links (and/or can integrate with your app’s authentication), it can track specific engagement:
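For illustration, a single engagement record might look like the following; the field names are hypothetical, not Cast's actual analytics schema.

```python
# Hypothetical shape of an engagement record written back to CRM/CSP
# and available via analytics export. Field names are illustrative.
engagement_event = {
    "account_id": "acme",
    "contact_email": "cfo@acme.com",
    "experience_url": "https://reviews.example.com/acme/2024-q3",
    "watch_seconds": 312,
    "sections_explored": ["roi", "renewal_terms"],
    "questions_asked": ["How is ROI calculated?"],
    "shared_with": ["vp.ops@acme.com"],
    "actions_taken": ["booked_meeting"],
}
```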
Defense-in-depth: encryption at rest + in transit, least-privilege access with MFA, and documented DR/uptime targets—backed by published security policies (and SOC 2 Type II / SOC 3 listed in the security docs hub).
A buyer-grade answer typically breaks into:
Avoid treating “prompt masking” as the primary control. Cast’s experiments show masked prompts can often be reverse-engineered from finite customer lists—so Cast is designed to reduce reliance on masking alone as a security strategy.
https://cast.app/llm-data-masking-does-not-work
No—Cast does not use one customer’s data to train systems that benefit other customers, and the AI services used are selected so submitted data is not used to train their models.
Enterprise deployments require that customer data serves that customer’s environment, not generalized model training.
Benchmarking can still exist without “training on your data”: Cast can benchmark accounts within your organization (e.g., comparing one account to peer accounts in the same segment/product/region) while keeping benchmark data private to your tenant—consistent with a “no cross-customer insights” principle.
Yes—Cast Designer supports SAML SSO with Okta, Microsoft Entra ID (Azure AD), Google Workspace, and any Generic SAML provider.
SAML 2.0 SSO for admin/operator access to Cast Designer.
Supported IdP types: Okta, Microsoft Entra ID (Azure AD), Google Workspace, Generic SAML.
Recommended method: upload/paste IdP Metadata XML (fastest + most reliable).
Manual fallback: Entity ID, SSO URL, X.509 certificate (optional logout URL).
Identity matching: email-based (NameID / required attribute: email).
Operational detail: users must be invited in Cast Designer and assigned in the IdP.
https://school.cast.app/sso-setup-guide.html
Yes—SSO enforcement is optional. When enabled, password login is disabled, so test with an admin first and keep multiple admins.
Start with SSO optional, validate assignments and access, then enable “Require SSO” once stable.
To prevent lockouts: test before enforcing, ensure admins exist in the IdP, keep multiple admins.
If something goes wrong, enforcement can be disabled by an admin; support can assist with recovery per docs.
Under ~4 weeks for a first rollout (weeks—not months), assuming normal access and a focused scope.
A practical fast path:
Typical business-user commitment: ~2–3 hours/week for the first 4 weeks.
A first experience that’s branded, data-driven, reviewed, and safe—plus a feedback loop to improve continuously.
Secure access + a few key mapping decisions + light implementation for embedding/branding.
Yes—recommended.
Start narrow (one segment/region/product and one high-value experience), prove safety/governance + stakeholder engagement + measurable impact, then scale as a repeatable rollout motion.
Yes—partners and indirect customers can receive the same “approachable business reviews” and lifecycle motions, with strict visibility boundaries.
Partner ecosystems add a second front: influencing partners who influence end customers. Cast supports:
By enforcing data boundaries at the account, partner, and role level—explicit and auditable.
The system enforces:
Yes—partners can have branded experiences while keeping governance consistent.
Experiences can be vendor-branded, partner-branded, or co-branded—tailored in tone/layout without changing underlying rules: what’s allowed, who receives it, and how escalations work.
Onboarding → success/reactivation → accountability → satisfaction (PSAT) → coaching/recaps → support deflection.
Partners don’t fail because they lack PDFs—execution degrades over time. Cast supports:
Yes—partner accountability can be measured and compared, not argued about.
You can track engagement, PSAT trends, renewal/expansion indicators by partner-managed segment, and partner health signals (risk/inactivity/regressions/improvement).
Partners get fast answers from approved sources; escalations happen only when needed and route with context.
Cast deflects repetitive partner questions using approved KB + historical ticket patterns.
When escalation is required, it routes with a handoff bundle (summary + action request + supporting context) so internal teams don’t re-triage from scratch.
It supports three fronts: direct customers, partners (and indirect customers), and internal AM/renewal orgs.
In many enterprises, revenue outcomes depend on influencing:
Yes—the same style of approachable business reviews, adapted to each party’s role.
Partners and indirect customers can receive role-specific summaries, AMA, and role-appropriate calls-to-action (partner tasks vs end-customer tasks vs vendor tasks).
Routing can follow your operating rules—partner-first, vendor-first, or tiered—without breaking the experience.
Escalation can be configured so partners handle first-line issues where appropriate, vendors handle higher-severity cases (SLA/ARR/priority thresholds), and handoffs always include context.
Cast supports 17 spoken languages, covering 97.6% of global B2B demand. (TODO: add link)
Cast can present customer experiences across spoken audio, transcripts/captions, presentation content, and the presentation player UI—with no additional effort—so global teams can deliver consistent onboarding, business reviews, and support experiences across regions without maintaining separate content per language.
Practical advantage: you don’t need to hire and staff incremental CSM capacity in every market just to deliver consistent, local-language coverage.
Cast supports:
A short, executive-ready briefing that highlights what changed, what matters, and what to do next—optimized for fast reading and easy escalation.
Executives don’t want a portal, a dashboard hunt, or a 30-slide deck.
Executive QuickBrief is designed to deliver risk, wins, ROI/value, and next actions in a repeatable format—often inbox-first.
Naming note: Some teams still say “CliffsNotes” as shorthand, but we renamed it to Executive QuickBrief to avoid confusion with CliffsNotes, the study-guide brand (owned by Course Hero).
PMF describes the degree to which a product satisfies strong market demand.
Scale (Sean Ellis Test — 4-point Likert):
“How would you feel if you could no longer use {{product}}?”
Interpretation: If >40% of users answer “Very disappointed,” that’s a strong PMF signal; if it’s <40%, the product likely needs work.
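A worked example of the threshold: with 100 responses, 46 "Very disappointed" answers is 46%, which clears the 40% bar.

```python
# Worked example of the Sean Ellis PMF threshold described above (illustrative numbers).
responses = ["Very disappointed"] * 46 + ["Somewhat disappointed"] * 34 + ["Not disappointed"] * 20

very_disappointed_pct = responses.count("Very disappointed") / len(responses) * 100
print(f"{very_disappointed_pct:.0f}% 'Very disappointed'")                     # 46%
print("Strong PMF signal" if very_disappointed_pct > 40 else "Likely needs work")
```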
A measure of how much a new product/feature/service improves the customer experience versus an alternative (previous version or competitor), defined as the difference between two experience scores.
Scale: Both questions use the same scale—either 0–10 (11-point) or 1–10 (10-point, preferred):
Delta4 = Score(new) − Score(alternative)
What “4” means: A Delta4 score ≥ 4 indicates a significant improvement (often described as behavior-changing).
Often attributed to Kunal Shah (referenced as “Delta-4 Theory by Kunal Shah”).
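A worked example of the calculation, using scores on the 1–10 scale:

```python
# Worked example of the Delta4 calculation defined above (illustrative scores).
score_new, score_alternative = 9, 4
delta4 = score_new - score_alternative   # Delta4 = Score(new) - Score(alternative)
print(delta4, "significant (behavior-changing)" if delta4 >= 4 else "not significant")
```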
Feedback captured through a dialogue (not a form), then summarized into themes and actions.
Conversational feedback reduces survey fatigue by asking follow-ups only when needed, converting answers into structured themes, and creating a clear close-the-loop output (what was heard → what changed → what’s next).
To make value defensible, reduce renewal ambiguity, and turn expansion into a logical next step.
Sharing ROI/value works best when grounded in agreed inputs, tied to customer outcomes, and paired with next actions.
It aligns internal + customer stakeholders and prevents renewal-surprise conversations.
A standard for tool/data access so agents can call approved functions and data sources through a consistent interface.
A governed gateway that connects MCP-style tools to legacy systems, enforcing permissions, logging, and safe outputs.
A protocol for agents to communicate/coordinate reliably.
A protocol for structured agent communication across a multi-agent system.
A handoff pattern where the agent packages context (summary + action request + supporting signals) and routes to the right human.
A return handoff pattern where the human decision/outcome is captured so automation can resume cleanly with context.
How easy it was for the customer to get value or resolve an issue (higher = easier / lower effort).
Likert scale (common 7-point): 1 = Very difficult … 7 = Very easy
How much effort onboarding took from the customer’s perspective (time, steps, friction, back-and-forth).
Used to spot onboarding drag early, course-correct delays, and speed time-to-value.
Likert scale (recommended 7-point): 1 = Very difficult … 7 = Very easy
A direct satisfaction score tied to an interaction or experience.
Likert scale (common 5-point): 1 = Very dissatisfied … 5 = Very satisfied
CSAT-equivalent for partners—how satisfied partners are with the program, support, and outcomes.
Likert scale (common 5-point): 1 = Very dissatisfied … 5 = Very satisfied
Starting revenue minus churn (logo + revenue churn) plus expansion (upsells/cross-sells/add-ons) over a period, typically expressed as a percentage of starting revenue.
NRR expressed in dollars, accounting for currency conversion/exchange effects (often similar operationally, but important for finance).
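A worked example with illustrative numbers, showing both the dollar view and the percentage view:

```python
# Worked example of the retention arithmetic described above (illustrative numbers).
starting_arr = 1_000_000
churned      =    80_000   # logo + revenue churn over the period
expansion    =   150_000   # upsells, cross-sells, add-ons

retained_dollars = starting_arr - churned + expansion   # dollar view (NDR-style)
nrr_pct = retained_dollars / starting_arr * 100         # as a % of starting revenue
print(retained_dollars, f"{nrr_pct:.0f}%")              # -> 1070000 107%
```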