How to Use AI in Customer Journeys Without Letting It Go Off-Brand
A retailer deployed a chatbot on their returns page. The chatbot worked. It answered questions, handled common scenarios, and generally kept customers moving.
Then someone noticed it was suggesting that customers "try contacting the manufacturer directly."
Not wrong. But completely outside what the support team had agreed to. The support team's job was to own that customer relationship, handle that return, and keep the customer happy. Offloading them to a manufacturer was outside scope, outside brand, and the kind of thing that ends up in a supervisor's inbox.
The chatbot had composed that response live. Nobody had reviewed it. Nobody had approved it. Nobody had written it down anywhere. It had simply seemed reasonable to the model.
That is the core problem with deploying AI in customer journeys without a content governance model. The AI fills in the gaps. It does this confidently, fluently, and at scale. And the gaps are exactly where brand breaks down.
Start With the Worst Case
Before you deploy anything, ask one question: what is the worst thing this AI could say to a customer on this page?
This sounds pessimistic. It is actually practical. If you can enumerate the worst plausible outputs, you can design against them. If the answer is "I have no idea," you do not have a governance model yet.
For a returns page, the worst case might be: suggesting the customer cannot return an item when they can, referencing a policy that changed last quarter, or pushing them toward a channel the support team does not staff. For a loan application page, it is language that could be read as steering or discouraging an applicant, which carries fair lending exposure. For a healthcare portal, it is clinical-sounding language that a patient misreads as medical advice.
Naming the worst case forces you to define the boundaries of what the AI is allowed to say. That is not a creative limitation. It is the spec.
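One way to make that spec concrete is to write the boundaries down as data instead of prose. Here is a minimal sketch in TypeScript; the shape and every entry in it are illustrative assumptions, not a prescribed schema:

```typescript
// A boundary spec for one touchpoint: the subjects the AI may cover,
// and the named worst cases it must never produce.
interface TouchpointSpec {
  touchpoint: string;          // the page or journey step this spec governs
  allowedTopics: string[];     // the only subjects in scope at this touchpoint
  forbiddenOutputs: string[];  // enumerated worst cases, reviewed before launch
}

// Illustrative spec for the returns page from the opening anecdote.
const returnsPageSpec: TouchpointSpec = {
  touchpoint: "returns-page",
  allowedTopics: ["return eligibility", "return windows", "refund status"],
  forbiddenOutputs: [
    "telling a customer an eligible item cannot be returned",
    "citing a returns policy that has been superseded",
    "redirecting the customer to a channel support does not staff",
  ],
};
```

The schema is not the point. The point is that "worst case" stops being a conversation and becomes a list someone can review, sign off on, and check outputs against.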
The Approval Workflow
Once you know the scope, someone has to author the content and someone has to approve it. Those can be the same person on a small team. On an enterprise team, they usually are not.
A practical approval workflow looks like this (a configuration sketch follows the list):
Who writes the responses? The team closest to the customer moment. For a support page, that is the support team. For a pricing page, that is product or sales. They know what customers actually ask and what the correct answer actually is.
Who reviews for brand? Brand or content strategy, whoever owns the voice. They check that the tone matches, that the claims are accurate, that nothing is ambiguous.
Who approves for compliance or legal? Depends on your industry. Financial services, healthcare, and insurance have obvious stakeholders here. Retail and SaaS teams often skip this step, which is usually fine until it is not.
What is the fallback if none of the responses fit? This is often forgotten. The fallback behavior should be deliberate: show nothing, show a generic "contact us," or close quietly. An AI that defaults to generating something when no approved response matches is an AI without a boundary.
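Encoded as configuration, that workflow might look like the sketch below. This is an illustrative assumption about structure, not any real system's schema. What matters is that author, reviewer, approver, and fallback are explicit fields, so a response cannot quietly go live with one of them missing:

```typescript
// One approved response, carrying its own approval trail.
interface ApprovedResponse {
  id: string;
  scenario: string;             // the customer question this answers
  text: string;                 // the exact copy the customer will see
  author: string;               // the team closest to the moment, e.g. support
  brandReviewer: string;        // whoever owns the voice
  complianceApprover?: string;  // optional only where your industry allows it
}

// The fallback is a deliberate choice, never open-ended generation.
type FallbackBehavior =
  | { kind: "show-nothing" }
  | { kind: "generic-contact"; text: string }
  | { kind: "close-quietly" };

// The response library for a single touchpoint.
interface ResponseLibrary {
  touchpoint: string;
  responses: ApprovedResponse[];
  fallback: FallbackBehavior;   // what happens when nothing matches
}
```

Note what the fallback type does not contain: a "generate something" branch. An AI that can fall through to open-ended generation when no approved response matches is the chatbot from the opening anecdote.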
What On-Brand Means in Practice
On-brand in an AI intervention is not just about tone, though tone matters. It is about scope.
The returns page chatbot should stay on returns. The pricing page assistant should stay on pricing. The onboarding flow helper should stay on onboarding. When the AI starts connecting dots across domains, even helpfully, it is usually operating outside what anyone reviewed.
Practically, this means:
The approved responses for each touchpoint are scoped to that touchpoint
The diagnostic question Pulse asks is written to surface the specific friction that touchpoint creates, not general customer sentiment
The escalation path (what happens when the customer needs more than the approved response can give) is an explicit design choice, not an AI decision
This is how Pulse is built. A customer on the returns page who selects "I am not sure if my item is eligible" gets a response the support team wrote for that exact scenario. The AI detected that the customer was stuck and asked one diagnostic question. The response they see was reviewed, approved, and scoped to that page.
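As a sketch of that retrieval pattern, reusing the illustrative ResponseLibrary shape from the earlier sketch (and, again, not a claim about Pulse's actual implementation): the lookup is keyed to the touchpoint and the selected scenario, and the only non-match path is the deliberate fallback.

```typescript
// Resolve a customer's selection to a pre-approved response. There is
// no generation step: the function returns copy someone reviewed, or
// the touchpoint's deliberate fallback, or nothing at all.
function resolveResponse(
  library: ResponseLibrary,
  touchpoint: string,
  selectedScenario: string
): string | null {
  // Scope check first: a returns-page library never answers for pricing.
  if (library.touchpoint !== touchpoint) return null;

  const match = library.responses.find((r) => r.scenario === selectedScenario);
  if (match) return match.text;

  // No approved response fits: fall back deliberately.
  const fallback = library.fallback;
  switch (fallback.kind) {
    case "generic-contact":
      return fallback.text;
    case "show-nothing":
    case "close-quietly":
      return null;
  }
}
```

The design choice worth noticing is the return type: the function can hand back approved copy or nothing, but it has no way to hand back something nobody reviewed.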
Ongoing Maintenance
Brand-safe AI is not set-and-forget. Content goes stale. Policies change. Return windows get updated. Pricing tiers get renamed.
The question to answer before you deploy is: who owns the maintenance? Not in theory. In practice. Who reviews the approved responses when something changes in the product? Who gets notified when a policy updates? Who has the access and the context to update the content in time?
If the answer is "the agency we used to build it" or "we will figure that out later," the deployment will drift. The AI will start delivering responses that were accurate six months ago and are now slightly wrong, in ways nobody notices until a customer points it out.
Good deployment architecture treats the response library like a content asset, not a one-time technical artifact. It has owners, review cycles, and update processes, the same as any other customer-facing content.
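One lightweight way to make that ownership enforceable is to attach review metadata to every response and run a scheduled check against it. A sketch, with illustrative field names; the review cadence is an assumption to tune, not a recommendation:

```typescript
// Review metadata attached to each approved response.
interface ReviewMeta {
  responseId: string;
  owner: string;            // a named person to notify, not a team alias
  lastReviewedAt: Date;
  reviewEveryDays: number;  // e.g. 90; shorter for policy-sensitive pages
}

// Flag responses whose review window has lapsed, so "who owns the
// maintenance" is answered by the system rather than by memory.
function findStaleResponses(meta: ReviewMeta[], now: Date): ReviewMeta[] {
  const msPerDay = 24 * 60 * 60 * 1000;
  return meta.filter(
    (m) => (now.getTime() - m.lastReviewedAt.getTime()) / msPerDay > m.reviewEveryDays
  );
}
```

Anything this returns is a "was accurate six months ago" response waiting to happen. Route it to the named owner, not a shared inbox.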
Why This Matters for AI Generally
The pattern here generalizes. As we wrote in "Are AI Agents for CX BS?", the question for enterprise AI is not whether the model is capable. The question is whether the system is governable enough to ship and stay live.
Brand guardrails, when they are designed in from the start, are not what slows AI down in enterprise environments. They are what gets it through procurement, legal, and comms review. They are what keeps it running after the first edge case hits.
The teams that deploy AI successfully in customer journeys are not the ones that fought hardest to remove the constraints. They are the ones that made the constraints specific enough that everyone could sign off.
That is the real architecture. Not the model. The governance around it.