Why Chatbots Miss the Moments Customers Never Ask About
Chatbots and live chat are useful tools. For customers who know they have a question and choose to ask it, a well-designed chat experience can resolve things quickly, surface the right information, and keep the customer from having to call. That is a real service.
The limitation is baked into the initiation model: something has to prompt the customer to engage. They have to notice the widget, decide to use it, and type something in. That is three separate decisions a customer has to make before the conversation starts.
A meaningful share of stuck customers never make those three decisions. They just leave.
The Customers Who Do Not Knock
A first-time homebuyer is on a mortgage application page. They have been reading through the income verification requirements. They click back to the application, scroll to the income section, and stop. They go back to the requirements page. They re-read the same two paragraphs. Then they close the tab.
There is a chat widget in the bottom corner of the page. They never clicked it.
Why? Maybe they did not want to talk to someone. Maybe they assumed the chat would give them the same FAQ content they already read twice. Maybe the widget did not seem relevant to the specific confusion they had. Maybe they just did not notice it. Any of those explanations is plausible, and the outcome is the same: the customer had a resolvable question and left without resolution.
Chat waited for them to knock. They did not knock.
This is not a criticism of chat. It is a description of a population that chat structurally does not reach. As we discuss in "Are AI Agents for CX BS?", the question is not whether AI-assisted tools are useful — many of them are. The question is which moments they are actually built for.
What Proactive Detection Looks Like
The repeated-requirements-page pattern is a behavioral signal. A customer who navigates to a supporting page, returns to the application, and navigates back has told you something without saying a word. The exit is coming. The question is whether anything happens first.
Pulse detects that pattern and acts on it. A short, targeted prompt — "Is something unclear about what documents you need?" — appears before the customer gives up. The response options are pre-approved and specific to that page. If one of them matches what the customer was confused about, they get the answer. The application continues.
This is the distinction between reactive waiting and proactive detection. Chat waits for the customer to raise their hand. Pulse notices when the customer is wandering around looking lost and goes to them.
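To make the "wandering around looking lost" signal concrete, here is a minimal sketch of how a repeated-navigation pattern could be detected from a stream of page-view events. Everything here is hypothetical: the class names, the page labels, and the thresholds are illustrative, not Pulse's actual API or tuning.

```python
from dataclasses import dataclass

# Hypothetical event model -- not Pulse's schema.
@dataclass
class PageView:
    page: str          # e.g. "application" or "requirements"
    timestamp: float   # seconds since epoch

def is_hesitating(views, bounce_threshold=2, window_seconds=180):
    """Flag a session where the customer bounces repeatedly between the
    application and a supporting page within a short window."""
    if not views:
        return False
    cutoff = views[-1].timestamp - window_seconds
    recent = [v for v in views if v.timestamp >= cutoff]
    bounces = 0
    for prev, curr in zip(recent, recent[1:]):
        # A "bounce" is a trip between the application and the
        # supporting requirements page, in either direction.
        if {prev.page, curr.page} == {"application", "requirements"}:
            bounces += 1
    return bounces >= bounce_threshold

# The homebuyer's session from above: requirements -> application -> requirements.
session = [
    PageView("requirements", 0.0),
    PageView("application", 30.0),
    PageView("requirements", 55.0),
]
```

A session like this one trips the threshold; a customer who visits the requirements page once and moves on does not. In practice the thresholds would need tuning per page, since what counts as hesitation on a mortgage application differs from a checkout flow.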
What Proactive Detection Is Not
This is worth saying plainly. Pulse does not intercept every visitor. It does not throw a modal at someone who just arrived. The trigger is a behavioral pattern that signals genuine hesitation — repeated navigation, unusually long dwell time, a sequence of actions that does not match a customer who knows what they are doing.
The diagnostic question is short and specific. The approved responses are curated by the team. The goal is to help a customer who is actually stuck, not to add friction in the name of reducing it.
And it is not a replacement for chat. For customers who want to ask a question in their own words and have a conversation, a chat option is still the right tool. The population that Pulse reaches is specifically the one that chat does not: the customer who is confused, who is showing it behaviorally, and who is not going to type anything into a chat box.
The Measurement Case
The comparison worth running is the engagement rate on a proactive Pulse prompt versus the engagement rate on the chat widget for the same journey segment. If the same page has both, how many customers use each? For a page with known friction, the proactive prompt engagement rate tends to be considerably higher, because it meets customers at the moment of hesitation rather than waiting for them to self-identify.
The harder number to track, but the one that matters most, is journey continuation rate: for customers who received a proactive response versus those who encountered the same page without one, how many completed the next step? That number tells you what the intervention was worth.
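The continuation-rate comparison above is a simple cohort split. As a sketch, assuming a session record that carries two flags (whether the customer received a proactive prompt, and whether they completed the next step) — field names are illustrative, not Pulse's reporting schema:

```python
def continuation_rates(sessions):
    """Return (prompted_rate, control_rate): the share of each cohort
    that completed the next journey step after the friction page."""
    prompted = [s for s in sessions if s["prompted"]]
    control = [s for s in sessions if not s["prompted"]]

    def rate(group):
        if not group:
            return 0.0
        return sum(1 for s in group if s["continued"]) / len(group)

    return rate(prompted), rate(control)

# Illustrative data, not real results.
sessions = [
    {"prompted": True,  "continued": True},
    {"prompted": True,  "continued": True},
    {"prompted": True,  "continued": False},
    {"prompted": False, "continued": True},
    {"prompted": False, "continued": False},
    {"prompted": False, "continued": False},
]
```

The gap between the two rates, applied to the volume of customers hitting that page, is the dollar-adjacent number that tells you what the intervention was worth.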