Session Replay Shows the Problem After the Customer Is Gone

Session replay is one of the most honest tools in the UX research kit. There is nothing quite like watching a real customer try to use something you built. You see the confusion you designed in. You see the element they ignored that you thought was obvious. You see the exact moment they gave up.

For anyone who has spent time building digital products, session replay is humbling in the best way.

But it has a built-in constraint that matters for how you use it: you are always watching the past.

A Movie You Cannot Rewrite

The value of session replay is that it shows you what actually happened, not what you assumed happened. A customer scrolls past a feature three times without clicking it. Another rage-clicks a button that was not responding. A new user fails to find the import function, wanders into the wrong section, and quietly leaves. These are real problems that might not show up in any other signal.

The limitation is that the movie has already ended. The customer you are watching made their decision before you pressed play. Whatever they needed in that moment, they did not get it, and they are not coming back to try again.

This is by design, not a flaw. Session replay is built for retrospective analysis: understand what broke, fix it, prevent the same experience for the next person. That is a legitimate and important use case.

The problem comes when teams treat it as their only window into customer friction. Because that window looks backward.

What a Researcher Sees vs. What the Customer Needed

Here is a scenario that UX researchers recognize. You pull up a replay of a new user's onboarding session. They are trying to import their data. They scroll through the interface, scroll back, click into a settings section that is not it, return to the main screen, scroll again. The import function is right there, but the label is not clicking for them. After four minutes of this, they leave.

Watching the replay, the problem is obvious. The label needs to change, or a tooltip needs to exist, or the information architecture needs to be rethought. You file the insight, write up the recommendation, and queue the fix.

That specific user is gone. And before you ship the fix, more users will have the same experience.

Pulse operates before the exit. The repeated failed navigation pattern — scrolling back, clicking the wrong section, returning — is a behavioral signal that appears while the session is still happening. When that pattern fires, Pulse can ask: "What are you looking for?" and surface a short set of options that route directly to the right feature.
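To make the idea concrete, here is a minimal sketch of that kind of in-session detection. The event names, window size, and threshold are illustrative assumptions for this post, not Pulse's actual implementation:

```python
# Hypothetical sketch: detect a "repeated failed navigation" pattern
# from a stream of in-session events. All event names and thresholds
# below are assumptions made for illustration.

FAILED_NAV_EVENTS = {"scroll_up", "wrong_section_click", "return_to_main"}

def is_stuck(events, window=10, threshold=4):
    """True when the last `window` events contain at least `threshold`
    failed-navigation signals and no successful feature interaction."""
    recent = list(events)[-window:]
    if "feature_click" in recent:  # a success resets the stuck signal
        return False
    failed = sum(1 for e in recent if e in FAILED_NAV_EVENTS)
    return failed >= threshold

session = ["scroll_up", "wrong_section_click", "return_to_main",
           "scroll_up", "scroll_up"]
if is_stuck(session):
    # This is the moment an in-session prompt could fire.
    print("Prompt: What are you looking for?")
```

The point of the sketch is the timing: the check runs on the live event stream, so the prompt can appear while the customer is still in the session rather than in next week's replay review.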

The insight that session replay gives you after the fact, Pulse can act on in the moment. As we discuss in "The Anatomy of a Stuck Moment," the window between a customer getting stuck and a customer leaving is often where the whole outcome is decided.

Where the Two Tools Actually Fit

Session replay is irreplaceable for understanding why friction exists. It shows you the experience with a level of fidelity that no other tool matches. It is the right input for design decisions, UX audits, and product prioritization.

Pulse is for acting on friction signals in real time, before those design decisions have been made and shipped. It does not replace session replay. It fills the gap between "we know this is confusing" and "we've fixed it."

The measurement case for real-time intervention is straightforward: track feature adoption rate and onboarding completion for users who received a navigation prompt versus those who encountered the same friction without one. That comparison tells you how much the stuck-moment response is worth while you are still working on the underlying fix.
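That comparison is simple enough to sketch. The field names and numbers below are made up for illustration; the structure is just adoption rate per cohort and the lift between them:

```python
# Hypothetical sketch of the cohort comparison described above:
# feature adoption for users who received a navigation prompt vs.
# users who hit the same friction without one. Data is illustrative.

def adoption_rate(users):
    """Fraction of users in a cohort who adopted the feature."""
    if not users:
        return 0.0
    return sum(1 for u in users if u["adopted_feature"]) / len(users)

prompted = [{"adopted_feature": True}, {"adopted_feature": True},
            {"adopted_feature": False}]
unprompted = [{"adopted_feature": True}, {"adopted_feature": False},
              {"adopted_feature": False}, {"adopted_feature": False}]

lift = adoption_rate(prompted) - adoption_rate(unprompted)
print(f"prompted: {adoption_rate(prompted):.2f}, "
      f"unprompted: {adoption_rate(unprompted):.2f}, lift: {lift:.2f}")
```

The same shape works for onboarding completion: swap the field, keep the two cohorts, and the lift tells you what the stuck-moment response is worth.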

Session replay gives you the diagnosis. That is valuable. The patient still needs the treatment.
