Your error tool doesn't know who your user is.
Your analytics tool doesn't know the backend broke.

NavFlow knows both. An AI tool that connects your error tools, your analytics, and your deploys, and tells you what silently changed when a metric moves and no alert fires.

Sentry sees
Errors in window: 0 · checkout endpoints clean
last 4h · 1,840 sessions
Missing
  • that the product is silently behaving differently
  • that 31% of SKUs lost a field
  • that anything changed at all
Mixpanel sees
Basket size −12% · checkout flow
last 4h · 1,840 sessions
Missing
  • did users change their minds, or
  • did the product change under them?
  • what shipped that caused it?
NavFlow sees both
Basket size −12% · no errors fired
PR #4521 changed RecommendationsResponse · priceWithDiscount null for 31% of SKUs since v4.13
Verdict

Silent contract change. Not user behavior.

Pick a scenario. See what lands in Slack.

Each example walks through one agent's reasoning chain. The card at the bottom is the message NavFlow would post.

Silent Regression Watcher

Click Run agent to step through the agent's reasoning.

    agent catalog

    Your metric moved. Now you know why.

    A basket size dips. An activation funnel softens. A release lands flat. An experiment is ready to call. NavFlow runs the diagnosis behind the dashboard line, and posts the cause to where you decide.

    01/Silent Regression Watcher
    silent regression · checkout · detected
    checkout · basket size
    Basket size dropped 12%. No errors fired.
    cause
    PR #4521 changed RecommendationsResponse
    fields
    priceWithDiscount null for 31% of SKUs
    since
    v4.13 · deployed 24m ago
    Verdict: Silent contract change. Not user behavior.

    produces the silent product change behind a moving metric

    Silent Regression Watcher

    The most expensive thing that can happen to a product manager: a deploy goes green, Sentry stays quiet, QA passed, and a metric you own starts moving anyway. Silent Regression Watcher catches the contract changes, feature drops, and config flips your engineering tools won't fire on. You see the cause attributed to a PR or an API contract diff, not just the symptom on your dashboard. It's the one agent that needs to read both your shipping signals and your product signals to do its job.
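The scenario above (priceWithDiscount going null for 31% of SKUs while error trackers stay quiet) boils down to a check across the deploy boundary. A minimal sketch of that idea, purely hypothetical and not NavFlow's implementation; the function names and the 5% threshold are illustrative assumptions:

```python
def null_rate(skus, field):
    """Fraction of SKU payloads where `field` is missing or null."""
    if not skus:
        return 0.0
    missing = sum(1 for sku in skus if sku.get(field) is None)
    return missing / len(skus)

def silent_contract_change(before, after, field, error_count, threshold=0.05):
    """Flag a silent contract change: the null rate for `field` jumped
    past `threshold` across the deploy, yet zero errors fired."""
    jump = null_rate(after, field) - null_rate(before, field)
    return error_count == 0 and jump > threshold

# Illustrative data matching the scenario: priceWithDiscount null for
# 31% of SKUs after the deploy, with no errors in the window.
before = [{"priceWithDiscount": 9.99}] * 100
after = [{"priceWithDiscount": None}] * 31 + [{"priceWithDiscount": 9.99}] * 69
print(silent_contract_change(before, after, "priceWithDiscount", error_count=0))  # True
```

The point of the sketch is the shape of the signal: neither the error count nor the metric alone says anything; only the joined view across the deploy boundary does.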

    reads: PostHog · GitHub · Vercel · Sentry
    02/Activation Diagnoser
    activation · onboarding · drift detected
    cohort · iOS Safari · new users · 1,420 sessions
    01 signup · 100%
    02 email confirmed · 92% · ↓ 1.4
    03 first project created · 78% · ↓ 2.1
    04 first action taken · 53% · ↓ 14
    cause
    /welcome p95 latency up +1.4s since v4.13
    errors
    none on onboarding routes (last 24h)
    Verdict: Performance regression on the onboarding path.

    produces an activation drop diagnosis, by cohort and root cause

    Activation Diagnoser

    Activation drops are diagnostic landmines: which cohort, which platform, which step, what changed. Activation Diagnoser correlates onboarding drops with recent releases, performance regressions on onboarding routes, and feature-flag flips. New users are the cohort that silent regressions hit first, because their flow is the most brittle and the least familiar to the team. By the time activation drops show up in your weekly review, the cause has been live for a week.

    reads: PostHog · GitHub · Vercel
    03/Release Impact
    release v4.13 · "Improve add-to-cart CTR" · didn’t land
    target metric vs other product metrics
    add-to-cart CTR · 7.2% · ↓ 0.1 · flat
    checkout conversion · 3.4% · ↑ 0.0 · moved
    errors on checkout · 0.18% · ↓ 0.04 · moved
    why flat
    flag cart-cta-v2 ramped to 38% (target 100%)
    exposure
    38% of users saw the new CTA
    Verdict: The release didn't land where you thought.

    produces a verdict on a release against the metric it promised to move

    Release Impact

    Every production release gets graded against the metric the PR was supposed to move. Release Impact joins the deploy boundary to the metric you own, rules out errors as the cause, and surfaces when a feature shipped but didn't land. The most common reason is the most expensive: you shipped X, the metric stayed flat, and nobody noticed because no alert fired. Often the cause is a flag that didn't ramp, a target audience that was filtered out, or a silent regression the release introduced.

    reads: Vercel · GitHub · PostHog · Sentry
    04/Experiment Verdict
    experiment exp-247 · pricing v2 · ship with caveats
    primary metric vs everything PostHog isn’t showing
    conversion · +6.1% on variant B · p=0.04 · primary
    revenue · $/visitor flat · variant lifts free, not paid · caveat
    errors · TypeError on checkout +18% on variant B · caveat
    deploys · no contamination · 1 unrelated deploy in window · clean
    Verdict: Variant B converts. Variant B earns less.

    produces a ship/hold/kill verdict, with the caveats the primary metric won't surface

    Experiment Verdict

    Significance on the primary metric isn't a ship decision. Experiment Verdict reconciles your A/B against revenue per converter (not just count), variant-specific error rates, and unrelated deploys that contaminated the window. The reconciliation is the value: it's the difference between shipping a variant that lifts signups but suppresses revenue, and shipping one that genuinely moves the business. One experiment at a time, posted to Slack, with the caveats PostHog can't surface.

    reads: PostHog · Stripe · Sentry · Vercel

    No charts. No dashboards. Just verdicts.

    Connect the tools you already use, pick what to watch, get verdicts where you already work.

    Sources
    Vercel
    GitHub
    Linear
    Sentry
    PostHog
    What NavFlow does
    Read. Diagnose. Decide.
    • follows the same user across tools
    • reads what changed and when
    • ties metric drops to shipping events
    • writes a verdict you can act on
    Delivers to
    Slack · primary
    Email · digests
    Linear · tickets
    Notion · log
    01/Connect
    One click for the tools you already use. No config files, no setup wizard.
    02/Pick what to watch
    Pre-built around the metrics you already report on. Mute what's noisy.
    03/Get verdicts
    Plain-English answers in Slack. Loop in eng when there's something for them to do.

    The product manager whose dashboard moved, and whose alerts didn't fire.

    The product manager who feels a metric move before anyone else, and has to figure out, themselves, what changed.

    If these describe you
    • You open Mixpanel and Sentry in the same morning, looking for the same answer.
    • You answer "did the product change, or did the users?" before anyone else does.
    • You match funnel drops to whatever shipped that morning. By hand.
    • You translate what shipped for leadership and what dropped for engineering, sometimes in the same Slack thread.
    • You don't have a data team to hand this off to, and engineering is heads-down on the next sprint.
    • You care about what actually moved the metric, not just one tool's view of it.

    Free for one. Per seat once you're a team.

    Pricing at launch. Early-access teams keep founder pricing for the first 12 months.

    Free

    For one PM testing on a single project.

    $0 · forever
    • Just you
    • 1 agent enabled
    • 100 verdicts / month
    • Verdicts to Slack
    • Community support (Slack)
    Get started

    Team

    For teams that own product, ops, and customer in one room.

    $49 / seat / month
    • Add teammates
    • Every pre-built agent
    • Unlimited verdicts
    • Verdicts to Slack, email, CRM
    • Reads from every source
    • Priority support (24h response)
    Get started

    Custom

    For larger orgs that need self-hosted, SSO, or their own agents.

    Let's talk
    • Everything in Team
    • Custom agents
    • Self-hosted option
    • Security review & SSO
    • Dedicated CSM + Slack channel
    Talk to us

    The things people ask before trying it.

    PMs. NavFlow is for the product manager who has to answer for a metric without a data team to defer to. Engineers benefit downstream (they stop being the first call when something silently changes), but the page is built for the PM.