
Handbook / 22 · How We Communicate

Core POV

These aren't marketing talking points. These are mechanisms we believe govern how B2B GTM actually works—regardless of what LinkedIn gurus say.

Ivan Kovpak — CEO & Co-founder of Unstuck Engine
14 min read · Last reviewed May 7, 2026

POV 1: On Targeting

Most companies believe:
"Our ICP is mid-market SaaS, 50-500 employees, $5M-$50M ARR, using Salesforce."

What's wrong:
This is a description, not a hypothesis. It can't be tested, refined, or killed. It's a fixed document that lives in a deck, gets shared once, and never gets revisited.

The problem: markets shift. What worked in Q1 might not work in Q3. Your best customers from 2023 might be your worst in 2024. But if your ICP is a static document, you'll keep targeting based on outdated beliefs.

What's true:
Targeting is a portfolio of testable hypotheses, run in parallel, measured against revenue metrics (NRR, win rate, velocity, CAC), with kill criteria.

An ICP experiment looks like this:

  • Hypothesis: "Enterprise fintech companies with recent Stripe integration will convert at >40% and close in <60 days"
  • Test criteria: 50 accounts minimum
  • Success metrics: >35% SQL→Close, <75 day cycle, >$50K ACV
  • Kill criteria: <20% SQL→Close after 30 accounts, or >90 day cycle
  • Timeline: 60 days to validate or kill

You run 3-8 of these simultaneously. Winners scale. Losers die. Learning compounds.
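As a sketch, one experiment with its success and kill criteria wired in, using the example numbers above (class and field names are illustrative, not a real Unstuck API):

```python
from dataclasses import dataclass

@dataclass
class IcpExperiment:
    """One targeting hypothesis with explicit success and kill criteria."""
    name: str
    accounts_tested: int
    sql_to_close: float      # SQL→Close rate, e.g. 0.35 = 35%
    avg_cycle_days: float
    avg_acv: float

    def verdict(self) -> str:
        # Kill: <20% SQL→Close after 30 accounts, or >90-day cycle
        if (self.accounts_tested >= 30 and self.sql_to_close < 0.20) or self.avg_cycle_days > 90:
            return "kill"
        # Scale: full 50-account test with >35% close, <75-day cycle, >$50K ACV
        if (self.accounts_tested >= 50 and self.sql_to_close > 0.35
                and self.avg_cycle_days < 75 and self.avg_acv > 50_000):
            return "scale"
        return "continue"
```

The point is structural: a hypothesis that can't return "kill" isn't a hypothesis.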

Implication:
You need infrastructure for ICP experimentation, not static Ideal Customer Profile documents: infrastructure that lets you define a hypothesis, track cohort performance, compare against a control, auto-promote winners, and auto-kill losers.

Without this infrastructure, targeting stays religious instead of scientific.

POV 2: On Lead Scoring

Most companies believe:
"We score leads 0-100 combining fit + behavior. >70 = MQL."

What's wrong:
Single scores conflate dimensions. High-fit/no-intent gets the same score as low-fit/high-intent. Both waste resources, but differently.

Example:

  • Account A: Perfect ICP fit, zero engagement = Score 65
  • Account B: Wrong ICP fit, downloaded 3 whitepapers = Score 65

You treat them identically (nurture sequence). But they need opposite treatment:

  • Account A needs activation (they're qualified, just not engaged yet)
  • Account B needs disqualification (they're engaged but will never close)

The single score hides the problem.

What's true:
You need three independent axes:

Axis 1: ICP Fit (0-100)

  • Firmographics (size, industry, growth)
  • Technographics (current stack, integration needs)
  • Strategic fit (business model, market position)
  • Updated quarterly or when company changes

Axis 2: Persona Match (categorical)

  • Decision-maker (economic buyer, final authority)
  • Champion (internal advocate, no budget)
  • Influencer (provides input, no decision)
  • End-user (uses tool, doesn't purchase)
  • Unknown (not enough data)

Axis 3: Intent Stage (A-E bands)

  • A: Hot (multiple high-intent signals, recent, concentrated)
  • B: Warm (some signals, exploratory behavior)
  • C: Tepid (minimal signals, passive consumption)
  • D: Cold (no signals, zero engagement)
  • E: Dead (negative signals, unsubscribed, competitor chosen)

Three dimensions. Never collapsed into one number.

Why? Because orchestration requires knowing which dimension is the problem:

  • High fit + Decision-maker + Hot intent → SDR queue (call today)
  • High fit + Decision-maker + Cold intent → Brand campaigns (build awareness)
  • Low fit + Champion + Hot intent → Archive (they're engaged but will never close)

Each combination routes differently. Single score can't route correctly.
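A minimal routing sketch over the three dimensions (the 70-point fit cutoff and the route names are assumptions for illustration):

```python
def route(fit: int, persona: str, intent: str) -> str:
    """Route an account from its three scoring dimensions.

    fit: ICP fit 0-100; persona: categorical match; intent: band "A".."E".
    """
    if fit < 70:                  # assumed cutoff for "high fit"
        return "archive"          # low fit never closes, whatever the intent
    if persona == "decision-maker" and intent == "A":
        return "sdr-queue"        # call today
    if persona == "decision-maker" and intent in ("D", "E"):
        return "brand-campaign"   # build awareness first
    return "nurture"              # everything else: stay warm
```

Collapse the three inputs into one 0-100 number and this function becomes unwritable: two accounts with the same score need different branches.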

Implication:
Orchestration requires multi-dimensional scoring. Your CRM's 0-100 lead score can't do this. You need a layer that maintains three independent dimensions and routes based on combinations.

POV 3: On GTM Channel Mix

Most companies believe:
"We'll spend 40% on paid ads, 30% on content, 20% on events, 10% on outbound. Because that's what worked last year."

What's wrong:
Treats channels as independent budget buckets. Ignores that channel effectiveness depends on buyer readiness (intent + fit).

A high-fit/cold-intent account seeing your LinkedIn ad won't convert. They're not ready. You burned budget forcing the wrong motion.

A high-fit/hot-intent account stuck in a nurture drip for 6 weeks will buy from whoever reaches them first. You lost velocity.

Budget allocation ignores signal state.

What's true:
Channel effectiveness is a function of buyer readiness. The mix follows a progressive investment model:

Stage 1: Low Intent (D-E) + High Fit

  • Objective: Build awareness, establish credibility
  • Channels: Brand campaigns, thought leadership, long-form content, community, events
  • Measurement: Brand lift, share-of-voice, consideration set inclusion
  • Investment: 15-25% of GTM budget
  • Mistake to avoid: Don't do outbound here (wastes SDR time on unready buyers)

Stage 2: Medium Intent (C) + High Fit

  • Objective: Nurture, educate, stay top-of-mind
  • Channels: Email sequences, retargeting ads, webinars, case studies, demo videos
  • Measurement: Engagement depth, time-to-hot, progression rate
  • Investment: 30-40% of GTM budget
  • Mistake to avoid: Don't do hard sales here (kills trust before they're ready)

Stage 3: High Intent (A-B) + High Fit

  • Objective: Convert, accelerate, close
  • Channels: SDR outbound, sales calls, personalized demos, ROI calculators
  • Measurement: SQL→Close rate, cycle time, ACV, CAC
  • Investment: 35-45% of GTM budget
  • Mistake to avoid: Don't put these in nurture (competitors will reach them first)

Stage 4: Any Intent + Low Fit

  • Objective: Archive
  • Channels: None
  • Investment: 0%
  • Mistake to avoid: Don't nurture forever (opportunity cost kills you)

The mix adjusts based on distribution of your accounts across intent stages, not based on what you did last year.
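The four stages above can be sketched as a classifier plus a budget table (the fit cutoff is assumed; the budget weights are midpoints of the ranges given, purely illustrative):

```python
# Midpoints of the budget ranges above; they need not sum to 1.0
# because the source ranges overlap.
STAGE_BUDGET = {
    "awareness": 0.20,   # Stage 1: intent D-E, high fit → 15-25%
    "nurture":   0.35,   # Stage 2: intent C,   high fit → 30-40%
    "convert":   0.40,   # Stage 3: intent A-B, high fit → 35-45%
    "archive":   0.00,   # Stage 4: low fit → 0%
}

def stage_of(fit: int, intent_band: str) -> str:
    """Assign an account to a channel-investment stage (70-point cutoff assumed)."""
    if fit < 70:
        return "archive"
    if intent_band in ("A", "B"):
        return "convert"
    if intent_band == "C":
        return "nurture"
    return "awareness"   # band D or E
```

The actual mix then follows from where your accounts sit, not from last year's budget sheet.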

Implication:
You need signal-based channel orchestration, not budget theater. Infrastructure that routes accounts to appropriate channel mix based on three-dimensional scoring updated continuously.

Without this, you're running brand campaigns for accounts ready to buy (velocity loss) and running outbound to accounts not ready (conversion loss). Both waste money, just differently.

POV 4: On Sales Capacity

Most companies believe:
"Pipeline's down. We need to hire 5 more SDRs."

What's wrong:
Assumes targeting is correct and the only gap is dial volume. Usually targeting is broken.

If your reps are calling low-fit accounts or accounts at wrong intent stage, hiring more reps means paying more people to waste time on wrong targets. You've scaled the problem, not solved it.

The math:

  • 1 rep calling 100 accounts/day with 15% fit rate = 15 good conversations
  • 10 reps calling 1,000 accounts/day with 15% fit rate = 150 good conversations
  • 1 rep calling 50 accounts/day with 80% fit rate = 40 good conversations

Better targeting beats more headcount.

What's true:
100× one rep with precision targeting beats hiring 10 average reps with spray-and-pray.

How to 100× a rep:

  1. Give them only high-fit accounts (eliminates 70% of wasted calls)
  2. Give them only ready-to-buy intent stages (eliminates 60% of "not now" responses)
  3. Give them persona-matched talking points (eliminates 50% of "wrong person" transfers)
  4. Give them live signal context (eliminates 80% of generic pitches)

Combined: rep goes from 40 connects/day with 2% conversion (0.8 meetings) to 20 connects/day with 15% conversion (3 meetings). Meetings per rep up 275% with half the dials.
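The before/after arithmetic above, checked:

```python
def meetings(connects: float, conversion: float) -> float:
    """Expected meetings from a day's connects at a given conversion rate."""
    return connects * conversion

before = meetings(40, 0.02)          # 0.8 meetings/day
after = meetings(20, 0.15)           # 3.0 meetings/day
lift = (after - before) / before     # 2.75 → up 275%, on half the connects
```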

The SDR doesn't work harder. The targeting works smarter.

Implication:
Amplification > Addition. Tooling that multiplies rep effectiveness beats headcount. Before hiring rep #11, ask: "Have we maximized reps 1-10 with better targeting?"

Infrastructure that delivers daily Hot-50 lists (high fit + decision-maker + hot intent) to each rep does more for pipeline than hiring 3 more SDRs with average lists.

POV 5: On Data Quality

Most companies believe:
"We enrich annually and refresh lists quarterly. That's good enough."

What's wrong:
Data decays roughly 5% per month. Compounding, that leaves about 74% of your records accurate after 6 months and about 54% after 12.
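A quick compound-decay check (the 5%/month figure is the one used here; the formula is standard geometric decay):

```python
def accuracy_after(months: int, monthly_decay: float = 0.05) -> float:
    """Fraction of records still accurate after `months` of compound decay."""
    return (1 - monthly_decay) ** months

print(round(accuracy_after(6), 2), round(accuracy_after(12), 2))  # → 0.74 0.54
```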

Buyer intent changes hourly. The CFO researching tools today might select a vendor tomorrow. Your quarterly refresh means you learn about the intent spike eight weeks later, after they've already signed with a competitor.

Stale data = stale targeting = missed opportunities.

The cost:

  • Low accuracy: calling people who left company, targeting accounts that closed, pitching solutions to companies that already bought competitor
  • Low velocity: engaging accounts 4-6 weeks after intent peaked, showing up to party after it ended

What's true:
GTM operates in real-time now. Signal latency = opportunity loss.

Real-time means:

  • Account hits pricing page → scored hot within 1 hour → in SDR queue by EOD
  • Buying committee member changes jobs → persona tag updates within 24 hours → sequences pause automatically
  • Tech stack changes detected → ICP fit recalculated within 1 day → routing adjusts
  • Competitor mention appears → alert triggered within 4 hours → sales notified
  • Intent drops (unsubscribe, competitor chosen) → removed from active campaigns within 2 hours → stops burning budget

The gap between signal and action determines who wins the deal.
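The latency targets above, as a checkable SLA table (signal names and the breach helper are illustrative, not a real integration):

```python
# Target hours from signal to action, mirroring the examples above.
SIGNAL_SLA_HOURS = {
    "pricing_page_visit": 1,    # scored hot within 1 hour
    "job_change": 24,           # persona tag updated within 24 hours
    "tech_stack_change": 24,    # ICP fit recalculated within 1 day
    "competitor_mention": 4,    # sales alerted within 4 hours
    "negative_signal": 2,       # pulled from campaigns within 2 hours
}

def is_breached(signal: str, hours_elapsed: float) -> bool:
    """True if a signal has sat unactioned past its SLA."""
    return hours_elapsed > SIGNAL_SLA_HOURS[signal]
```

Quarterly refresh cycles breach every row of this table by roughly three orders of magnitude.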

Implication:
You need always-on data refresh + continuous re-scoring. Annual enrichment is buying last year's data and calling it current. Quarterly list building is learning about intent spikes 8 weeks late.

Infrastructure that ingests signals continuously, recalculates scores hourly, updates routing automatically—that's the minimum bar now.

POV 6: On ICP Definition

Most companies believe:
"We had a workshop in Q1. Built ICP deck. Shared with team. Done."

What's wrong:
ICP treated as fixed truth, not hypothesis. No kill criteria. No success metrics. No revisit cadence. Just permanent doctrine.

The problem: if the ICP is wrong, you keep targeting wrong accounts forever. If the market shifts, you keep targeting based on old market. If your product evolves, you keep targeting for old product.

Real example:

  • 2023 ICP: "Mid-market SaaS with $10M-$50M ARR"
  • Result: 18% win rate, 94-day cycle, $35K ACV
  • 2024 reality: That segment commoditized, budget frozen, decision committees expanded
  • Company kept targeting it because "that's our ICP"
  • Burned 9 months before someone questioned it

What's true:
ICP = hypothesis tested against revenue metrics (NRR, win rate, velocity, CAC). Losing ICPs get killed fast. Winning ICPs scale immediately.

ICP as hypothesis means:

Define success upfront:

  • Target win rate: >30%
  • Target cycle: <75 days
  • Target ACV: >$40K
  • Target NRR: >110%
  • Test size: 50 accounts minimum
  • Timeline: 60 days

Set kill criteria:

  • Win rate <15% after 30 accounts → kill
  • Cycle >120 days average → kill
  • ACV <$25K average → kill
  • 3 consecutive lost deals to same objection → kill

Measure continuously:
Every deal updates cohort metrics. Dashboard shows: ICP 1 (winning), ICP 3 (marginal), ICP 5 (losing). Kill ICP 5 this week. Scale ICP 1 immediately.

The culture shift: from "strategy as religion" to "strategy as lab." From "defend the ICP" to "kill the losers fast."

Implication:
You need infrastructure that tracks ICP cohorts separately, measures each against revenue metrics, flags losers automatically, scales winners immediately.

Without this, your ICP stays frozen even when reality changed. And you waste quarters targeting based on outdated beliefs.

POV 7: On Marketing-Sales Handoff

Most companies believe:
"Lead hits MQL threshold → enters sales queue → first available rep takes it."

What's wrong:
No prioritization. High-fit accounts stuck behind low-fit. Hot-intent buried under cold-intent. Sales reps cherry-pick based on company name recognition, not signal quality.

The consequence:

  • Enterprise account with hot intent waits 4 days for response (buys from competitor)
  • SMB account with zero intent gets called immediately (wastes 30 minutes)
  • Decision-maker gets ignored while rep chases end-user (wrong persona)

First-come-first-served doesn't work when opportunities have different value and different readiness.

What's true:
Handoff should be signal-based routing:

Route 1: High Fit + Decision-Maker + Hot Intent (A)
→ SDR priority queue (call within 4 hours)
→ Sales Navigator hot list (real-time updates)
→ Salesforce high-priority view (top of dashboard)

Route 2: High Fit + Champion + Warm Intent (B)
→ Personalized nurture sequence (persona-specific)
→ Account-based ads (if opted-in)
→ Sales alert (monitor, don't call yet)

Route 3: High Fit + Any Persona + Cold Intent (D)
→ Brand campaigns (long-term awareness)
→ Quarterly check-in (low-touch)
→ Community invitation (if relevant)

Route 4: Low Fit + Any Persona + Any Intent
→ Archive immediately
→ No campaigns (stop burning budget)

Each route has different SLA, different motion, different resource allocation.

The handoff isn't "lead generated → sales takes over." It's "signal combination detected → appropriate motion triggered automatically."

Implication:
You need an orchestration layer between marketing automation and sales engagement: a layer that reads three-dimensional scores, determines the route, triggers the appropriate motion, and updates routing as signals change.

Your marketing automation can't do this (only knows behavior, not fit). Your CRM can't do this (stores data, doesn't route). You need the intelligence layer.

POV 8: On Measurement

Most companies believe:
"We track dials, emails sent, MQLs generated, pipeline created. We're data-driven."

What's wrong:
Activity metrics measure effort, not trajectory. You can have a record MQL month with zero revenue impact.

Real example:

  • Q2 report: "MQLs up 47%, dials up 62%, emails up 83%"
  • Board: "Great work!"
  • Q3 result: Revenue miss by 23%
  • Board: "What happened?"
  • Answer: "We don't know. Activity was up."

The problem: activity metrics don't predict revenue. You measured inputs (effort) instead of outputs (trajectory).

What's true:
Only revenue metrics matter: NRR, win rate, velocity, CAC. Everything else is proxy.

The hierarchy:

Tier 1: Revenue Metrics (only ones that matter)

  • Net Revenue Retention (expansion minus churn, as a percentage of starting revenue)
  • Win Rate (SQL→Close %)
  • Sales Cycle Velocity (days from SQL→Close)
  • Customer Acquisition Cost (total sales and marketing spend / new customers)

Tier 2: Leading Indicators (predict Tier 1)

  • ICP Fit distribution (% of pipeline that's high-fit)
  • Intent Stage distribution (% of pipeline that's hot)
  • Persona Match accuracy (% reaching decision-makers)
  • Signal response time (hours from intent spike → outreach)

Tier 3: Activity Metrics (measure effort, don't predict outcomes)

  • Dials made
  • Emails sent
  • MQLs generated
  • Meetings booked

Most companies optimize Tier 3, measure Tier 2 sometimes, and explain Tier 1 misses with excuses.

Winning companies optimize Tier 1, use Tier 2 to diagnose problems, and ignore Tier 3 unless Tier 1 drops.

Implication:
Tools must tie upstream actions to downstream revenue, not just track activity. Infrastructure that shows: "ICP Hypothesis 3 generated $2.4M pipeline at 34% win rate and 52-day cycle" beats infrastructure that shows "We made 12,847 dials this month."

Your sales engagement platform tracks Tier 3. Your BI dashboard tracks Tier 2 sometimes. You need the layer that connects actions to Tier 1 outcomes.

POV 9: On AI in GTM

Most companies believe:
"AI SDRs will replace humans. Just deploy the bots and scale outreach 100×."

What's wrong:
Relationships close deals. Bots generate spam at scale. Reply rates crater.

The math:

  • Human SDR: 100 targeted emails/day, 8% reply rate, 5% meeting rate = 5 meetings/day
  • AI SDR: 5,000 emails/day, 0.3% reply rate, 0.1% meeting rate = 5 meetings/day
  • Same meeting output. Different reputation cost.

The 5,000 emails burned:

  • Your domain reputation (spam filters activated)
  • Your brand reputation (buyers associate you with spam)
  • Your relationship capital (burned bridges before human conversation)

You got the same meetings, but destroyed your ability to do outbound in the future.

What's true:
AI should amplify humans (better targeting, prioritization, context), not replace them (worse relationships).

AI as exoskeleton:

  • AI does: Aggregate 20 signal sources, score 10,000 accounts three-dimensionally, identify 50 ready-to-buy daily, provide context on each (recent intent, tech stack, hiring signals, competitor research)
  • Human does: Review Hot-50 list, prioritize based on strategic value, craft personalized message using context, have actual conversation, close relationship

Rep goes from 100 random dials (2% connect, 10% of those convert = 0.2 meetings/day) to 50 precision dials (30% connect, 40% of those convert = 6 meetings/day).

Meetings per rep up 30-fold. Because targeting improved, not because automation replaced judgment.

Implication:
You need intelligence layer that makes humans 100× more effective, not robots that make spam 100× faster.

Infrastructure that delivers "these 50 accounts are high-fit + decision-maker + hot intent + here's the context" beats infrastructure that sends 5,000 templated emails.

The future: super-powered sellers, not seller-less automation.

POV 10: On Tool Stack

Most companies believe:
"This new platform does everything. Let's rip out our stack and migrate."

What's wrong:
Rip-and-replace is a two-year death march. Migration hell. Data loss. Team resistance. Vendor lock-in. And at the end, you've swapped one monolith for another.

The cost:

  • 6 months: Planning, vendor selection, contract negotiation
  • 12 months: Data migration, integration building, testing
  • 6 months: Training, adoption, fixing what broke
  • Total: 24 months to get back to where you started (maybe)

During those 24 months:

  • Market moved
  • Competitors scaled
  • You were stuck in migration purgatory

What's true:
Augmentation beats replacement. Best tools integrate with existing stack, don't replace it.

The architecture:

Keep existing tools for what they do well:

  • Salesforce: System of record, deal tracking, reporting
  • HubSpot: Marketing automation, email, forms
  • Outreach: Sales engagement, sequence execution
  • LinkedIn Sales Navigator: Social selling, warm intros

Add intelligence layer on top:

  • Aggregates signals from all sources
  • Scores accounts three-dimensionally
  • Runs ICP experiments
  • Routes to appropriate tool automatically
  • Feeds results back for learning

The tools you have work better because they're fed better inputs.

Example flow:

  1. Unstuck scores account as "High Fit + Decision-Maker + Hot Intent (A)"
  2. Auto-creates Salesforce opportunity (high priority)
  3. Auto-adds to Sales Navigator hot list
  4. Auto-adds to Outreach sequence (hot-intent cadence)
  5. SDR opens dashboard, sees account at top with context
  6. Rep engages, books meeting, updates Salesforce
  7. Outcome feeds back to Unstuck, updates ICP model

Your CRM still stores data. Your sales engagement platform still executes. Your Sales Navigator still surfaces relationships. They just work on better targets now.

Implication:
You need orchestration layer that integrates with stack, not platform that replaces stack.

Infrastructure with one-click exports to Salesforce, HubSpot, Sales Navigator, Outreach, Marketo beats infrastructure that requires you to abandon what's working.

The future: composable stack with intelligent orchestration, not monolithic platform with vendor lock-in.

How These Beliefs Connect

The 10 POVs aren't independent. They're mechanisms explaining mechanisms:

POV 1-2 (Targeting + Scoring) → Enables POV 3 (Channel Mix)
Can't do progressive investment without multi-dimensional scoring

POV 3-4 (Channel Mix + Sales Capacity) → Enables POV 7 (Handoff)
Can't route intelligently without knowing fit + intent + persona

POV 5-6 (Data Quality + ICP Definition) → Enables POV 8 (Measurement)
Can't measure ICP performance without live data and cohort tracking

POV 9-10 (AI + Tool Stack) → Enables POV 1-8 (Everything Else)
Augmentation + integration makes systematic GTM possible at scale

They form a system. Change one belief, and the others shift.

This is why most companies can't adopt one POV in isolation. You can't do "real-time scoring" if your data refreshes quarterly. You can't do "signal-based routing" if you have single-dimensional scores. You can't do "ICP experiments" if you're replacing your entire stack.

The beliefs are a package. Adopt the system, not the pieces.

What This Means

These aren't aspirational. This is how Unstuck operates.

We built the product around these beliefs. Strategic Narrative explains why the world needs this. Core POV explains how the mechanism works. The product is the implementation of the mechanism.

When you use Unstuck, you're not adopting software. You're adopting a worldview about how GTM should work.

The worldview:

  • Systematic beats theatrical
  • Measurement beats intuition
  • Compound learning beats annual planning
  • Precision beats volume
  • Science beats religion

That's what we're selling. The tool is just the implementation.