The contrarian beliefs that drive our content, attract our audience, and differentiate our brand.
Purpose
These are the hills we die on. Each belief:
- Challenges conventional GTM wisdom
- Generates dozens of content angles
- Attracts ideal customers, repels bad fits
- Shapes how we write, not just what we say
Not generic values ("we believe in customers"). Specific contrarian POVs that create content.
The 7 Core Beliefs
1. Overeducate, Not Oversell
What it means:
Lead with education and mechanism exposure. Teach how GTM works before pitching product. Give away insights that competitors keep proprietary.
Why contrarian:
Conventional wisdom: Gate valuable content, require email for downloads, hide methodology.
Our belief: The best marketing is education that makes buyers smarter, not merely convinced.
Who it attracts:
- Operators who want to learn, not be sold to
- Technical buyers who value depth over pitch
- GTM leaders tired of surface-level thought leadership
Who it repels:
- Buyers looking for silver bullets and quick fixes
- People who want to be told what to buy, not educated
- Competitors who think methodology is a moat (it's not)
Content angles this generates:
- Mechanism exposure posts (how qualification actually works)
- Diagnostic labeling (naming broken patterns: "Single-Score Conflation")
- Data drops (sharing our internal experiments/results)
- Deep-dive guides (comprehensive how-to resources)
- Open-source frameworks (progressive GTM investment model)
Format mapping:
- Primary formats: Mechanism Exposure, Diagnostic Labeling, Data Drops
- Supporting formats: Experiments, Build in Public
- Avoid in: Product Announcements (too educational, not promotional enough)
Good examples:
✅ Mechanism Exposure:
"Single-score qualification conflates three dimensions. Here's why that breaks routing: High intent + low fit = wasted ABM budget. High fit + wrong persona = right company, wrong person. Let me show you the math..."
→ Teaches mechanism BEFORE mentioning product
✅ Diagnostic Labeling:
"We call this 'Static List Decay.' Most teams export from Apollo, run campaigns for 30 days, then export fresh list. But 40% of contacts change status monthly. Here's how to measure your decay rate..."
→ Names the problem, gives methodology, doesn't pitch
✅ Data Drop:
"We ran 8 ICP hypotheses simultaneously for 90 days. Here's what we learned: [full data, methodology, spreadsheet download]. 5 failed, 3 succeeded. Killed the losers in week 6. Here's the framework..."
→ Complete transparency, actionable framework
Bad examples:
❌ Too salesy:
"Tired of bad targeting? Unstuck Engine solves this with our revolutionary multi-dimensional scoring!"
→ Pitch, not education. Generic claims, no teaching.
❌ Gated expertise:
"Want to learn our ICP framework? Download our whitepaper! [email required]"
→ Withholding education behind form. Counter to belief.
❌ Surface-level:
"Targeting is important for GTM success. Here are 5 tips..."
→ Obvious insights, no depth, no mechanism exposure
When to use:
- 85% of content (this is our core approach)
- ToF content (educate before they know solutions exist)
- MoF content (educate on evaluation criteria)
- Competitive differentiation (we teach, they pitch)
When NOT to use:
- BoF conversion content (some product focus needed)
- Product announcements (okay to be promotional)
- Customer stories (focus on results, not education)
2. Systematic Beats Theatrical
What it means:
Measurement > intuition. Processes > heroics. Boring compound gains > exciting one-off wins. Show your work, share methodology, kill what doesn't work fast.
Why contrarian:
Conventional wisdom: GTM is art, not science. Trust your gut. Heroic sales reps close impossible deals.
Our belief: Systematic approaches compound. Theatrical approaches don't scale. Science > art.
Who it attracts:
- Operators who value repeatability
- Data-driven GTM leaders
- Teams tired of "guru" advice
- RevOps professionals who want frameworks
Who it repels:
- Sales leaders who believe in "just hustle harder"
- Marketers who rely on creativity over measurement
- "Thought leaders" who sell inspiration not systems
Content angles this generates:
- Experiments with clear methodology
- Framework documentation (SLM, progressive investment)
- Before/after data (show the boring work that drove results)
- Failed experiments (killing what doesn't work)
- Process documentation (how we actually do things)
Format mapping:
- Primary formats: Experiments, Data Drops, Build in Public
- Supporting formats: Framework Docs, Founder Lessons
- Avoid in: Provocations (those can be theatrical by design)
Good examples:
✅ Experiment:
"Hypothesis: High-intent + low-fit prospects waste ABM budget. Test: Ran 2 cohorts for 60 days. Cohort A (intent-only): 500 prospects, 8% conversion. Cohort B (intent + fit): 200 prospects, 24% conversion. Result: Killed intent-only approach. Here's the spreadsheet..."
→ Clear hypothesis, controlled test, kill decision, full data
✅ Build in Public:
"We killed 5 ICP hypotheses this quarter. Here's how: Run 8 simultaneously, measure win rate + cycle time weekly, kill anything below 15% win rate by week 6. ICPs we killed: [list]. Why they failed: [data]. Framework for your own testing: [template]"
→ Shows systematic process, shares failures, provides framework
✅ Framework:
"Our Systematic Leverage Model (SLM): Every Bounty task (exploration) must either become Standard (template) or get killed in 30-60 days. Here's the progression: Bounty → Measure → Kill or Standardize. Our kill rate: 40%. Template..."
→ Documented system, clear progression, honest about kills
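To show rather than tell, here's a minimal sketch of the kill rule from the Build in Public example above (names and thresholds are illustrative, not our production logic):
```python
# Illustrative kill rule for ICP experiments: measure weekly, kill by a deadline.
from dataclasses import dataclass

@dataclass
class IcpExperiment:
    name: str
    week: int         # weeks since launch
    win_rate: float   # closed-won / total opportunities

def kill_or_keep(exp: IcpExperiment, min_win_rate: float = 0.15, decision_week: int = 6) -> str:
    """Apply 'kill anything below 15% win rate by week 6.'"""
    if exp.week >= decision_week and exp.win_rate < min_win_rate:
        return "kill"
    return "keep"

for exp in [IcpExperiment("fintech-plg", 6, 0.09), IcpExperiment("enterprise-slg", 6, 0.24)]:
    print(exp.name, "->", kill_or_keep(exp))  # fintech-plg -> kill, enterprise-slg -> keep
```
The point isn't the code; it's that the kill decision is a rule anyone can run, not a judgment call made in a pipeline review.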
Bad examples:
❌ Theatrical:
"We 10× our pipeline in 30 days with ONE WEIRD TRICK! Here's what the gurus won't tell you..."
→ Hype, no methodology, implies magic not systems
❌ Unmeasured:
"We just KNEW this ICP would work. Sometimes you gotta trust your gut."
→ Intuition over measurement. Not replicable.
❌ Hero story:
"Our top AE closed a $500K deal with pure hustle and determination!"
→ Celebrates unrepeatable heroics over systematic approach
When to use:
- Experiments and data drops (always)
- Build in Public posts (show the boring work)
- Framework documentation (systematic processes)
- Founder lessons (what we measured and learned)
When NOT to use:
- Provocations (those are designed to be punchy, not measured)
- Some customer stories (if the customer succeeded through process, lead with it; if through luck, the story is still worth sharing, just without forcing the systematic frame)
3. Original Beats Copied
What it means:
Every successful GTM is original. Templates provide a starting point, but customization to your unique market is what scales. No billion-dollar company got there by copying someone else's playbook.
Why contrarian:
Conventional wisdom: Best practices exist. Copy what works. Follow proven playbooks.
Our belief: Copying is a ceiling. Originals scale, copies plateau. Templates are training wheels, not the destination.
Who it attracts:
- Founders building category-creating companies
- GTM leaders tired of "playbook" advice
- Operators who know their market is unique
- Teams ready to experiment, not copy
Who it repels:
- Teams looking for "just tell me what to do"
- Buyers who want turnkey solutions
- Risk-averse operators who want guarantees
Content angles this generates:
- Anti-template content (why Apollo ICP templates fail)
- Category creation narratives (how originals win)
- Customization frameworks (how to adapt, not adopt)
- Founder lessons (building original vs copying)
- ICP experimentation (portfolio approach vs single template)
Format mapping:
- Primary formats: Provocations, Founder Lessons, Mechanism Exposure
- Supporting formats: Category Reframing, Experiments
- Use sparingly in: How-to guides (can seem contradictory)
Good examples:
✅ Provocation:
"Every 'ICP template' in your database is someone else's hypothesis. ZoomInfo's SaaS ICP doesn't account for your pricing model, your product complexity, or your competitive positioning. Templates are training wheels. Build your own."
→ Challenges conventional tool usage, advocates original thinking
✅ Founder Lesson:
"We started with Clay's enrichment template. Spent 40 hours adapting it. Realized we were building someone else's system with our constraints. Threw it out, built original approach optimized for our data sources. 10× faster, 80% cheaper."
→ Story of moving from copy to original
✅ Mechanism Exposure:
"Why templates fail: They encode someone else's market assumptions. Example: Standard SaaS ICP assumes PLG motion. But if you're enterprise SLG, that template optimizes for wrong buyers. Here's how to build yours from first principles..."
→ Explains WHY originals win, gives methodology
Bad examples:
❌ Contradiction:
"Follow our proven 5-step playbook for GTM success! It works for everyone!"
→ Contradicts belief. Selling template after saying templates don't work.
❌ Elitism:
"Only true innovators succeed. If you're copying, you'll never make it."
→ Discourages vs educates. Elitist tone.
❌ No pathway:
"Templates are bad. Figure it out yourself."
→ Criticizes without offering framework for building original
When to use:
- When discussing ICP/persona customization
- Category creation content
- Positioning against template-based tools
- Founder lessons about experimentation
When NOT to use:
- When providing frameworks (frameworks are reusable, that's okay)
- When sharing best practices (some things ARE universal)
- Customer stories (if they succeeded with template, share it)
4. Multi-Dimensional Beats Single-Score
What it means:
Company Fit + Persona Match + Engagement Level are three independent dimensions. Conflating them into a single score (MQL/PQL/SQL) prevents correct routing. Measure them separately.
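To make "measure separately" concrete, here's a minimal sketch of three-axis qualification (field names, thresholds, and routing rules are illustrative assumptions, not the product's actual logic):
```python
# Three independent dimensions instead of one collapsed score (illustrative).
from dataclasses import dataclass

@dataclass
class Prospect:
    company_fit: float    # 0-1: how well the account matches ICP
    persona_match: bool   # is this the right buyer at the account?
    engagement: float     # 0-1: observed intent level

def route(p: Prospect) -> str:
    """Routing only works because the dimensions stay separate."""
    if p.company_fit >= 0.7 and p.persona_match and p.engagement >= 0.7:
        return "ABM + direct outreach"
    if p.company_fit >= 0.7 and not p.persona_match:
        return "right company, wrong person: find the right persona"
    if p.company_fit < 0.4 and p.engagement >= 0.7:
        return "nurture; don't spend ABM budget"
    return "monitor"

# A single 85/100 score can't distinguish these two prospects:
print(route(Prospect(0.9, True, 0.8)))    # ABM + direct outreach
print(route(Prospect(0.3, True, 0.95)))   # nurture; don't spend ABM budget
```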
Why contrarian:
Conventional wisdom: Simple lead scores work. One number tells you priority. MQL/PQL/SQL is enough.
Our belief: Single scores hide critical information. A high score could mean high fit + no intent OR high intent + wrong persona. You can't route when the dimensions are collapsed.
Why THIS belief:
This is our CORE PRODUCT DIFFERENTIATOR. Most important belief for content.
Who it attracts:
- Technical GTM leaders who understand dimensionality
- RevOps teams struggling with lead scoring
- Data-driven operators
- Teams with complex sales motions
Who it repels:
- Teams wanting "simple" solutions
- Buyers allergic to complexity
- Organizations with very simple, single-play GTM
Content angles this generates:
- Diagnostic labeling ("Single-Score Conflation")
- Mechanism exposure (why MQL scoring breaks)
- Competitive positioning (vs intent-only, fit-only tools)
- Data drops (false positive rates from conflation)
- Framework docs (three-axis qualification)
Format mapping:
- Primary formats: Mechanism Exposure, Diagnostic Labeling, Competitive Content
- Supporting formats: Experiments, Data Drops
- Use heavily in: All ToF/MoF content (core differentiation)
Good examples:
✅ Diagnostic Labeling:
"We call this Single-Score Conflation. Example: Prospect scores 85/100. Is that high fit + medium engagement? High engagement + low fit? Wrong persona + high fit? Can't tell. And you can't route correctly when dimensions are collapsed. High intent + low fit goes to nurture, not ABM. High fit + wrong persona needs different messaging."
→ Names problem, explains implication, shows routing impact
✅ Mechanism Exposure:
"Why HubSpot lead scoring fails: Takes 20 signals (company size, engagement, firmographics) and adds them up. 100 points total. But 60 points from wrong signals (high engagement, wrong industry) = wasted effort. Here's the math: [shows actual scoring logic and routing failure]"
→ Exposes mechanism, shows actual failure, provides alternative
✅ Data Drop:
"We analyzed 10,000 prospects scored by single-score system. 40% of 'high-priority' (80+ score) had high intent + low ICP fit. ABM campaigns on these prospects: 6% conversion. When we separated dimensions and filtered for intent + fit: 28% conversion. Conflation cost: $380K in wasted ABM."
→ Real data, quantified waste, proves thesis
Bad examples:
❌ Too complex:
"Our n-dimensional hypercube approach to prospect qualification leverages eigenvalue decomposition to..."
→ True but inaccessible. Lost the audience.
❌ Vague:
"Lead scoring doesn't work. You need something better."
→ No mechanism exposure, no explanation WHY
❌ Just product pitch:
"Unstuck Engine has multi-dimensional scoring! It's better!"
→ Claim without education. Misses mechanism.
When to use:
- 40% of content (our core differentiation)
- Every competitive comparison
- Most ToF/MoF educational content
- Technical deep dives
- Positioning vs any single-dimension tool
When NOT to use:
- When audience isn't technical enough (will glaze over)
- Customer stories (unless they specifically struggled with this)
- Very early awareness content (too complex for cold audience)
5. Real-Time Beats Static
What it means:
Prospect engagement changes hourly, not daily. Batch processing (overnight list refreshes) misses opportunities. Webhook-based continuous processing captures signal when it matters.
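A minimal sketch of the contrast, assuming a generic engagement event (the endpoint, payload fields, and signal names are hypothetical):
```python
# Webhook-based scoring: act when the signal arrives, not in tonight's batch.
from flask import Flask, request

app = Flask(__name__)
HOT_SIGNALS = {"pricing_page_visit", "demo_request"}

def notify_sdr(prospect_id: str) -> None:
    print(f"SDR notified about {prospect_id}")  # stand-in for a Slack/CRM task

@app.post("/webhooks/engagement")
def on_engagement_event():
    event = request.get_json()
    if event["signal"] in HOT_SIGNALS:
        notify_sdr(event["prospect_id"])  # minutes after the signal, not tomorrow
    return "", 204

# The batch equivalent runs the same check in a nightly cron job, so a Monday
# 10am pricing-page visit isn't acted on until Tuesday morning.
```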
Why contrarian:
Conventional wisdom: Daily/weekly list updates are fine. Overnight batch processing works.
Our belief: Markets move in real-time. Static lists decay instantly. Speed matters.
Who it attracts:
- Technical operators who understand real-time systems
- GTM leaders frustrated with stale data
- Teams running high-velocity sales motions
- RevOps professionals
Who it repels:
- Teams with long sales cycles (less urgency)
- Buyers who don't value speed
- Organizations happy with weekly refreshes
Content angles this generates:
- List decay analysis (how fast contacts go stale)
- Speed-to-lead content (why minutes matter)
- Technical architecture content (webhooks vs batch)
- Competitive positioning (vs static databases)
- Opportunity cost analysis (missed deals from delays)
Format mapping:
- Primary formats: Mechanism Exposure, Data Drops, Competitive Content
- Supporting formats: Technical Deep Dives, Experiments
Good examples:
✅ Data Drop:
"We tracked 5,000 Apollo contacts for 30 days. Day 1: 100% accurate. Day 30: 60% had changed (job change, company change, role change). List decay rate: 1.3% daily. If you export monthly and run sequences all month, last week's prospects are 40% stale. Here's the calculator..."
→ Quantifies decay, provides tool, shows impact
✅ Mechanism Exposure:
"Why overnight batch processing fails: Prospect shows high intent Monday 10am (visits pricing page 3x). Batch processes overnight. Gets scored Tuesday morning. Enters sequence Tuesday afternoon. But competitor called Monday at 2pm. Real-time: Scored at 10:15am, in sequence by 11am, called by 2pm."
→ Shows actual timing impact with concrete example
✅ Competitive:
"Demandbase refreshes intent scores daily. 6sense refreshes every 4-6 hours. Unstuck processes signals in real-time (webhook-based). Why it matters: A-stage prospect identified at 10am, SDR notified by 10:05am, call by 10:30am. Not tomorrow. Today."
→ Direct comparison with timing specifics
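The decay calculator those examples reference can be this simple (a linear approximation; the daily rate is whatever you measure for your own lists):
```python
# Estimate how stale an exported list is under a constant daily decay rate.
def stale_share(daily_decay_rate: float, days_since_export: int) -> float:
    """Fraction of contacts expected to have changed since export (linear approx)."""
    return min(1.0, daily_decay_rate * days_since_export)

# ~1.3% daily decay => roughly 40% of a monthly export is stale by day 30.
for day in (7, 14, 30):
    print(f"day {day}: {stale_share(0.013, day):.0%} stale")  # 9%, 18%, 39%
```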
Bad examples:
❌ Hyperbole:
"Real-time or die! Batch processing is DEAD! Anyone using overnight refreshes is LOSING!"
→ Overstated, not substantiated
❌ Too technical:
"Our event-driven architecture leverages Apache Kafka message queues with sub-second latency distributed across microservices..."
→ True but alienates non-technical audience
❌ No proof:
"Real-time is better because it's faster."
→ Circular logic, no evidence
When to use:
- Technical differentiation content
- Speed-to-lead analysis
- Competitive positioning vs databases
- Data drops on list decay
When NOT to use:
- When audience has long sales cycles (not relevant)
- Early awareness content (too in-the-weeds)
- Customer stories (unless speed was critical)
6. Amplification Beats Addition
What it means:
AI should amplify human judgment (the seller chooses who to call; AI finds the best time and angle), not replace it (AI calls everyone autonomously). 100×-ing seller effectiveness > adding 100 AI agents.
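A minimal sketch of the distinction in code terms (all names and fields are illustrative): the model ranks and explains, a human decides.
```python
# Amplification: AI produces a ranked shortlist with reasons; a human chooses.
def recommend(prospects: list[dict], top_n: int = 10) -> list[dict]:
    """Return the model's best candidates. Nothing is auto-sent."""
    return sorted(prospects, key=lambda p: p["model_score"], reverse=True)[:top_n]

shortlist = recommend([
    {"id": "acct-1", "model_score": 0.92, "why": "3 pricing-page visits this week"},
    {"id": "acct-2", "model_score": 0.41, "why": "downloaded a ToF guide"},
], top_n=1)
print(shortlist)  # the seller takes it from here
# Replacement would end with send_email(p) for every prospect. Amplification
# ends with a shortlist and a reason, and leaves the judgment call to the expert.
```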
Why contrarian:
Conventional wisdom: AI SDRs will replace human sellers. More capacity = more revenue.
Our belief: AI amplifies experts, doesn't replace them. Quality > quantity. Intelligence > automation.
Who it attracts:
- Sales leaders who value craft
- GTM teams focused on quality over volume
- Operators skeptical of "AI will do everything"
- People tired of AI hype
Who it repels:
- Buyers looking for "zero human" solutions
- Teams optimizing purely for capacity
- AI maximalists who believe full replacement is answer
Content angles this generates:
- AI positioning (amplification vs replacement)
- Human + AI collaboration content
- Quality vs quantity arguments
- Competitive positioning (vs AI SDRs)
- Future of sales content
Format mapping:
- Primary formats: Provocations, Mechanism Exposure, Competitive Content
- Supporting formats: Founder Lessons, Category Reframing
Good examples:
✅ Provocation:
"AI SDRs send 10,000 emails. Human SDR with AI sends 100 emails to right people at right time with right message. Which drives more pipeline? Volume is vanity metric. Intelligence is how you win."
→ Challenges automation narrative, reframes success metric
✅ Mechanism Exposure:
"Why AI replacement fails: AI can personalize messages, but can't judge strategic fit. Example: AI SDR sees 'high intent' signal, sends message. Human sees high intent + wrong ICP (bad fit), doesn't waste effort. AI optimizes local maximum (send more), human optimizes global maximum (send to right people)."
→ Explains WHY amplification beats replacement
✅ Competitive:
"11x, Artisan, AltaHQ: AI autonomous outreach. Unstuck Engine: AI-powered targeting for human execution. They optimize send volume. We optimize prospect quality. Different philosophy, different outcomes. You choose: 10,000 bad prospects or 100 perfect ones?"
→ Clear philosophical contrast, forces choice
Bad examples:
❌ Anti-AI:
"AI is overhyped! Humans will always be better! AI SDRs are garbage!"
→ Dismissive, not nuanced. We use AI, just differently.
❌ Unclear:
"We believe in human-AI collaboration."
→ Generic, everyone says this
❌ Contradictory:
"AI amplifies humans... and our AI does everything for you!"
→ Mixed message
When to use:
- Positioning vs AI SDRs
- Future of GTM content
- Provocations about automation
- Quality vs quantity arguments
When NOT to use:
- When discussing our own AI features (might seem hypocritical)
- Early awareness content (confusing positioning)
7. Progressive Investment Beats Equal Treatment
What it means:
ACV × Engagement × Fit determines channel mix. High-value hot prospects get ABM. Low-value cold prospects get performance ads. Don't treat all prospects equally.
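A minimal sketch of progressive routing (tiers, thresholds, and channel names are illustrative assumptions):
```python
# Progressive channel routing: spend tracks opportunity value (illustrative).
def route_channel(acv: float, engagement_stage: str, icp_fit: float) -> str:
    high_value = acv >= 50_000 and icp_fit >= 0.7
    if high_value and engagement_stage == "A":   # hot + high-value
        return "ABM + gifting + calls"
    if engagement_stage in ("B", "C"):           # warm
        return "email sequence"
    return "performance ads"                     # cold or low-value

print(route_channel(120_000, "A", 0.9))  # ABM + gifting + calls
print(route_channel(8_000, "C", 0.6))    # email sequence
print(route_channel(8_000, "E", 0.3))    # performance ads
```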
Why contrarian:
Conventional wisdom: Run same play for all prospects in segment. ABM for all target accounts.
Our belief: Resource allocation should match opportunity value. Progressive routing = efficiency.
Who it attracts:
- CFOs / efficiency-minded leaders
- GTM leaders with budget constraints
- Performance-focused operators
- Teams with wide ACV ranges
Who it repels:
- Teams with narrow ACV ranges (less relevant)
- Organizations with unlimited budgets (rare)
- Simple, single-play GTM motions
Content angles this generates:
- ROI / efficiency content
- Budget allocation frameworks
- Channel mix strategies
- Competitive positioning (vs one-size-fits-all)
- CFO-focused content
Format mapping:
- Primary formats: Framework Docs, Mechanism Exposure, Executive Content
- Supporting formats: Data Drops, Experiments
Good examples:
✅ Framework:
"The Progressive GTM Investment Model: [visual matrix]. A-stage Enterprise (high ACV + high engagement) = ABM + gifting + calls. C-stage SMB (low ACV + medium engagement) = email sequence. E-stage (cold) = performance ads. Result: 45% lower CAC, 85% more pipeline. Here's the calculator..."
→ Clear framework, quantified outcomes, actionable tool
✅ Mechanism Exposure:
"Why equal treatment fails: Running ABM on 1,000 accounts costs $500/account = $500K. But only 100 are A-stage (ready to buy). Other 900: $450K wasted on prospects who need nurture, not ABM. Progressive routing: $50K on ABM (100 accounts), $50K on nurture (400 accounts), $20K on ads (500 accounts). Same results, 76% cost reduction."
→ Shows waste from equal treatment, proves progressive model
✅ Executive Content:
"CFOs: Your GTM team is burning budget on low-probability prospects. Here's the audit: % of prospects getting highest-touch treatment × % that convert = efficiency score. <20%? You're overspending. Here's how to audit your mix: [template]"
→ Executive language, ROI focus, actionable framework
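One reading of that audit formula, sketched (the threshold and numbers are illustrative; this interprets "efficiency score" as the conversion rate among your highest-touch prospects):
```python
# Of the prospects getting your highest-touch play, what share converts?
def efficiency_score(highest_touch_prospects: int, conversions: int) -> float:
    return conversions / highest_touch_prospects

score = efficiency_score(highest_touch_prospects=400, conversions=48)
print(f"{score:.0%}")  # 12% -> under 20%: the highest-touch play is aimed too wide
```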
Bad examples:
❌ Callous:
"Low-value prospects don't deserve attention. Only focus on whales."
→ Tone-deaf, dismissive. Progressive ≠ ignoring.
❌ Complex:
"Our n-tier stratification leverages propensity modeling to dynamically allocate cross-channel resources based on Bayesian probability distributions..."
→ Inaccessible to target audience (executives)
❌ No proof:
"You should probably segment your prospects differently."
→ Vague, no framework, no evidence
When to use:
- ROI / efficiency content
- Executive / CFO audience
- Budget optimization arguments
- Competitive vs one-size-fits-all approaches
When NOT to use:
- Early awareness (too complex for cold audience)
- Content aimed at individual contributors (not their decision)
Belief Application Matrix
Quick reference: which beliefs pair with which formats (compiled from the format mappings above):
- Overeducate, Not Oversell → Mechanism Exposure, Diagnostic Labeling, Data Drops (~85% of content)
- Systematic Beats Theatrical → Experiments, Data Drops, Build in Public
- Original Beats Copied → Provocations, Founder Lessons, Mechanism Exposure
- Multi-Dimensional Beats Single-Score → Mechanism Exposure, Diagnostic Labeling, Competitive Content (~40% of content; core differentiator)
- Real-Time Beats Static → Mechanism Exposure, Data Drops, Competitive Content
- Amplification Beats Addition → Provocations, Mechanism Exposure, Competitive Content
- Progressive Investment Beats Equal Treatment → Framework Docs, Mechanism Exposure, Executive Content
Weaving Beliefs Into Content
The 3 Levels
Level 1: Lead with Belief (Provocation, Founder Lessons)
Open with contrarian statement, explain why, provide evidence.
Example: "Everyone copies ICP templates. Here's why that fails: [Original Beats Copied belief]"
Level 2: Embed in Mechanism (Mechanism Exposure, Experiments)
Teach the mechanism, belief implicit in how you teach.
Example: "Here's how qualification works [teaches multi-dimensional approach without saying 'we believe']"
Level 3: Show Don't Tell (Data Drops, Case Studies)
Let data demonstrate belief without stating it.
Example: "We tested [systematic approach]. Results: [data shows systematic beat theatrical]"
Dos and Don'ts
Do:
- ✅ Show belief through actions/data
- ✅ Use beliefs to filter topics (write about what we believe)
- ✅ Let beliefs shape tone (overeducate = generous, detailed)
- ✅ Mix levels (not every piece needs explicit belief statement)
Don't:
- ❌ State all beliefs in every piece
- ❌ Preach ("You MUST believe this!")
- ❌ Contradict beliefs for engagement
- ❌ Use beliefs as product pitch ("Believe in multi-dimensional? Buy Unstuck!")
Belief Conflicts & Tensions
When Beliefs Seem to Conflict
"Overeducate" vs "Systematic"
Education can seem theatrical (big insights, exciting). Systematic is boring.
→ Resolution: Educate on boring systems. Make systematic exciting through clear explanations.
"Original" vs "Frameworks"
We say "don't copy templates" but provide frameworks.
→ Resolution: Frameworks are starting points for customization, not copy-paste solutions. Always include "adapt to your context."
"Amplification" vs "AI Features"
We criticize AI replacement but use AI.
→ Resolution: We use AI for intelligence (scoring, pattern recognition), not replacement (autonomous outreach). Clarify distinction.
How to Handle
- Acknowledge tension explicitly
- Explain the nuance
- Show how both are true in different contexts
- Don't oversimplify for consistency
Brand Voice Through Beliefs
Beliefs shape our voice:
From "Overeducate":
- Generous with insights
- Detailed, thorough
- Sharing, not gatekeeping
From "Systematic":
- Data-first
- Process-oriented
- Show your work
From "Original":
- Contrarian takes
- Challenge conventions
- Don't cite best practices uncritically
From "Multi-Dimensional":
- Nuanced, not binary
- Precision over simplification
- "It depends" is valid answer
See Link Broken for full voice guidelines.
Cross-References
For application:
- Link Broken - How beliefs form our story
- Link Broken - GTM-specific beliefs (subset of these broader beliefs)
- Link Broken - How beliefs shape tone and style
- Link Broken - Which beliefs for which formats
For context:
- Link Broken - How beliefs shape company culture (broader application)
- Link Broken - How beliefs differentiate us in market
