
Lead, Follow, or GTFO: The AI-Native Mandate

The 'lead, follow, or get out of the way' framework isn't just a suggestion for AI-native teams—it's an existential mandate.

TL;DR: In AI-native teams, the cost of straddling decisions is exponentially higher. Here's how to spot toxic back-benching, build real alignment, and know when stepping aside is the strongest move.

Mark Suster's "Lead, Follow, or Get the F*ck Out of the Way" is more than a classic framework; for AI-native organizations, it's a mandate. When velocity and alignment are existential, Suster's rule is the only one that matters. The cost of indecision is no longer linear; it's exponential. Let's unpack why, and how to operationalize this in modern tech organizations.

Indecision is an exponential tax in AI teams

We've all seen the institutional rot: middle managers running interference, endless consensus meetings, resources scattered to appease every ego. In traditional B2B, this is a painful tax. With AI, that old-school indecision doesn't just slow you down; it buries you. Here's why:

  • Velocity Tax: AI development cycles move at warp speed, and each experiment feeds the next. When teams hedge or straddle decisions, they're not just losing linear time; they're forfeiting every learning cycle that would have built on the ones they skipped.

  • Data Debt: Unclear direction leads to inconsistent data collection and model training. This isn't just a temporary setback; it creates compounding technical debt that becomes harder to unwind.

  • Alignment Entropy: In traditional teams, misalignment might slow progress. In AI teams, it can invalidate entire development paths. When different groups interpret AI capabilities or ethical boundaries differently, the waste isn't measured in lost meetings; it's measured in abandoned models, datasets, and experiments.

The line has moved: Dissent vs. toxic back-benching

The line between healthy skepticism and toxic back-benching is finer and more dangerous in an AI-native context. Here's the difference:

Healthy dissent looks like:

  • Bringing data, not just doubts
  • Offering alternative implementations, not just critiques
  • Being willing to be proven wrong (and admitting it)
  • Disagreeing and committing once the decision is made

Toxic back-benching looks like:

  • "That will never work" without specifics
  • Passive resistance to implementation
  • Selective pessimism, or quietly going along with the plan without real buy-in
  • Undermining without offering alternatives

Operationalizing alignment: An AI-native playbook

I recently worked with a prospect who was trying to drive AI adoption within their organization. They showed me an impressively detailed document outlining DRIs (directly responsible individuals), decision-making frameworks, and the technologies under evaluation. In the old world, this was the gold standard of strategic planning.

But in an era where new models are released weekly, this kind of structured planning isn't just slow—it's a liability. You can't afford to spend cycles on meta-organization when the technology itself is evolving under your feet. The AI-native playbook, at least for now, is radically simpler: get things done.

That prospect's document shouldn't have been a treatise on process. It should have been a single page stating:

  • Hypothesis: Here's what we believe is possible.
  • Test: Here's how we're going to test it this week.
  • Deadline: We'll have an answer by Friday.
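To make that concrete, here's what such a one-pager might look like. Every specific below (the use case, the numbers, the team) is a hypothetical illustration, not a prescription:

  • Hypothesis: An off-the-shelf LLM can draft usable first-pass replies for half of our tier-1 support tickets.
  • Test: Route 100 tickets through a draft-then-review workflow with two support engineers this week.
  • Deadline: Go/no-go in Friday's team review.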

The moment you start building RACI matrices for an AI project, you've already lost. You're spending more time organizing the work than doing the work. You're debating process instead of testing hypotheses.

And in this market, that's a fatal mistake.

The power of stepping aside

In AI-native organizations, knowing when to step aside isn't just about reducing friction; it's about enabling breakthrough velocity. It's a strategic move, not a sign of weakness. Consider stepping aside when:

  1. You're the experience anchor

    • Your traditional tech experience is making you reflexively skeptical
    • You find yourself saying "but we've always..." too often
  2. You're the velocity bottleneck

    • Your approval or review process is slowing AI iterations
    • Teams are working around you to make progress
  3. You're the alignment tax

    • Your need for consensus is creating decision paralysis
    • Your questions are masking resistance to change

Putting it into practice

Start with these tactical steps:

  1. Audit Your Stance

    • Are you leading, following, or neither on key AI initiatives?
    • What's your "dissent footprint" in meetings and docs?
    • Are you comfortable operating without DRI, RACI, and other traditional project management frameworks?
  2. Check Your Alignment

    • Can you articulate the AI strategy clearly?
    • Are you a believer or do you need to be convinced?
  3. Make a Choice

    • Lead: Commit to leading a key AI initiative. Remember, practice makes perfect, and we're all learning.
    • Follow: Actively align with and support the direction by providing resources, visibility, and political capital (especially when initiatives fail).
    • GTFO: Make space for those who will explore. This can be as simple as taking a task off someone's plate, removing barriers, approving budget, or respecting calendar placeholders set for exploration. This is one of the hardest moves, yet one of the most important: to truly get out of the way, you'll have to let them cook, even when that's uncomfortable.

The bottom line

The cost of organizational friction in AI-native teams isn't just slower progress; it's missed opportunities that compound over time. Whether you're leading AI initiatives, following a clear vision, or realizing you need to step aside, the key is making that choice explicitly and owning it completely.

The worst position? Straddling the middle, creating subtle friction, and slowly bleeding organizational momentum. In the AI era, that's not just inefficient—it's existential.

