Moat.
Speed.
Allocation.
The platform moat that survives 2028 is being chosen this year.
This briefing tells you which platform moats at specialist methodology firms survive 2028 as AI rewrites build economics. Read it before your R&D allocation locks in for the decade.
The 11-page briefing.
One email. One PDF. Worth twenty minutes of your week.
We send it once. Work emails only.
Monday 9:15, product review. Your VP Engineering opens the quarterly. Adaptive-path v3 shipped two weeks ago. Cohort NPS flat. Your phone buzzes. The founder forwarded a Sana Labs deck at 7am: "thoughts before the board meeting?" Your Head of Content has a brief on his desk: "AI-Enhanced Lean v2.0, Platform Roadmap Implications." Your CFO forwarded the BTS Group EBITA line: "Worth discussing before Q2 planning."
You are not running one R&D function. You are running two, and only one is on your scorecard. One ships what the platform already delivers. The other builds what has to exist by 2028: the per-client customisation engine, the tacit-judgment graph your seniors carry in their heads, the integration depth with your top client systems no AI-native can replicate without your client base.
The moat that matters in 2028 is not the methodology your platform delivers. It is the layer your platform builds underneath the methodology.
This is the question your CEO-founder is already circling. The briefing below is what you want in your hand before the next product review.
Build Velocity. Product Defensibility. R&D Capital Allocation.
Three questions every methodology-firm CTO is tracking. Product Defensibility is the crux. The first and third are how you earn the right to answer it.
Is our platform speed shipping production-grade capability, or adoption-dashboard theatre?
Five engineers can now prototype three AI-native delivery experiences in parallel within a single sprint. Spotify spent four years on the platform foundation that lets agents ship cleanly. You cannot compress four years into eighteen months. Buy what the tool layer can carry. Own the review discipline that makes it ship.
What does our platform do that a well-prompted GPT cannot replicate without our client base?
The methodology content itself commoditises fastest because it was codified by design. Three candidates compound underneath: a per-client customisation engine, a tacit-judgment knowledge graph, and integration depth with your top hundred client systems.
Is our R&D envelope funding the better CD-ROM, or the product the CD-ROM cannot ship?
Plateau capital funds next-gen methodology content, certification-catalogue expansion, and LMS feature parity. Compounding capital builds the judgment-amplification platform. On one hurdle rate the first wins every quarter. On one scorecard the second does not exist.
What you get when you download
An 11-page report for CTOs, CPOs, and Heads of Platform at mid-market European specialist methodology firms. Designed to be read in one sitting before your next product review.
Your industry, your platform, and why they are one problem
What is happening to specialist methodology firms: BTS-shaped margin compression, clients accessing comparable frameworks through their own AI tools, and Sana Labs and the AI-native learning-platform wave quoting at a fraction of your per-cohort fee. What is happening inside your platform: release velocity up, review discipline drifting, a junior bench that has quietly collapsed, and a board AI-strategy list your seat is not on. And the intersection: same force, two altitudes, one problem.
Four moves across build engine, platform and data, product thesis, and R&D bench
Instrument review depth per content module, not just cycle time, and make eval harnesses first-class infrastructure. Build one of three eighteen-month moats underneath the methodology layer: a per-client customisation engine, a tacit-judgment graph, or integration depth. Stand up one judgment-tier price line on a protected P&L. Rebuild the junior pathway around senior-and-agent pairing on judgment-layer infrastructure.
Five questions for your next product review
Is your R&D envelope one instrument or two, and what is the kill criterion on each? Name the AI-native learning platform in your category. How many months to reconstruct the tacit judgment if your two best learning engineers leave for Sana Labs tomorrow? Where did the freed hours from three-experiences-in-parallel go? Is your Q1 boundary agreement with the CEO-founder written, or waiting until after the Head of AI Content shortlist?
Calibrated for each seat at the table.