You bought the licenses. Where's the ROI?
You hired the consultant. You bought the licenses. You ran the pilot. You have the dashboards. The needle didn't move the way you were promised.
You're not failing. You're hitting the structural wall that catches almost every SMB in Phase 2. The reason it catches almost everyone is the same reason it's hard to see from the inside: the failure mode is system-shaped, not person-shaped. Replacing your AI lead won't fix it. Buying a different tool won't fix it. Hiring a different consultant might tell you what's wrong but won't fix it either.
You bought tools to make existing work faster. The actual unlock is redesigning the work itself.
This page covers what's actually going wrong, the research that backs it up, and the five moves to get yourself unstuck.
You might be here if…
- You bought licenses or hired someone to lead AI. Maybe ChatGPT Enterprise for everyone. Maybe Copilot rolled out across the org. Maybe a consulting firm engaged for an assessment. Maybe an internal AI lead appointed. Probably some combination of all four.
- At least one pilot didn't deliver what you hoped. It worked technically. It even produced some metrics that look OK on a slide. But the actual P&L impact you were promised hasn't shown up. The team that ran the pilot is quietly unsure whether to call it a success or a learning experience.
- You can't name three workflows AI has measurably improved. The honest answer to "where is AI saving us money or making us money?" is harder to articulate than it should be. You've spent real dollars. You don't have a clear picture of the return.
If most of these don't sound like you, the page you actually want is probably another phase — see the four phases →
The structural reason most AI investments don't pay off
The numbers from the most rigorous research on this are uncomfortable.
McKinsey's 2025 State of AI surveyed 1,491 executives across 101 countries. They found that only 39% of organizations using AI attribute any EBIT impact to it, and among those, most attribute less than 5% of EBIT to AI. Only 6% of organizations qualify as "high performers" capturing disproportionate value.
A separate study from MIT's NANDA initiative — "The GenAI Divide: State of AI in Business 2025" — analyzed 300 publicly disclosed AI deployments and found that despite $30–40 billion in enterprise generative AI investment, 95% of organizations are seeing zero return on those investments.
That MIT number gets misquoted constantly, so be careful with it: it specifically refers to generative AI pilot programs, not all AI investment. And the same report found that AI initiatives bought from specialized vendors succeed about 67% of the time — while internal builds succeed only one-third as often. The story isn't "AI doesn't work." The story is "most pilots are structured wrong."
The pattern that ties these findings together
McKinsey found one practice that distinguishes the 6% high performers from everyone else: workflow redesign. Only 21% of organizations using generative AI have redesigned at least some workflows. Nearly 80% are layering AI on top of existing processes — capturing maybe 10% of the available value. The other 90% requires actually changing the shape of the work.
That's the structural reason your investment hasn't paid off. You bought tools to make existing work faster. The actual unlock is redesigning the work itself, and almost nobody does that because it's harder, slower, and requires real organizational decisions instead of procurement decisions.
The good news: knowing this is the unlock. The five moves below are ordered to get you there step by step, starting with the cheapest moves and working up to the structural one.
What to do this month
Five moves, this month. The first three are diagnostic — they tell you exactly where the gap is between the money you've spent and the value you've captured. The last two are how you close it. Run them in order; don't skip ahead.
Note: move 5 is a quarter-long shift, not a this-month one. We include it because it's where Phase 2 becomes Phase 3.
- 01 free / this week
Audit your AI license utilization
Most Phase 2 companies don't realize they're paying for unused AI licenses. The admin dashboards for ChatGPT Enterprise, Microsoft 365 Copilot, and similar tools all show usage data — but nobody's looking at it.
Pull the active-user count from each AI tool you're paying for. Compare it to the seat count. The gap is your problem (and probably your CFO's bigger problem). Also useful: pull the per-user activity data — you'll quickly see who your power users are and who's never logged in once.
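The seat math above is simple enough to sanity-check in a few lines. A minimal sketch, assuming you've pulled seat counts and 30-day active users from each admin dashboard by hand (the tool entries and numbers here are illustrative, not real data):

```python
# Hypothetical export: seat counts vs. 30-day active users,
# copied manually from each tool's admin dashboard.
tools = {
    "ChatGPT Enterprise": {"seats": 120, "active_30d": 36},
    "Microsoft 365 Copilot": {"seats": 200, "active_30d": 58},
}

def utilization_report(tools):
    """Return (tool, utilization %, idle seats) rows, worst first."""
    rows = []
    for name, t in tools.items():
        pct = round(100 * t["active_30d"] / t["seats"], 1)
        rows.append((name, pct, t["seats"] - t["active_30d"]))
    # Sort ascending by utilization so the biggest gap tops the list.
    return sorted(rows, key=lambda r: r[1])

for name, pct, idle in utilization_report(tools):
    print(f"{name}: {pct}% utilized, {idle} idle seats")
```

The "idle seats" column, multiplied by your per-seat price, is the number to bring to the CFO conversation.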
Free resource: AI License Utilization Audit Template (coming soon)
- 02 free / this week
Run a pilot postmortem on the one that didn't land
Most failed pilots don't get analyzed; they just quietly fade out. That means you can't learn from them and you can't tell the next pilot's lead what to avoid.
Sit down with whoever ran the pilot and ask four questions: What were we actually trying to prove? What did we expect to happen? What actually happened? What would we do differently? Write the answers down. The pattern across multiple postmortems is your real diagnosis.
Free resource: Pilot Postmortem Framework (coming soon)
- 03 low cost / this month
Check whether the AI role you created is set up to win
If you appointed someone internal to lead AI, the question isn't whether they're the right person — it's whether the role they were given is the right shape. The most common pattern: a smart, motivated person gets the title, no real authority, no budget for tools or training, no executive air cover, and a vague mandate to "figure out AI." The role was set up to fail, regardless of who's in it.
Score the role honestly: Does it have a defined scope? A budget? A direct line to an executive sponsor? Authority to redesign processes? Time carved out of the existing job to actually do the work? If three or more answers are no, the issue is structural. Fix the role before you consider replacing anyone.
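The scoring rule above ("three or more noes means the problem is structural") is mechanical enough to write down. A minimal sketch, assuming an honest yes/no answer to each of the five questions; the example answers are hypothetical:

```python
# The five structural questions from the scorecard above,
# with illustrative (made-up) answers for one AI role.
role = {
    "defined scope": True,
    "budget": False,
    "executive sponsor": True,
    "authority to redesign processes": False,
    "time carved out of the existing job": False,
}

def diagnose(role):
    """Flag the role as structurally broken if three or more answers are no."""
    missing = [question for question, answer in role.items() if not answer]
    if len(missing) >= 3:
        return "structural: fix the role before replacing anyone", missing
    return "role shape is workable", missing

verdict, missing = diagnose(role)
print(verdict)
print("missing:", missing)
```

The point of writing it out: the verdict depends only on the count of noes, not on who holds the role, which is exactly the argument for fixing the role shape first.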
Free resource: Is Your AI Role Set Up to Win? Scorecard (coming soon)
- 04 low cost / this month
Stop layering AI on top of work — start redesigning one workflow
McKinsey's 2025 State of AI report is unambiguous on this: only 21% of organizations using generative AI have redesigned at least some workflows. Nearly 80% are layering AI on top of existing processes. Workflow redesign is the single biggest predictor of EBIT impact from AI.
Pick one workflow you know well. Document the current state — every step, every handoff, every approval. Then ask: "if AI did the steps it can do well, what would this workflow look like?" The answer is usually fewer humans, fewer handoffs, faster cycles, and a different shape entirely. That redesign is where the ROI lives.
Free resource: Workflow Redesign Worksheet (coming soon)
- 05 planning / this quarter
Move from "who has a license" to "who has the skills"
License-and-pray is the most common failure mode in Phase 2. You give everyone access; you assume they'll figure it out; most don't. The result: a small percentage of power users get real value, and the rest treat the tools as a slightly fancier search engine.
The fix isn't more tools — it's structured enablement. Identify your power users. Pair them with non-users in the same role. Run monthly demo days where people show what they're actually doing. Make AI competency a real expectation, not just an option. This shift takes a quarter to land but it's the difference between Phase 2 and Phase 3.
Free resource: Demo Day Playbook (coming soon)
A 220-person manufacturer six months into "doing AI"
Here's the pattern, drawn from the Phase 2 SMBs we see most often. A 220-person mid-market manufacturer six months into "doing AI." The CEO assigned the COO to lead it. The COO did what a competent operator would do: hired a consulting firm for a 12-week assessment, bought ChatGPT Enterprise licenses for the management team, kicked off two pilots — one in customer service, one in production scheduling.
The customer-service pilot kind of worked. Time to first response is down roughly 20%. Customer satisfaction is flat. The head of CS is annoyed because the AI sometimes gives wrong answers. The production-scheduling pilot stalled — the data wasn't clean enough. ChatGPT licenses are 30% utilized per the admin dashboard. The CFO has asked twice if they should cancel some.
At the next board meeting, the chair asks "where are we on AI?" The COO says "making progress." Nobody believes it.
The fix is rarely what the company thinks it is. It's not "kill the pilots and start over." It's not "fire the consultant." It's not even "hire a real AI person." It's smaller and more structural: pick one workflow, redesign it, prove it works at small scale, then build the demonstration into a model the rest of the company can copy. The McKinsey 6% high performers all do this. The other 94% don't.
A quarter later, the same company has one production workflow running differently — fewer handoffs, AI doing the steps it does well, humans doing the judgment work. EBIT impact is small in dollar terms but measurable in percentage terms. The CEO has something concrete to show. The CFO stops asking about license cancellations. The pattern is replicable; the next quarter's effort is to apply it to a second workflow. That's the start of Phase 3.
If you'd rather not run this play alone — Phase 2 is one of the most common engagements Sillewa runs, especially for COOs who've already invested in AI and need an outside perspective on what's actually broken. Here's what working with us looks like →
What changes when you move to Phase 3
When the moves above land — when at least one workflow is redesigned, when your power users are identified and supported, when a pilot actually delivers measurable P&L impact — you'll find yourself asking different questions. Not "is this working?" but "how do we accelerate this?"
You'll have early wins to spread, an emerging set of power users, and the beginnings of an "AI champion" function inside your org. Your problem will shift from "the investment isn't paying off" to "the investment is paying off in two places and we can't get it to spread."
That's Phase 3.
Read what to do in Turning the corner →
If you found this useful, also read:
- Credits & sources — the McKinsey, MIT NANDA, and BCG research backing the claims on this page
- More about Sillewa — who we are and how we work
The other phases
Where you actually are matters. If Phase 2 isn't quite right, here's the rest: