Most Organizations Are Not Doing AI. They're Buying Copilots.
- Clayton Dendy

- Mar 24
Updated: Mar 26
Eighty-five percent of Fortune 500 companies now mention AI in their annual filings. That sounds like progress. But a recent PwC survey tells a different story: only 12% of CEOs say AI is delivering both cost savings and revenue benefits. The rest are somewhere between “we bought Copilot licenses” and “we’re figuring it out.”
That gap is not surprising. It tracks with what we see across our client base at Terra Dygital. McKinsey’s State of AI in 2025 report, published November 2025, puts a finer point on it: 88% of organizations now use AI in at least one function, but only about one-third have begun to scale, and just 6% report that AI is driving meaningful bottom-line impact. Most companies have taken a first step with AI. Very few have taken the second one. And the distance between those two steps is not a technology gap. It is a strategy, data, and leadership gap.

Three Levels of AI Maturity
A recent CIO.com piece by Dan Roberts laid out a framework for understanding where organizations sit on the AI maturity curve. We have adapted it here because it mirrors what we see in practice across mining, financial services, and technology companies.
Level 1: Copilot Mode
The organization has deployed AI tools. Employees are using Copilot, ChatGPT, or similar products for daily tasks. Leadership can tell the board “we’re using AI.” But there is no formal strategy behind it, no way to measure outcomes, and no coordination across departments. Adoption is organic and unmanaged. In a lot of cases, employees are experimenting on their own, creating shadow AI usage that IT and security teams have no visibility into. That is not a small problem. When people bring unapproved AI tools into the workflow and feed them company data, you have a governance gap that compounds over time.
Level 2: Outcomes-Driven
AI initiatives are tied to specific business outcomes. There is a secured budget, board-level support, and a roadmap that people actually follow. Use cases have been identified, piloted, and measured. Data quality is treated as a prerequisite, not something you will get to eventually. The organization knows what AI is doing for them and can put a number on it.
Level 3: AI-First
AI is embedded in how the organization operates. Not as a layer bolted on top of existing processes, but as a core part of decision-making, service delivery, and competitive positioning. The CEO is actively driving the agenda. Very few organizations are here today.
Most of the organizations we work with are solidly in Level 1. That is not a criticism. For mid-market companies in 2026, it is the norm. The question worth asking is not “are we behind?” but rather “are we stuck without knowing it?”
Why Experimentation Feels Like Progress (But Isn’t)
Here is the pattern we see most often. An organization rolls out Copilot across Microsoft 365. Adoption picks up. Someone builds an internal chatbot. A team runs a proof of concept. Leadership reports to the board that the company is “doing AI.”
Then nothing changes structurally. Nobody has mapped which processes are candidates for automation versus augmentation. Nobody has checked whether the data feeding those tools is clean, governed, or even accessible. Nobody has defined what success looks like beyond “people seem to be using it.”
Afshean Talasaz, former SVP and chief technology officer at Colonial Pipeline, put it sharply in the CIO.com piece:
“Getting your feet wet is not preparation for scale. If I do not know how to swim, stepping into the ocean is not going to prepare me to go swimming far from shore.”
This is the trap. Organizations equate experimentation with readiness. They assume the hard part is behind them and that scale is just a matter of time and budget. It is not. Scaling AI requires deliberate groundwork that most Level 1 organizations have not even started: data cleanup, governance frameworks, process redesign, change management, and genuine executive alignment on what “success” actually means.
The Uncomfortable Truth About Your Data
Every AI conversation eventually becomes a data conversation. The organizations that skip this step pay for it later, usually with interest.
Consider the Edward Jones example from the CIO.com piece. The financial services firm spent 18 months cleaning and restructuring their data before they could unlock meaningful AI value. Eighteen months of taxonomy work, data quality initiatives, and migration from a decades-old legacy mainframe system to Salesforce. That is not the kind of headline anyone wants to write. But it is the work that made everything after it possible.
Edward Jones processes over half a million client conversations per week. When that data was cleaned and curated, it became a competitive advantage, enabling insights and personalization at a scale that would have been impossible before. Uncleaned, it was a liability that produced unreliable outputs and eroded trust.
This pattern holds across every industry we work in. Organizations that layer AI on top of inconsistent, siloed, or poorly governed data get inconsistent, siloed, and poorly governed outputs. The AI does not fix the data problem. It amplifies it.
If your data is not treated as a strategic asset before you deploy AI, it will not become one after.
We regularly encounter organizations where SharePoint and OneDrive structures have grown organically for years with no consistent taxonomy or retention policy. When Copilot is deployed on top of that environment, it surfaces stale files, misclassified documents, and outdated procedures alongside current ones. The AI is working exactly as designed. The data underneath is simply not ready for it. This is also where cybersecurity enters the picture. Shadow AI, ungoverned data, and inconsistent access controls create a risk surface that grows quietly in the background. For a deeper look at how these risks compound, see our piece on how cybersecurity services companies protect your business data.
What Has to Change to Reach Level 2
Moving from copilot mode to an outcomes-driven AI program is not primarily a technology investment. It is an organizational shift. Based on what we see working, and what we see failing, here are the five areas that matter most.
1. Treat Data Quality as Infrastructure, Not a Project
Data cleanup is not a one-and-done initiative you can check off a list. It is an ongoing operational commitment, the same way patching and backups are. Build data quality into your daily processes: validation at the point of entry, regular audits, defined ownership, and retention policies that people actually enforce. If your AI tools are producing inconsistent results, the first place to look is not the model. It is the data feeding it.
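To make “validation at the point of entry” concrete, here is a minimal sketch in Python. The field names, freshness threshold, and rejection rules are illustrative assumptions, not a prescription for any particular system; the point is where the check happens, not what the check is.

```python
from datetime import datetime, timedelta, timezone

# Illustrative rules only: the required fields and freshness threshold are
# assumptions. Adapt them to your own record schema and retention policy.
REQUIRED_FIELDS = {"client_id", "owner", "last_reviewed"}
MAX_AGE = timedelta(days=365)

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record may be ingested."""
    problems = [f"missing field: {name}" for name in REQUIRED_FIELDS - record.keys()]

    last_reviewed = record.get("last_reviewed")
    if last_reviewed:
        try:
            reviewed_at = datetime.fromisoformat(last_reviewed)
        except ValueError:
            problems.append("unparseable date in last_reviewed")
        else:
            if reviewed_at.tzinfo is None:
                reviewed_at = reviewed_at.replace(tzinfo=timezone.utc)
            if datetime.now(timezone.utc) - reviewed_at > MAX_AGE:
                problems.append("stale: last_reviewed is older than the freshness threshold")

    return problems

def ingest(records: list[dict]):
    """Split incoming records into accepted and rejected-with-reasons at the point of entry."""
    accepted, rejected = [], []
    for record in records:
        problems = validate_record(record)
        if problems:
            rejected.append((record, problems))  # route to a data-quality queue, not the main store
        else:
            accepted.append(record)
    return accepted, rejected
```

The specific rules matter less than the placement: bad records are caught and routed to an owner the moment they enter the system, instead of being discovered months later by whatever AI tool happens to retrieve them.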
2. Tie Every AI Initiative to a Measurable Business Outcome
“We want to use AI” is not a strategy. “We want to reduce client onboarding time by 40% using AI-assisted document processing” is. Every AI initiative should have a defined outcome, a timeline, and an owner. If you cannot articulate what success looks like in terms the business cares about, you are not ready to invest. This sounds obvious, but you would be surprised how many AI pilots we see running with no success metric attached to them at all.
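One hedged way to force that discipline is to capture each use case as structured data rather than a slide. The sketch below is an assumption about what such a template could contain, with example values borrowed loosely from the onboarding scenario above; what matters is that the outcome, owner, metric, baseline, target, and review date exist in writing before any tool is chosen.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIUseCase:
    # Illustrative template; field names and values are assumptions, not a
    # reference to any specific engagement or product.
    name: str
    owner: str                 # a named person, not a department
    business_outcome: str      # stated in terms the business cares about
    metric: str                # how success will be measured
    baseline: float            # where the metric stands today
    target: float              # where it must land for the pilot to count as a success
    review_date: date          # when the result is reported, good or bad

onboarding_pilot = AIUseCase(
    name="AI-assisted client onboarding",
    owner="Head of Client Operations",
    business_outcome="Reduce client onboarding time",
    metric="Median onboarding time in business days",
    baseline=10.0,
    target=6.0,   # roughly the 40% reduction used as an example above
    review_date=date(2026, 9, 30),
)
```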
3. Get the CEO in the Room
Talasaz is direct on this point: a highly supportive CEO is not an attribute of success; it is a requirement. You can get small operational wins without executive sponsorship. You will not get enterprise-scale transformation. AI strategy needs to be a standing agenda item at the leadership level, not a quarterly update buried in the IT section of a board deck. When the CEO treats AI as someone else’s initiative, the rest of the organization takes the cue. For organizations that do not have a full-time CIO driving this conversation, a virtual CIO (vCIO) advisory model can provide the strategic IT leadership needed to align AI efforts with business priorities and get the right people in the room.
4. Balance Quick Wins with Long-Term Foundation
Zar Toolan, former chief data and AI officer at Edward Jones, describes this as the Run-Grow-Transform model. You need to show operational efficiency gains now (Run), leverage data for growth opportunities (Grow), and simultaneously build the foundational architecture for the future (Transform). Organizations that only chase quick wins never build the foundation. Organizations that only build foundations lose stakeholder patience. The discipline is doing both at the same time, and being transparent with your board about why both matter.
5. Reduce Internal Uncertainty Before It Becomes Resistance
AI transformation creates anxiety at every level of the organization. Employees worry about displacement. Middle management worries about relevance. Leadership worries about ROI. The antidote is not cheerful all-hands presentations or vague reassurances. It is detailed, transparent communication about what is changing, why, and how it affects each group specifically.
Talasaz frames this as managing internal VUCA: volatility, uncertainty, complexity, and ambiguity. The external environment is already creating plenty of VUCA. The last thing leaders should do is pile more on internally through vague strategy, shifting priorities, or lack of follow-through. Preparation reduces VUCA. Detail reduces VUCA. Consistency reduces VUCA. When people know what is happening and why, they stop guessing and start executing.
Details Are Not a Distraction. They Are the Strategy.
There is a tendency in executive conversations to stay at the macro level. Vision. Strategy. Roadmap. Those absolutely matter. But as Talasaz observes, the most important thing to scale is detail. You cannot scale without getting the details right.
We see this play out regularly. An organization has a strong AI vision and a compelling boardroom narrative. But nobody has mapped the data flows. Nobody has defined which processes are candidates for automation versus augmentation. Nobody has assessed whether the current infrastructure can support the workloads they are planning for. The vision is sound. The execution plan simply does not exist.
C-level leaders do not need to do the detail work themselves. But they need to understand it well enough to ask the right questions, allocate the right resources, and remove the right blockers. In 2026, business strategy and technical execution are not separate conversations anymore. Leaders who continue to treat them as separate will find their AI strategies stalling at Level 1, no matter how much budget they throw at the problem.
Three Things to Do This Quarter
Assess Your Data Readiness
Before expanding any AI initiatives, audit the data they depend on. Is it clean? Is it governed? Is ownership clear? If you deployed Copilot across your M365 environment, does your SharePoint and OneDrive structure support the kind of retrieval AI needs, or is it surfacing stale files and misclassified documents? A data readiness assessment is the single highest-value activity a Level 1 organization can undertake. Everything else builds on it. Whether you run this assessment internally or bring in outside support, the key is to treat it as a structured IT governance exercise, not a one-off cleanup project. This is exactly the kind of work that fits under strategic IT leadership, whether that comes from your CIO, a fractional CIO, or a vCIO engagement. Someone needs to own the process, define what good looks like, and hold the organization accountable to it.
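As one small illustration of what the file-level slice of that assessment can look like, the sketch below walks a locally synced copy (or file-share export) of a document library and reports files that have not been modified in years. The paths and the staleness threshold are assumptions; a real assessment would also cover ownership, classification, and access controls, which a script like this cannot see.

```python
import csv
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Assumptions: the library is available as a synced folder or file-share export,
# and "stale" simply means not modified in the last three years. Adjust both.
LIBRARY_ROOT = Path("./synced-document-library")
STALE_AFTER = timedelta(days=3 * 365)

def find_stale_files(root: Path, stale_after: timedelta):
    """Yield (path, last_modified) for every file older than the staleness cutoff."""
    cutoff = datetime.now(timezone.utc) - stale_after
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if modified < cutoff:
            yield path, modified

if __name__ == "__main__":
    # Write a simple report a data owner can review and act on.
    with open("stale-files-report.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["path", "last_modified"])
        for path, modified in find_stale_files(LIBRARY_ROOT, STALE_AFTER):
            writer.writerow([str(path), modified.date().isoformat()])
```

A report like this fixes nothing on its own, but it gives the data-readiness conversation a concrete starting point: a named list of candidates for archiving, reclassification, or deletion that an owner can be held accountable for.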
Define Two or Three Outcome-Based Use Cases
Pick specific business problems where AI can deliver measurable value. Scope them tightly. Assign an owner and a success metric. Run them as structured pilots with defined timelines, not open-ended experiments that quietly fade out. The goal is not to prove that AI works. Everyone already knows AI works. The goal is to prove that your organization can execute an AI initiative from planning through measurement. Whether your team drives this internally or works with an outside advisor, the discipline is the same: define the outcome before you pick the tool.
Have the Strategy Conversation
If AI has not been discussed at the board or executive level with a specific strategy and budget attached to it, that is the first gap to close. This does not need to be a 50-page plan. It needs to be an honest assessment of where you are, where you want to be, and what it will realistically take to get there. Some organizations can facilitate this conversation on their own. Others benefit from bringing in a strategic IT partner or fractional CIO to help frame the discussion and connect technical realities to business priorities. Either way, that conversation, more than any technology purchase, is what separates Level 1 from Level 2.
Frequently Asked Questions
We already deployed Copilot. Doesn’t that mean we’re doing AI?
You are using AI tools, and that is a perfectly good starting point. But tool deployment is not the same as strategy. The real question is whether those tools are tied to measurable business outcomes, supported by clean and governed data, and producing results you can actually quantify. If you cannot answer those questions clearly, you are at Level 1. That is a common and reasonable place to be. But being honest about the gaps is the only way to close them.
How long does it take to move from Level 1 to Level 2?
It depends on your data maturity, organizational alignment, and the complexity of the use cases you are targeting. Edward Jones spent 18 months on data preparation alone. For most mid-market organizations, a realistic timeline is 12 to 24 months of focused effort. The key word there is focused. Scattered pilots and underfunded initiatives will not get you there. Organizations that try to shortcut the data and governance work typically end up restarting later at greater cost and with less organizational patience.
Do we need a Chief AI Officer?
Not necessarily. What you need is clear ownership and accountability. For most mid-market organizations, the CIO or CISO can own the AI governance and strategy function with support from business leadership. Adding a new title to the org chart does not solve the problem if the underlying organizational alignment is missing. What matters is that someone specific is responsible for connecting AI initiatives to business outcomes, and that person has the authority to make decisions across departments.
What role does cybersecurity play in AI maturity?
A bigger one than most organizations realize. AI governance, data classification, acceptable use policies, vendor risk management for AI tools, and security controls around AI workflows all intersect with the cybersecurity and compliance function. Organizations that treat AI as purely an IT or innovation initiative, without involving security and risk, are building on a foundation that will not hold. Shadow AI is a particularly sharp risk for Level 1 organizations. When employees use unapproved tools with company data and nobody is tracking it, you have a compliance and security exposure that only grows over time.
Our board is asking about AI ROI. What should we tell them?
Be honest about where you are on the maturity curve and present a clear path forward. Boards respond well to candor paired with a plan. If you are at Level 1, say so, and lay out a roadmap for reaching Level 2 with defined milestones, timelines, and investment requirements. Skip the vague promises about AI’s transformative potential. Instead, point to two or three specific use cases where you can demonstrate measurable value within six to twelve months, while being transparent about the foundational data and governance work that will enable broader scale over 18 to 24 months. The worst thing you can do is overpromise and underdeliver. The second worst thing is to stay quiet and hope nobody asks.


