Don't Throw Technology at a Problem — Process and Change Management Come First
BLUF/Summary
The most expensive technology failures I've seen weren't caused by bad technology. They were caused by organizations deploying capable tools on top of undefined processes, without investing in the change management needed to get people to actually use them. The pattern is consistent: a leader identifies a problem, buys a platform to solve it, and deploys it. Six months later, the tool is shelfware, the team is back to the old way, and someone is asking for budget to try a different tool. The fix isn't better technology selection. It's doing the work that should happen before the technology arrives: defining the process the tool will support, preparing the people who will use it, and designing the rollout to survive contact with reality.
The Pattern
Early in my career building IT and process infrastructure for a company that scaled to over 500 employees, I watched this cycle play out enough times to recognize it as a pattern — and eventually, to design a system that prevented it.
A department leader comes to the IT team with a problem. "Our project tracking is a mess. We need a project management tool." Or: "Our onboarding takes too long. We need an HR platform." Or: "We keep getting security findings. We need a vulnerability management system."
The instinct is to start evaluating tools. Who are the vendors? What are the features? How much does it cost? What do the reviews say? The team spends weeks or months on a selection process, picks the best tool, buys the license, deploys it — and then discovers that the real problems were never about the tool.
The project tracking was a mess because nobody had defined what information a project manager is expected to track, at what cadence, or in what format. The new tool gave them a better place to not do the thing they weren't doing before.
The onboarding took too long because the process had never been mapped end to end. Nobody knew which of the seven steps across three departments were sequential versus parallel, or who owned each handoff. The new platform automated a broken process faster.
The security findings kept appearing because the recurring compliance activities — quarterly access reviews, monthly vulnerability scans, annual policy updates — depended on someone remembering to do them. The vulnerability management tool found more problems more efficiently, but nobody had built the operational rhythm to resolve what it found.
In every case, the technology worked exactly as advertised. The organization wasn't ready for it.
Why Technology Feels Like the Answer
Before going further, I want to acknowledge why this pattern is so persistent. It's not because leaders are careless. It's because technology is tangible and process is not.
When a leader has a problem, "buy a tool" is a concrete action with a clear timeline. You can put it on a roadmap. You can measure progress: vendor selected, contract signed, system deployed, users provisioned. Each milestone feels like forward motion.
"Define the process this tool will support" is harder to schedule, harder to measure, and harder to feel good about. It involves workshops, disagreements, edge cases, documentation, and the unglamorous work of getting people to agree on how something should actually work. It doesn't have a vendor. It doesn't have a demo. It doesn't have a launch date that you can announce to the leadership team.
So the process work gets skipped: not intentionally, but because the technology work is more legible, more fundable, and more satisfying to report on. The result is a beautifully implemented tool sitting on top of an undefined process, used by people who were never prepared for the change.
The Three Things That Come Before the Tool
After watching this pattern enough times, I started requiring three things before any significant technology deployment. They weren't optional, and they weren't afterthoughts. They were prerequisites.
1. Define the process the tool will support
Before you select a technology, you need to be able to describe — on paper, in plain language — the process it's going to enable. Not at the abstract level of "we need better project tracking," but at the operational level: What triggers the process? Who initiates it? What information is captured at each step? Who approves what? Where do handoffs happen? What does "done" look like?
If you can't write this down before you buy the tool, the tool won't fix the problem. It'll just give the problem a new address.
This doesn't mean you need perfect, exhaustive documentation. A one-page process flow showing the steps, owners, and decision points is often enough. The goal is to force the team to confront the question they've been avoiding: "How does this actually work?" If the answer is "it depends" or "different people do it differently," that's the problem to solve — and it's a process problem, not a technology problem.
At scale, I maintained process documentation for every major workflow — from employee onboarding (a multi-step sequence across IT, HR, security, and the hiring manager) to system access requests to change management to incident response. Each process had defined steps, defined owners, checklists where appropriate, and templates for the outputs. When a new tool was deployed, it was deployed into a defined process. The tool accelerated the process. It didn't invent it.
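A one-page process definition of the kind described above can also be captured as structured data, which makes gaps visible before any tool is selected. This is an illustrative sketch only — the field names and the example workflow are hypothetical, not the author's actual schema:

```python
# Hypothetical sketch: a process definition as data, so missing owners
# and undefined handoffs surface before a tool is ever configured.
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    owner: str                  # role accountable for this step
    inputs: list = field(default_factory=list)
    approver: str = ""          # set when the step requires sign-off

@dataclass
class Process:
    name: str
    trigger: str                # what initiates the process
    steps: list                 # ordered list of Step objects
    done_when: str              # explicit definition of "done"

    def undefined_owners(self):
        """Steps that would fail the 'who owns each handoff?' test."""
        return [s.name for s in self.steps if not s.owner]

onboarding = Process(
    name="Employee onboarding",
    trigger="Signed offer letter received by HR",
    steps=[
        Step("Create accounts", owner="IT", inputs=["start date", "role"]),
        Step("Security briefing", owner="Security", approver="Security lead"),
        Step("Assign equipment", owner="IT"),
        Step("First-week plan", owner="Hiring manager"),
    ],
    done_when="New hire has accounts, equipment, and a first-week plan",
)

# Every handoff has an owner — the question "how does this actually work?"
# has a written answer before any platform is evaluated.
assert onboarding.undefined_owners() == []
```

The point isn't the code; it's that if you can't fill in these fields, no tool can fill them in for you.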
2. Design the change management approach
Most technology deployments fail at adoption, not implementation. The system is live. The users don't use it. Or they use it wrong. Or they use it for a month and drift back to email and spreadsheets.
Change management is the discipline of preparing people for a change before it arrives, supporting them through the transition, and reinforcing the new behavior until it becomes the default. It's not a memo. It's a plan.
For any deployment that touched more than a handful of people, I required a written change management and communication approach before approving the project. That approach had to address several things: How will we communicate what's changing, to whom, through which channels, and how far in advance? Who are the different types of users, and how do their needs and contexts differ? Have we tested the approach with real users before rolling it out to everyone?
That last question is critical. Different people in the same organization experience a technology change very differently. I learned to think in terms of employee personas. A project manager running a 20-person software development team from an agile delivery center has a very different relationship with a new tool than a team lead working inside a classified facility where most websites are blocked and phones aren't allowed. A development manager with high technical comfort is different from a veteran fellow with low technical comfort who recently joined the organization.
If your change management approach treats all of these people the same — one email blast, one training session, one FAQ — you'll get adoption from the people who were already comfortable with the change and resistance from everyone else.
The most effective deployments I led included a combination of communication channels (email, team meetings, one-on-one conversations with key stakeholders), identified testing groups from different parts of the organization, and a phased rollout that incorporated feedback from early users before expanding. None of this is rocket science. But it requires someone to plan it rather than assuming that "the tool is intuitive; people will figure it out."
3. Build the operational rhythm to sustain it
A tool that launches successfully can still fail over time if nobody builds it into the organization's operating rhythm.
Every technology that requires ongoing action — recurring reports, periodic reviews, regular maintenance — needs to be connected to the operating cadence. That means scheduled, recurring reviews of usage, feedback, satisfaction, open issues, and alignment with other systems.
I've written about operating cadences as your strategy's immune system — the recurring rhythm that keeps the organization on course. Technology deployments that don't connect to the cadence become orphans. They work at launch, when attention is high, and then quietly decay as attention moves to the next priority. The cadence is what prevents that decay.
We built a centralized Recurring Activity Matrix (RAM), a system that auto-generated tickets for every recurring operational and compliance obligation. When we deployed a new vulnerability management tool, the recurring scans and remediation review cycles were immediately added to the matrix. When we deployed a new approval routing platform, the recurring form reviews and system maintenance tasks were added. The technology didn't create the discipline. The operating rhythm did. The technology served the rhythm.
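The core mechanic of a system like this is simple: each obligation carries a cadence, and a ticket is generated whenever that cadence has elapsed. The sketch below is illustrative — the activity names, fields, and function are hypothetical, not the actual RAM implementation:

```python
# Hypothetical sketch of recurring-ticket generation: each activity has a
# cadence, and a ticket is opened whenever that cadence has elapsed.
from datetime import date, timedelta

RECURRING_ACTIVITIES = [
    {"task": "Vulnerability scan", "owner": "Security",   "every_days": 30},
    {"task": "Access review",      "owner": "IT",         "every_days": 90},
    {"task": "Policy update",      "owner": "Compliance", "every_days": 365},
]

def tickets_due(activities, last_run, today):
    """Generate a ticket for each activity whose cadence has elapsed."""
    tickets = []
    for a in activities:
        due = last_run[a["task"]] + timedelta(days=a["every_days"])
        if due <= today:
            tickets.append({"task": a["task"], "owner": a["owner"], "due": due})
    return tickets

last_run = {
    "Vulnerability scan": date(2024, 1, 1),
    "Access review": date(2024, 1, 1),
    "Policy update": date(2024, 1, 1),
}
open_tickets = tickets_due(RECURRING_ACTIVITIES, last_run, today=date(2024, 4, 1))
# By April 1, the 30-day scan and the 90-day access review have come due;
# the annual policy update has not.
assert [t["task"] for t in open_tickets] == ["Vulnerability scan", "Access review"]
```

Each generated ticket has an owner and a due date, which is what moves the obligation from "someone remembers" to "the system reminds."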
The Inverse Is Also True
It's worth noting that the reverse of this pattern is equally real and equally powerful. When an organization has defined its processes, prepared its people, and built its operational rhythms, deploying a new tool becomes remarkably smooth.
At one point, we needed to deploy a new enterprise approval routing platform to replace manual email-based approvals. Because the approval processes were already documented (who requests, who approves, what information is required, what the SLAs are), the configuration of the new tool was essentially a translation exercise: map the documented process into the platform's workflow engine. Because we had a change management playbook, the communication and rollout followed a tested pattern. Because the recurring activities were already tracked in our matrix, the ongoing maintenance of the new system was immediately part of the rhythm.
The deployment wasn't easy — no enterprise deployment is. But it was orderly. The technology served a defined purpose, the users were prepared, and the operating rhythm kept it healthy after launch. That's what process-first technology deployment looks like.
The AI Amplification Effect
Everything I've described becomes more important, not less, in the AI era.
AI tools are the most powerful technology most organizations have ever deployed. They're also the most dangerous to deploy without process and change management, because they're capable enough to do things autonomously. A traditional tool that's deployed without a defined process sits idle. An AI agent that's deployed without a defined process takes action — and it takes action based on whatever ambiguous, incomplete, or outdated information it can find.
I wrote about this dynamic in Your AI Agents Are Only as Smart as Your Process Assets. The argument there was about knowledge infrastructure — if your processes aren't documented, the AI has nothing accurate to act on. The argument here is the upstream companion: before you deploy the AI agent, define the process it will support. Prepare the people who will work alongside it. Build it into your operating cadence so someone is monitoring its performance, reviewing its outputs, and catching its mistakes.
The organizations that will get the most from AI aren't the fastest adopters. They're the ones with the strongest process foundations, the most thoughtful change management muscles, and the most disciplined operating rhythms. AI amplifies whatever system it's plugged into. If the system is well-defined and well-maintained, AI makes it faster and smarter. If the system is undefined and chaotic, AI makes it faster and more chaotic.
The Keel Connection
In the Keel Framework, I place technology deployment inside Element 4 — Sail and Adjust. Technology is sail. It makes you faster. But the Keel Framework's core argument is that you can't add sail without building the keel first.
The keel, in this context, is the defined process, the clear roles and responsibilities, the operating cadence, and the knowledge management infrastructure that the technology plugs into. The change management approach is how you raise the sail without capsizing the boat. And the recurring operational rhythm is what keeps the sail trimmed after the initial deployment.
When organizations throw technology at problems without doing this work, they're adding sail to a boat with no keel. The wind catches, the boat tips, and someone says "that tool didn't work." The tool was fine. The boat wasn't ready.
Where to Start
If you're about to deploy a new tool — whether it's a project management platform, an AI assistant, or a new security system — pause and ask three questions before you start configuring anything:
- Can we describe, in writing, the process this tool will support? If you can't draw the workflow on a whiteboard in fifteen minutes — with steps, owners, and decision points — you have a process problem, not a technology problem. Solve that first.
- Have we planned how we'll prepare the people who will use this? Not a training session after launch. A change management approach that starts before the tool arrives: communicating what's changing, why it's changing, how it affects different types of users, and who they can ask when they get stuck.
- Have we connected the ongoing operation of this tool to our operating rhythm? Is there a recurring review of its performance? Are the maintenance tasks and compliance activities it generates tracked in a system with assigned owners and due dates? If the tool requires ongoing human attention to stay healthy, have you built that attention into your cadence?
If the answer to any of these is "not yet," that's where the real work starts — and it starts before the vendor demo.
This is part of an ongoing series on building enterprise operating systems. Read more about the full approach in the Keel Framework, or explore related posts on the ambiguity tax of unclear roles, your operating cadence as strategy's immune system, process assets and AI readiness, and compliance as an operating system.