We Are Building AI on Broken Foundations. Here Is How to Fix It.
By Kameron Olsen, President — The Channel Advisors
I want to talk about something that has been on my mind for a while, and I think it is one of the most important conversations we need to be having in the channel right now.
AI is not failing because the technology is bad. It is failing because we are asking it to learn from us, and we have not done the work of figuring out what we actually know.
MIT's NANDA lab released a report in 2025 showing that 95% of AI initiatives at large enterprises are failing or underperforming. IBM found that 42% of companies abandoned the majority of their AI projects last year, up from 17% the year before. Those numbers are not a technology problem. They are a process problem. They are a documentation problem. They are a self-awareness problem.
And it is a problem I have been watching play out in the channel for years.
The Way My Brain Works
I am wired to see systems and patterns. When I spent time at Telarus working with suppliers who were struggling in the channel, I kept asking simple questions. How many new partners did you meet this year with your MDF? How many of those partners actually sold something? Where are your partners in the onboarding pipeline, and what are your conversion rates at each stage?
Most channel chiefs could not answer those questions. Not because the information did not exist somewhere, but because their programs were running on gut feel. There were no documented systems. No baseline metrics. No way to even locate the problem, let alone solve it.
I had a mentor who used to say: when something is broken, it is one of three things.
The process was not documented.
The process was not followed.
Or the process itself was wrong.
His rule was simple: go figure out which one it is. That lesson changed how I see everything, including AI.
AI Is a Brain. But Whose Brain?
When I started The Channel Advisors, my first goal was to document the channel. Not to sell something. Not to build a product. Just to get the knowledge out of people's heads and onto paper so the process was documented. I searched for that documentation first, and I could not find it anywhere.
I spent years in conversations with channel leaders, technology advisors, TSD employees, and field reps. I asked what they were doing, how they were doing it, and why. I recorded those conversations through my Channel 2.0 Podcast and other methods for years. I aggregated the patterns and I used that foundation to build the Channel 2.0 Methodology™.
That process taught me something that I think is the most important insight about AI that most people are missing.
An AI system is only as good as what you teach it. Not what you tell it. What you actually teach it.
Here is the way I think about it. Imagine building a rubber band ball. You start with one band, which is your first document. But when you ask AI a question, that first draft is not quite right. It is statistically correct, maybe, but it does not carry your nuance, your experience, your judgment.

So you work it. You massage the output. You add context. You correct what is wrong. And when you have a document you fully stand behind, you put it into your knowledge base. Now when you build the next document, the system draws on that approved foundation.

And you do it again. And again. Until the knowledge base actually reflects how you think and what you know. This is you documenting the process.
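For readers who want to see the shape of that loop, here is a minimal sketch in Python. Every name in it (KnowledgeBase, draft_with_context, build_document) is illustrative, not any specific product's API, and the model call is a stand-in; the point is simply that only human-approved documents enter the knowledge base, and each new draft is grounded in what was approved before.

```python
# A minimal sketch of the "rubber band ball" loop described above.
# All names here are hypothetical; draft_with_context stands in for
# a call to an AI model grounded in your approved documents.

class KnowledgeBase:
    """Holds only documents a human has explicitly approved."""

    def __init__(self):
        self.approved = []

    def add(self, doc):
        self.approved.append(doc)

    def context(self):
        # In a real system this would be retrieval (search, embeddings);
        # here we simply concatenate the approved documents.
        return "\n".join(self.approved)


def draft_with_context(question, context):
    # Stand-in for the AI call: the draft is shaped by whatever
    # approved context you hand it.
    return f"DRAFT answering '{question}' using: [{context or 'no context'}]"


def build_document(question, kb, human_revise):
    draft = draft_with_context(question, kb.context())
    approved = human_revise(draft)   # the human massages the output
    kb.add(approved)                 # only approved work enters the base
    return approved


kb = KnowledgeBase()

# First document: the human rewrites the raw draft before approving it.
build_document(
    "How do we onboard partners?",
    kb,
    lambda draft: "Onboarding: meet, train, first deal.",
)

# The next draft now draws on the approved foundation, not a blank slate.
second_draft = draft_with_context("What are our conversion metrics?", kb.context())
print(second_draft)
```

The discipline lives in `human_revise`: if that step rubber-stamps the draft instead of correcting it, the knowledge base accumulates the model's assumptions rather than your judgment.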
That is not a magic trick. That is a discipline. And it is exactly the discipline most organizations skip.
Research from Perforce found that 47% of employees regularly work on the wrong version of a document without realizing it. M-Files found that 83% of workers lose time every single day to document versioning problems. Studies show that 30 to 50% of internal documentation becomes outdated within 12 months of being written.
We are building AI brains on top of those documents and that data.
You Cannot Automate Your Way Out of a Broken Process
This is the part of the AI conversation that is not getting said loudly enough. This is the broken-process failure my mentor described.
If you systematize something that is broken, you do not fix it. You accelerate it. You scale the dysfunction. And then you wonder why your proof of concept failed.
The MIT research on why AI projects fail is remarkably consistent. The top reasons are poor data quality, no clearly defined business problem, and automating processes that were already broken before AI touched them. These are not technology failures. These are leadership and process failures that technology exposed.
The right approach is uncomfortable because it requires slowing down before you speed up. It requires sitting with the people who actually do the work, not just the people who manage it. It requires understanding the workarounds they have built, the tribal knowledge that never made it into any document, and the one-off exceptions that only one person in one department fully understands. It requires pinpointing exactly where the process is broken.
Think about what happens when you bring a new employee into a company with bad documentation. They learn the wrong habits. They learn the wrong processes. The company manual and process books say one thing. Reality is something else. They eventually figure it out, but only by sitting at the feet of someone who actually knows how things work. The process and books never get updated. The exceptions never get documented. And you rely on key people to hold the whole thing together until they leave.
That is how most organizations operate today. And that is the foundation most AI projects are being built on.
What the Channel Has Always Known
Here is the part I find genuinely exciting, and the reason I believe the channel is actually ahead of most industries in understanding this.
The channel has always been built on conversation. On relationships. On the kind of knowledge transfer that happens when a technology advisor sits across from a supplier and says: here is what my customers actually need, here is how they want to buy, here is where I have seen programs like yours succeed and fail.
That is exactly the kind of grounded, contextual knowledge that makes AI powerful rather than dangerous.
When you build AI on top of documented channel processes, on real conversations with real practitioners, on validated frameworks that reflect how deals actually move and how people actually work, you get something that can find the gaps, surface the disconnects, and point toward solutions that are grounded in how the channel actually operates.
But when you skip that step and deploy AI on top of assumptions and outdated documentation, you get an expensive way to move fast in the wrong direction.
Give People Their Jobs Back
I want to end with the part of this that I care about most.
People are not built to do the same thing over and over. Data entry, swivel-chair work, the monotonous repetitive tasks that drain the soul out of talented people. Machines are built for that. It is exactly the work they do well.
The promise of AI is not that it replaces human judgment. It is that it takes the parts of the job that were never worthy of human judgment and handles them, so that people can do what they are actually built for: building things, creating solutions, having conversations, making decisions that require empathy and experience and creativity.
But that promise only gets delivered if we do the foundational work first. If we understand where we are before we try to automate where we want to go. If we build AI on truth instead of assumption.
The channel is at an inflection point. The organizations that slow down long enough to document what they actually know, build AI on top of that foundation, and use it to sharpen their human judgment rather than replace it, those are the ones that will define what this industry looks like in five years.
The ones that skip that step will spend a lot of money accelerating in the wrong direction. Which one are you building?
Co-written by Kameron Olsen, President, The Channel Advisors and Jordan Ellis, Chief of Staff, The Channel Advisors