Foundational Principles of AI and the Channel Go-to-Market Future
By Kameron Olsen, President, The Channel Advisors. April 2026
The Inversion
For decades, the Technology Solutions Distributor channel has operated on a single assumption: the Technology Advisor is the expert. They learn the products. They understand the integrations. They translate feature sets into business outcomes. They carry the knowledge, and the customer trusts them to carry it well.
That assumption is breaking.
There are now over 800 suppliers in the TSD ecosystem across dozens of product segments, and the end customer is demanding more tech stack coverage than any individual advisor can deliver. At the Avant Special Forces event, leadership shared a stat that should alarm every supplier in the ecosystem: the top-performing Technology Advisors are selling an average of just three products per customer. The bottom performers are selling barely more than one. In an ecosystem where customers need solutions spanning networking, security, communications, cloud infrastructure, compliance, and now AI, three products per customer is a fraction of what the customer actually needs.
The average Technology Advisor works with 2.8 TSDs simultaneously, each with its own portal, directory, and supplier catalog. The number of possible product combinations a single advisor might need to understand runs into the thousands. No human being can hold all of that. And the pace is accelerating: technology is becoming more complex at the same time that business operations are becoming more complex. McKinsey's 2025 Global Survey on AI found that 72% of organizations have adopted AI in at least one business function, up from 55% the year prior. Gartner projects that by 2027, 80% of B2B sales interactions will occur through digital channels.
The channel's response has been to train harder. More product sessions. More certification programs. More supplier presentations at TSD events. Forrester's 2024 Channel Partner Benchmark found that partner engagement programs have less than 25% active participation rates, and that number has likely deteriorated further as private equity consolidation has driven experienced advisors out of the ecosystem entirely. The problem is not effort. The problem is architecture. The model asks one person to be the expert on everything, and that is no longer possible.
Move 37
In March 2016, Google DeepMind's AlphaGo system took on 18-time world Go champion Lee Sedol in a five-game match that changed how the world understood artificial intelligence. Go has been played for thousands of years, originating in China, and is vastly more complex than chess: the number of legal board positions exceeds the number of atoms in the observable universe. Beating a professional Go player had been considered a grand challenge in AI for decades, and most researchers believed the milestone was at least ten years away.
During game two, AlphaGo placed a stone on move 37 that no human player would have considered. The commentators were confused. Fan Hui, the European Go champion who had lost to AlphaGo months earlier, said simply: "It's not a human move. I've never seen a human play this move." Sedol left the room before this move was made. When he returned minutes later, he realized the world had changed. Demis Hassabis, co-founder and CEO of Google DeepMind, would later call it "the most beautiful move in the history of Go."
Move 37 has become shorthand for a specific moment in a person's relationship with AI: the point when you realize it is better than you at the thing you do. Not theoretically. Not in a benchmark. In the actual work.
That moment is arriving across every knowledge profession. Screenwriter Paul Schrader, who wrote Taxi Driver and Raging Bull, posted publicly in January 2025: "I've come to realize AI is smarter than I am, has better ideas, has more efficient ways to execute them. This is an existential moment akin to what Kasparov felt in '97 when he realized Deep Blue was going to beat him at chess." He asked ChatGPT for Paul Schrader film ideas. It had better ones than his. He asked it to critique a script he had written years ago. In five seconds, it returned notes as good or better than he had ever received from a film executive.
David Perell, a writer and writing coach with nearly half a million followers, shut down his teaching business entirely. "The world of non-fiction writing has fundamentally changed," he wrote, "and many of the skills I've developed and built in my career are becoming increasingly irrelevant."
Noam Brown, formerly of Meta's AI research team and now at OpenAI, framed it precisely: "Everyone will have their Lee Sedol moment at a different time."
For the TSD channel, that moment is not theoretical. It is here. No advisor, no matter how experienced, can hold the complexity of 800+ suppliers across dozens of segments in their head and match the right solution to the right customer problem faster than an intelligence layer that was built to do exactly that. The question is not whether AI has surpassed human capacity to process that complexity. It has. The question is what the humans in the ecosystem do next.
The New Architecture
What if the advisor did not need to be the expert on every product? What if they needed to be the expert on one thing: getting the observation started?
This is the foundational shift that AI makes possible. The advisor's value is not in knowing every product. It is in having the relationship with the customer and the trust to say: "Let us look at how your organization actually works, and let the data tell us what you need."
The observation replaces the pitch. Instead of walking into a customer meeting with a supplier's slide deck and hoping the product fits, the advisor deploys an intelligence layer that watches how the organization actually operates. Not how leadership thinks it operates. How it actually works, in practice, every day.
MedJay's Voyager platform demonstrated this in a proof of concept deployment at a diesel repair operation. The observation found 47 undocumented workflow loops per employee per day, representing 2.5 hours of lost productivity. Nobody in the organization knew these loops existed. The ROI was confirmed on day one of the fix. The advisor who initiated the observation did not need to know which software would fix the problem. Voyager identified the friction. The advisor owned the relationship, presented the findings, and made the recommendation. That is the model: the advisor starts the conversation, the platform handles the intelligence, and the advisor delivers the outcome.
The Data Foundation
The observation layer only works if the data it ingests is clean, consolidated, and contextually labeled. Most organizations attempting AI adoption today are layering intelligence on top of information chaos: customer data in one system, operational data in another, financial data in a third, and process documentation that has not been updated in years. The AI faithfully reads all of it, contradictions included, and produces outputs that reflect the mess it was given.
This is why the failure rate on AI implementations is so high. The technology is not the bottleneck. The foundation is. Organizations that succeed with AI share a common first step: they consolidate their data, eliminate redundancy, establish a single source of truth, and label their information with enough context that an intelligence layer can distinguish between what matters and what is noise.
In regulated industries, the stakes are even higher. A conversational AI system in financial services or healthcare cannot afford to give inconsistent answers. The tolerance for hallucination is zero. This is where the distinction between generative AI and deterministic AI becomes critical. Generative models are powerful for ideation, drafting, and pattern recognition. But when the answer must be the same every time, the underlying data architecture must be airtight before any intelligence layer touches it.
The Channel 2.0 model accounts for this. Voyager does not simply observe and report. It observes within a structured data framework that ensures the workflows it identifies, the friction it measures, and the solutions it matches are grounded in verified, consistent data. Getting that foundation right is not the exciting part of AI adoption. It is the part that determines whether everything built on top of it actually works.
Outcomes as a Service
The economics of knowledge work are inverting. When a technology can produce a complete software application from a prompt, generate a competitive analysis in minutes that would take a human analyst a week, or build an entire assessment platform between 4 AM and 6:30 AM on a Friday morning, the unit of value is no longer time. It is the outcome.
The World Economic Forum's Future of Jobs Report 2025 projects that 83 million jobs will be displaced globally by 2030, while 69 million new roles will be created. The more significant finding is the nature of the new roles: they are overwhelmingly outcome-oriented, not time-oriented. The market is shifting from paying for hours to paying for what those hours actually produce.
For decades, the tech sector has priced services on effort: hours of consulting, days of implementation, headcount on a project. When the effort required to produce an outcome collapses by an order of magnitude, pricing by effort becomes incoherent. A data consolidation project that used to take sixty days of manual work can now be completed in twenty minutes through automation with an AI layer on top. The value to the customer did not shrink because the effort did. The customer still gets their payroll running on time, their compliance met, their operations streamlined. What changed is the cost of delivery.
This reframes AI from a cost cutting tool to a force multiplier. A 200-person organization where AI multiplies each employee's output by even 30% has effectively added 60 employees' worth of productive capacity without a single new hire. That is not cost reduction. That is growth infrastructure. But it requires leadership that can see beyond "how do we do the same things cheaper" and ask "what could we do that we have never had the capacity to attempt?"
Deloitte's 2025 State of AI in the Enterprise report found that organizations focused on AI for revenue growth outperformed those focused on cost reduction by 2.3x in three-year returns.
The Widening Gap
There is a separation happening that should concern every leader in the technology channel.
On one side are the people and organizations who understand what AI makes possible. They are building systems, automating workflows, creating compound intelligence loops where every interaction makes the next one smarter.
On the other side are the people and organizations who are waiting. Waiting for it to get better. Waiting for someone to show them how. Waiting for the hype to die down so they can figure out what is real.
Stanford's 2025 AI Index Report documents that AI performance on standard benchmarks has improved by an average of 40% year over year for the past three years. The tools are not getting incrementally better. They are getting fundamentally more capable at a compounding rate. Three scaling laws are compounding simultaneously: more compute makes models smarter, post-training reinforcement learning makes them more specialized, and test-time inference (giving them time to reason through problems) makes them more reliable. These three forces are accelerating together, which is why the gap between those who adopt now and those who wait is not linear. It is exponential.
The person who learns to work with AI today has a structural advantage over the person who starts in six months. Not because they know more, but because they have built systems that compound. Six months of compound learning and system building creates a gap that cannot be closed by catching up on tutorials.
This is the Move 37 problem applied to business. The organizations building with AI right now are not just moving faster. They are discovering approaches and capabilities that the organizations standing still will never independently conceive of. Just as the best Go players in the world could not have imagined Move 37 before they saw it, the leaders who are not yet working with AI cannot fully grasp what it makes possible until they experience it firsthand.
The Expectation Problem
The most common reason people give up on AI is not that the technology failed. It is that their expectations were wrong.
They expected a magic button. They typed a sentence, got back something mediocre, and concluded the technology is not ready. This is the equivalent of buying a professional kitchen, microwaving a frozen dinner, and declaring that cooking does not work.
The difference between the people producing extraordinary results with AI and the people who tried it once and walked away is not intelligence. It is iteration. The person who treats the first AI output as a rough draft and spends five minutes refining it gets a result that is 90% of the way there. The person who expects perfection on the first prompt and walks away when they do not get it learns nothing and builds nothing.
A major European fintech company made this mistake at enterprise scale. They announced that AI had replaced 700 customer service employees. The market applauded. Then the cracks appeared: customer satisfaction declined, edge cases went unresolved, and the company quietly began rehiring. The lesson was not that AI cannot do customer service. The lesson was that removing the human from the loop entirely, before the system was mature enough to handle every scenario, was a miscalculation of expectations.
The human in the loop is not a limitation. It is the design. AI produces the first 80% at machine speed. The human provides the judgment, the context, the quality control, and the strategic direction that turns 80% into something exceptional.
The Counterargument That Proves the Point
There is a legitimate objection to this thesis. It comes from experienced channel leaders who have built careers on relationships, trust, and strategic advisory. Their argument deserves to be stated clearly, because it is correct, and because it actually strengthens the case for Channel 2.0.
The channel's most experienced voices argue that the Technology Advisor's value was never product knowledge. It was trust. It was accountability. It was the unbiased, third-party perspective that a customer cannot get from a vendor. Doug Tolley frames it precisely: the most important value the indirect channel brings is being an "unbiased third party view of technology." Rob Butler goes further: the TA's relationship with the client is so deep that "if it fails, then I have failed our friendship."
These are not people resisting change. They are people who understand what drives revenue. And they are right.
The Channel 2.0 model does not propose replacing the Technology Advisor with AI. It proposes freeing the Technology Advisor from the one job they were never equipped to do well: being an encyclopedia of 800+ suppliers across dozens of product segments. Every objection raised by channel leaders points to the same conclusion. The TA's value is trust, relationships, and accountability. Those are inherently human capabilities. But the burden of knowing every product, understanding every integration, and translating every feature set into every possible business outcome across every vertical is not a human capability. That is an information processing problem. And information processing is exactly what AI was built for.
Rob Novack identifies the critical dynamic: the "person, AI, person sandwich." The human starts the conversation. AI processes the complexity. The human delivers the insight, owns the relationship, and takes accountability for the outcome. The TA is no longer buried in portal searches and product comparisons. They are doing the work that actually matters.
The concern about AI hallucinations and unreliable outputs is valid. Enterprise clients in regulated industries cannot tolerate false data. This is precisely why the Channel 2.0 model does not ask anyone to blindly trust AI output. The advisor initiates the observation through Voyager. The analysis surfaces patterns through Clarion. The matching engine connects the right supplier to the right problem through the Pavilion. And Sentinel confirms whether the solution actually worked, with measured data the customer can verify independently. At every stage, the advisor is the one presenting the findings, making the recommendation, and owning the outcome. The platform gives the advisor superpowers. It does not replace the advisor's judgment.
The Channel's Crossroads
The TSD channel stands at a decision point that most of its participants do not yet recognize.
Path one: continue the current model. Train advisors on more products. Host more events. Spend more MDF. Hope that the 80% of inactive partners will somehow become productive with one more certification, one more SPIF, one more supplier lunch.
Path two: invert the model. Stop asking advisors to be product experts. Give them a single tool that observes the customer's reality, identifies the problems, matches the solutions, and verifies the outcomes. Let the advisor focus on what they were always best at: the relationship, the trust, the business conversation. Let the platform handle the intelligence.
This is not a future state. The assessment platform, the observation engine, the merit-based marketplace, and the verification system exist today. The question is not whether this model works. The question is how quickly the channel adopts it, and who gets left behind in the transition.
Move 78
With two games to go in the match and down three games to one, Sedol regrouped. He consulted with other Go experts. He studied what AlphaGo was doing. Then he returned for game four.
Game four started out much like the three before it. AlphaGo appeared to be in the lead. Then Sedol made an unexpected play at move 78 that seemed to confuse AlphaGo and its creators. The DeepMind team raced to the back room to figure out what had happened. AlphaGo's play became erratic. After the match, the team determined that Sedol had exposed a weakness in the system. He had made a move that AlphaGo assigned a 1 in 10,000 chance of a human player making. The same probability, by the way, that had been assigned to Move 37. The move that shook Sedol was equally improbable by the machine's own calculation. When they asked Sedol how he found it, he said: "It was the only move on the board I saw."
The DeepMind team called it a god move. AlphaGo resigned. Sedol won game four.
As the documentary observed: "At least in a broad sense, Move 37 begat Move 78, begat a new attitude, a new way of seeing the game. His humanness was expanded after playing this inanimate creation."
That is the model. Not human versus machine. Human with machine, each pushing the other beyond what either could achieve alone.
The people who will thrive in this transition share a single characteristic. They do not ask "how do I learn AI?" They ask "how do I get to my outcome the fastest, using every tool available to me?" That is a fundamentally different question. The first leads to tutorials and theoretical knowledge. The second leads to building, deploying, making mistakes, and building again. To waking up at 4 AM with an idea and having it live by breakfast because the tools exist to make that possible.
When your Move 37 moment arrives, you have a choice. You can give in to the anxiety and fear, or you can choose to learn and grow and evolve. AI can unlock human creativity and potential, not replace it, if that is what we give it the opportunity to do.
The channel, for all its complexity and legacy and resistance to change, is about to discover that the advisors who embrace this shift will not just survive. They will become the most valuable people in the entire technology sales ecosystem. Because they will be the ones who stopped trying to know everything and started letting the intelligence do what it was built to do, while they focused on what only a human can.
Kameron Olsen is the founder and president of The Channel Advisors, creator of the Channel 2.0 Methodology, and architect of the MedJay Channel 2.0 Platform. He works with technology suppliers, Technology Advisors, and Technology Solutions Distributors to build channel programs grounded in data, observation, and verified outcomes.
Sources:
McKinsey & Company, "The State of AI: Global Survey," 2025. 72% organizational AI adoption rate.
Gartner, "Future of B2B Sales," 2025. 80% of B2B sales interactions projected through digital channels by 2027.
World Economic Forum, "Future of Jobs Report," 2025. 83 million jobs displaced, 69 million created by 2030.
Deloitte, "State of AI in the Enterprise," 7th Edition, 2025. Revenue-focused AI adopters outperform cost-focused by 2.3x.
Stanford University, "AI Index Report," 2025. 40% average annual improvement in AI benchmark performance.
Forrester Research, "Channel Partner Benchmark," 2024. Less than 25% active participation rate in partner engagement programs.
MedJay Voyager Proof of Concept, 2026. 47 undocumented workflow loops, 2.5 hours lost productivity per employee per day, Day 1 ROI confirmed.
DeepMind, AlphaGo vs. Lee Sedol Match, March 2016. Move 37 (Game 2) and Move 78 (Game 4).
Paul Roetzer, "Move 37" Keynote, Marketing AI Conference (MAICON), 2025.
Paul Schrader, Facebook posts, January 2025.
Noam Brown (OpenAI), post on X, January 2025.