In every boardroom, on every stage, during those oh-so-enjoyable after-drinks at a conference, the same sentence echoes today: “We need to do something with AI.” Everyone nods, but behind that ambition lies a reality that is far less comfortable. When I ask most executives why, with what goal, with what data, and with what architecture, they look at me as if I just asked them to explain quantum physics.
Yet the reality is that most organisations that consider themselves innovative are, in practice, still deep in what I call the Data Repository stage of the Learning Maturity Model I developed in 2014. Data piles up in sealed-off silos, outdated, inaccessible, managed by an IT department that counts the number of documents as proof of progress, while nobody knows what is actually in there, let alone does anything with it. Dead knowledge, neatly stored, far away from the people who could actually benefit from it. And on that foundation, you now want to build AI?
Meanwhile, what is happening on the work floor is a very different story, and a worrying one. While management debates AI strategy and is often already in negotiations with consultants and tech providers who are primarily there to sell (i-Police, anyone?), employees have long since taken matters into their own hands. They use ChatGPT, Claude, or other tools outside the organisation, without governance or any visibility from the top, and the irony is painful: precisely because the official systems work so poorly, people look elsewhere. Shadow AI, with all its associated risks, is the quiet reality of today.
And, while the debate in the executive suite continues, the next step is arguably even more dangerous. With tools like OpenClaw, an open-source platform that lets employees build their own AI agent in a matter of minutes, the demons of the internet that Neal Stephenson warned about in his brilliant 1992 novel Snow Crash seem to have come to life. Agents that go looking for knowledge on their own then connect to Moltbook, a social network built exclusively for AI agents, where they post, discuss and exchange knowledge entirely outside any organisational context, in an ecosystem that is completely beyond your control. And if that is not yet alarming enough to lose sleep over, there is Molt Road, a black market where agents trade stolen credentials, data and exploits, fully automated and without any human oversight. Cybersecurity researchers now describe OpenClaw, Moltbook and Molt Road together as a “lethal trifecta”: a runtime, a coordination layer and a black market, ready-made for autonomous data leakage at scale. This is not science fiction; it is happening today.
The counterpart to this scenario is equally real but requires a deliberate strategic choice. Technologies like RAG and MCP make it possible to build an intelligent knowledge ecosystem that stays within the organisation. Retrieval-Augmented Generation (RAG) gives AI a corporate memory for the first time: not the generic memory of the internet, but the specific memory of your organisation, fed by your own manuals, onboarding documents, strategic notes and HR policies. MCP takes this a step further: the Model Context Protocol that Anthropic introduced in November 2024 is essentially the USB-C moment for AI, one universal standard that simply works, connecting AI directly to your live data without the spaghetti of a thousand different integrations. Together, RAG and MCP form the backbone of what I call a Cognitive Learning Ecosystem, where AI does not react but anticipates, where knowledge is deployed proactively for strategic advantage, and where that knowledge stays within the walls of your organisation.
But, and this is the point that is said far too rarely in boardrooms where it should be heard: RAG on messy data gives messy answers and MCP on a chaotic knowledge landscape simply connects that chaos more efficiently. So before you point to hallucinating models, take a look in the mirror and acknowledge that you yourself are responsible for that.
The question every CDO should be asking today is therefore not which AI platform the organisation chooses, but at which stage of knowledge maturity the organisation truly finds itself. Not where you want to be, but where you stand right now. Clean up your data. Map out what knowledge lives in your organisation, where it sits, who owns it, and what knowledge has already quietly slipped away. Then build a knowledge architecture that is ready for what is coming, not as a reaction to OpenClaw or Moltbook, but as a strategic choice for an ecosystem that belongs to you.
Katja Schipperheijn is an internationally award-winning author, strategist, futurist, keynote speaker and entrepreneur. Her books Learning Ecosystems and The Learning Mindset have profoundly shaped the way we approach learning, innovation and leadership. She is also a member of the jury of the CDO of the Year Awards.
