From Handmade to Thematic: 15 Years of Learning in AI-Driven Operations
Author: Simon Williams, Founder at WovenLight.
It’s hard to believe how much our approach to using data and AI to drive operational performance has changed over the past 15 years. We’ve lived through distinct eras — each with its own mindset, challenges, and lessons.
Along the way, the biggest learning has been that the ability to learn faster than rivals is the most compelling sustainable competitive advantage.
Here we reflect on our journey, charting the evolution from the “handmade” beginnings of data and analytics projects to today’s integrated, thematic use of AI-as-infrastructure, an approach that is as much about culture and workflow as it is about the technology.
One constant is that we have always framed AI not as magic technology, but as tooling to help humans and machines to collaborate better across processes. We hope these reflections resonate for fellow leaders navigating the same transitions.
2009–2014: The ‘Handmade’ Era of Bespoke Builds
When we started our journey, almost every project was a custom-crafted experiment. Tooling didn’t exist. We often had to tackle hard problems for the first time with bespoke models built by PhD scientists freshly arrived from academia, with core skills rooted in statistics and data-science research.
The challenge was perceived mainly as a technical one, with the focus anchored on the models themselves — the thinking being that more accurate algorithms would drive operational performance. As a result, ownership of these early projects often fell to technical teams, who often looked to set up big data repositories as the first step, hoping to bring together enough data to work wonders.
In this period, competitive advantage seemed to hinge on who had the best model. Clients chased incremental improvements in accuracy and held up model performance as the yardstick. The media hype of the day reinforced this mindset — everyone was talking about “big data”. The implicit promise was that sheer volume of data was the magic ingredient for success — if you poured enough data into the black box, valuable insights would pour out.
Looking back, it’s easy to see how naïve some of those assumptions were. There were certainly some early wins with those one-off models, but many remained science experiments that never fully translated into operational impact.
It’s clear that we underestimated the importance of integrating these solutions into real business processes. We learned (sometimes the hard way) that more data isn’t automatically better — what mattered was understanding the context of the data, recognising that not all data is created equally, and having graceful methods for coping with these real-world challenges. This era gave us our first scars and successes, and laid the groundwork for a more scaled approach.
2015–2021: The ‘Industrial’ Phase of Scaling Deployment
Having witnessed the impact of machine learning in handmade projects, the next challenge was how to scale these capabilities across the organisation. The thinking shifted from isolated models to pipelines, platforms and processes — essentially, how to cascade models throughout the enterprise so that many teams could benefit.
To do this, we began assembling interdisciplinary teams, blending data scientists with data engineers, business domain experts, and project managers. This mix of skills was crucial. Obviously algorithms alone were not enough; we needed people who understood the plumbing of data and the business context to make AI work at scale.
The challenge in this phase was as much strategic as technical. We often partnered with Chief Data Officers and built analytics centres of excellence to drive a cohesive strategy. These hubs were chartered to spread best practices and ensure that initiatives aligned with business goals. Getting models out of the lab and into production became the new focus. The competitive advantage was now seen in terms of operational deployment. It wasn’t just about developing a brilliant model; it was about how widely and effectively we could deploy it operationally — whether in supply chain optimisation, commercial excellence, or product development.
During this industrial era, we were integrating a variety of data sources, breaking down silos of information. The ‘variety’ of data — combining customer data with operational data, third-party data, etc. — became the key ingredient we sought, rather than sheer volume. To complement the statistical know-how, we were hiring people who could write code and deploy software. The core profile we prized expanded to include solid software engineering, MLOps, and cloud architecture skills. We had to build robust data pipelines, version our models, monitor their performance, and retrain them regularly. It felt like we were standing up a new kind of factory — one that produced insights and predictions at scale.
This industrial approach taught us a great deal. We learned to avoid pure science experiments that weren’t tied to real business value — every project needed a clear line of sight to operational impact. We discovered the importance of an ‘investor’ mindset; there’s always an S-curve to AI investments. It’s often easy to get the first improvements quickly, but chasing the last few percentage points of model accuracy can come with sharply diminishing returns. Knowing when to stop polishing a model and move on to the next useful application became an important judgment call. Most critically, we saw that technology and data couldn’t work in isolation — success depended on organisational culture and execution. AI had to be taken out of the ivory tower and placed into the hands of people on the front lines. The experience gained helped prepare us for the next era, where AI went mainstream.
2022+: The ‘Thematic’ Era of Workflow Augmentation
AI is now thematic: it’s no longer a niche project or even just a strategic initiative — it’s the central theme of our firm, both in terms of how we partner and how we operate ourselves.
Although keeping up with the pace of technological change is hard, the key challenge is applying AI in a ubiquitous, reliable and trusted way to improve our lives. We talk about “AI-as-infrastructure”, a layer of intelligence woven through every process, product, and service. The thinking today is anchored on augmentation — how AI can support human decisions, streamline workflows, and assist with tasks across the board. Rather than asking, “what model can we build?”, we ask “how can we make this process better for employees and customers?”
One striking change is that the people using AI are no longer specialists on isolated teams — they are everyone. AI-powered tools are being embedded in normal job roles, from marketing and finance to operations and customer service. A call-centre agent might use an AI suggestion tool to better solve customer problems; a factory line worker might be guided by predictive maintenance algorithms; a merchandiser might rely on AI forecasts for inventory.
In this thematic era, the “owner” of AI is essentially the entire organisation, from the CEO to the front line. Leadership sets the vision and ethical guardrails, but adoption happens at the grassroots level in day-to-day work. This broad adoption brings our biggest challenge yet: it’s as much a cultural transformation as a technical one. We must ensure people trust the tools, understand their limits, and are trained to use them well. Change management, not just model management, becomes critical.
Our approach to getting started in this era is different too. Instead of IT-driven data-lake projects or centralised labs, we seek out high-impact use-cases with demonstrable value that can be underwritten. These often focus on reimagining workflows or embedding AI to augment teams. For example, we might integrate an AI agent into the workflow of invoice processing or sales lead qualification — not to take over the job, but to handle the drudgery or provide data-driven insights, thereby partnering with the human. The goal is to make AI so seamless a part of the workflow that it feels like just another dependable tool — almost an invisible helper working alongside us.
In this thematic world we’ve realised that our real competitive advantage lies in our speed of learning. The playing field of algorithms and data infrastructure has levelled — many technologies are open source or widely available, and competitors can copy each other’s model ideas. What can’t be copied so easily is the ability to learn and improve faster.
So we build feedback loops into everything. Every AI system is designed to capture new data from its outcomes — successes and failures — and feed that information back for continuous improvement. We’ve become comfortable with deploying a decent model quickly and then iterating, rather than waiting for perfection out of the gate. This tight feedback loop is the magic ingredient of the thematic era. It’s how we outpace rivals: by outlearning them with each cycle. Previously, we’ve discussed that learning faster than the competition may be the only sustainable advantage — and today we are seeing that play out in real time.
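As a minimal sketch of such a feedback loop (the forecaster here is deliberately simple — a running average standing in for a “decent model deployed quickly”), each real-world outcome updates the model in place, so every cycle of use makes the next prediction better:

```python
class LearningForecaster:
    """Illustrative sketch, not a production system: a simple model that
    improves with every observed outcome instead of waiting for perfection."""

    def __init__(self):
        self.estimate = 0.0   # current best forecast
        self.n = 0            # outcomes observed so far

    def predict(self):
        return self.estimate

    def learn(self, actual):
        # Feedback loop: fold each real-world outcome back into the model.
        self.n += 1
        self.estimate += (actual - self.estimate) / self.n

f = LearningForecaster()
for demand in [100, 120, 110, 130]:   # hypothetical weekly demand figures
    forecast = f.predict()            # act on the current best estimate
    f.learn(demand)                   # then learn from what actually happened
print(f.predict())                    # → 115.0 (running mean of outcomes)
```

The pattern, not the arithmetic, is the point: capture outcomes, update, redeploy, repeat.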
While the media hype nowadays is fixated on “AGI” and sensational stories of AI doing everything, our focus is much more grounded. We know that true transformation comes from ‘human + machine + process’ collaboration at scale, not from some standalone super-intelligence. In practice, this means pairing human judgment and domain knowledge with machine crunching power, all within a well-designed business process. Time and again, we’ve seen that a human-and-machine team outperforms either humans or machines alone. The human provides context, ethical judgment, and creativity; the machine provides speed, consistency, and analytical depth. And if you add strong processes to this mix — clear procedures, oversight, and continuous improvement methods — you get solutions that are not only high-performing but also scalable across the organisation. For us, AI is no longer a mysterious lab project; it’s part of the fabric of how we operate, much like quality improvement or lean manufacturing was in past decades.
We emphasise that these systems are tools — often sophisticated and data-driven, yes, but ultimately tools to aid humans. By reframing AI in this way, we make it less intimidating and more accessible. Employees don’t need to know the intricacies of neural networks, but they do need to develop good judgment in using AI-driven insights. Education and training now focus on data literacy, critical thinking about model outputs, and knowing the right questions to ask. In the thematic era, human judgment is the critical skill — knowing how to interpret what the algorithm says and when to trust it or override it.
Crucially, we’ve come to value capabilities over models. Any given model may have a short shelf-life or can be quickly replicated by competitors. But an organisational capability — say, the ability to continually turn raw data into useful insight in a particular domain — is a far more durable advantage. This is why we view domain-specific ‘vertically opinionated data’ as gold dust — the data stemming from operations and customer interactions, with all its quirks and complexities, is something competitors can’t easily acquire. When you combine that proprietary data with deep domain expertise, you create solutions that are hard to copy. It might not be perfect data (often it’s messy, fragmented, full of real-world noise and human input), but we’ve learned that it’s okay to use “imperfect” data as long as we are transparent about its quality. We think in terms of “data nutrition” rather than “garbage in/garbage out.” Not all data is created equally — some is high-quality, some is junk, and much of it is in between. What’s not okay is to be blind to which is which, so we put in the discipline, governance, and honesty to know our data and improve it over time.
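A “data nutrition” check can be as simple as scoring each dataset on a few quality dimensions rather than accepting or rejecting it wholesale. The field names and thresholds below are hypothetical, purely to illustrate the discipline of knowing which data is which:

```python
def nutrition_label(records):
    """Illustrative 'data nutrition' label (field names are assumptions):
    measure and report data quality instead of rejecting imperfect data."""
    total = len(records)
    key_fields = ("sku", "qty", "price")
    complete = sum(
        1 for r in records
        if all(r.get(f) is not None for f in key_fields)
    )
    fresh = sum(1 for r in records if r.get("age_days", 999) <= 30)
    return {
        "rows": total,
        "completeness": complete / total,  # share of rows with all key fields
        "freshness": fresh / total,        # share updated in the last 30 days
    }

records = [
    {"sku": "A1", "qty": 3, "price": 9.5, "age_days": 2},
    {"sku": "B2", "qty": None, "price": 4.0, "age_days": 45},  # imperfect, but known
]
print(nutrition_label(records))
```

The imperfect row isn’t thrown away; it is simply labelled, so anyone consuming the dataset knows exactly what they are feeding their models.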
Lessons Learned Along the Way
After fifteen years on this journey, we carry a few hard-earned scars that influence us today as we continue to integrate data, machine learning, and AI into everything we do:
- Start with what you have. Don’t wait for perfect data or a mythical “single source of truth.” Use the data and tools at hand to get going and build from there. We made more progress once we dropped the excuse of not having ideal data and instead started learning from whatever data was available.
- Be honest about data quality (think “data nutrition”). The old refrain “garbage in, garbage out” isn’t a reason to shy away from messy data — it’s a call to understand your data. It’s fine to work with imperfect or “garbage” data if you also invest in transparency and rigor, knowing which data is reliable and which isn’t. What’s unacceptable is not knowing what you’re feeding your models.
- Avoid science fair projects. In the early days we witnessed clients falling in love with cool experiments that had no path to impact. As an investor, we insist on a clear line to business value for our AI initiatives. Every project should solve a real problem or improve a real process — otherwise, it doesn’t leave the lab.
- Adopt an investor mindset. We learned to view AI projects like an investment portfolio. There’s always an S-curve of returns: the first gains come quick, but then plateau. We plan for that, capturing the high-return “easy wins” and being strategic about how much to invest in chasing marginal improvements. This ensures we allocate resources to the next big opportunity at the right time.
- Build feedback loops into everything. The companies that learn the fastest will win. We strive to make every initiative a learning loop, where data from execution feeds back into design. These feedback loops allow us to outlearn our rivals by continuously updating our models and approaches with real-world insights.
- Human + machine + process is the winning combination. Rather than viewing AI as a magical black box or a human replacement, we treat it as part of a team. Marry human expertise with machine intelligence and wrap them in a solid process — that’s how you get breakthroughs that are scalable and sustainable. A great model on its own isn’t enough; how we use it matters more.
- Culture and capabilities over tech hype. Finally, we recognise that becoming an “AI organisation” is more about people and culture than about any specific technology. The term “AI” can mislead — this is about learning and improvement. So we focus on building capabilities: training our people, adapting our workflows, and leading change from the top. Good leadership and consistent best practices are what drive repeatable success.
As AI becomes an everyday part of our lives, we believe that capturing its full value means treating it as a cultural journey. We are, in effect, evolving the company’s culture to be data-and-learning driven. This involves updating talent profiles, redesigning processes, and instilling new norms around experimentation, feedback and learning. In short, the real enabler is not the next breakthrough in algorithms, but good leadership and consistent best practice.
Crucially, the capacity to learn and adapt quickly is a superpower in this AI-driven world. As we look ahead, we’ll continue to nurture that capacity — because the organisations that learn faster than the rest will be the ones writing the next chapter of this story.