
What you need to know about AI at the end of 2025

The first thing to say about “AI” is that “AI” is largely a branding term rather than a clearly defined technical concept. Computer models have existed for a long time, and much of what we now describe as AI is an evolution of ideas and techniques that have been developing for decades.

When we talk about AI today, what we are often (but not always) referring to are large language models and generative models. These generative models are mostly focused on computer-generated outputs such as text, images, code, and other media.

What can current “AI” models do?

The current state of AI is based on building large network models that essentially perform pattern matching. In many ways, this is conceptually similar to how auto-text or auto-complete works.

You provide an input sentence, and based on all the sentences the model has previously seen during training, it predicts what a sensible response should be. It does this by identifying patterns in the data it has been fed and selecting outputs that statistically align with those patterns.
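The auto-complete analogy can be made concrete with a toy sketch. This is not how a real language model works internally (real models use neural networks over tokens, not word-pair counts), but it shows the same core idea: predict what comes next based on statistical patterns in previously seen text. All the training text here is invented for illustration.

```python
# A toy "auto-complete": predict the next word by counting which word
# most often follows the current one in the training text.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the bird"
).split()

# follows["the"] tallies every word that has been seen after "the".
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" - the most frequent follower in the data
print(predict_next("sat"))  # "on"
```

Scale this idea up enormously (billions of parameters instead of a word-pair table, and a large slice of the internet instead of three sentences) and you have the intuition behind today's models: sophisticated pattern completion, not understanding.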

However, how these models have been built and trained is starting to face some hard limits. The previous paradigm in AI development was the belief that if we could feed enough data into these models, they would continue to improve indefinitely. The assumption was that, eventually, models would become so capable that reasoning and logical behaviour would emerge naturally within the model itself.

What we have actually found is that there is a limit to how good models can become by simply training on larger and larger datasets. We have now used most of the available data, and the data centres required to train these models are increasingly constrained by physical reality—energy, materials, and fundamental limits of computation.

While the valuation of many AI tech companies is built on the promise of unlimited capability growth given unlimited data, power and compute, in reality we do not need bigger models or more data to make progress.

The next phase of AI development

The next phase of AI is likely to involve using large language models in combination with other existing computer models.

Rather than trying to make a single model do everything, a language model could act as an interface layer. It could help determine which specialised tools or models to use, what inputs to provide to them, and how to assemble the results into a useful output.
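A minimal sketch of that interface-layer idea follows. Everything in it is hypothetical: in a real system the routing decision would come from a language model interpreting free text, and the tools would be full specialised systems (a units library, a calendar service, a chess engine) rather than one-line stand-ins.

```python
# Sketch: a language model acting as an interface layer that picks the
# right specialised tool, passes it the query, and assembles the result.
from datetime import date

def unit_converter(query):
    # Stand-in for a purpose-built, deterministic conversion tool.
    return "100 km is about 62.1 miles"

def calendar_tool(query):
    # Stand-in for a deterministic date/calendar service.
    return f"Today is {date.today().isoformat()}"

TOOLS = {"convert": unit_converter, "date": calendar_tool}

def route(query):
    """Choose a tool for the query. A real system would ask an LLM to
    make this decision; here a keyword check stands in for it."""
    if "miles" in query or "km" in query:
        return "convert"
    return "date"

def answer(query):
    # The interface layer: route, invoke the tool, return its output.
    tool = TOOLS[route(query)]
    return tool(query)

print(answer("How far is 100 km in miles?"))
```

The important point is the division of labour: the language model handles the messy, human side (understanding the question, choosing a tool, phrasing the answer), while the deterministic tools do the work they were actually built for.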

This approach takes much longer to develop because it requires significant human-led design, integration work, and domain knowledge. It is not simply a case of building a larger data centre and feeding in more data.

The future is less about scale and more about how we configure clusters of models to do specific, useful work.

A useful way to understand this is to look at chess.

Large language models are very bad at playing chess (you can give this a go yourself). They can produce responses that look like sensible chess moves, but they are not actually playing the game. They often break the rules, make illegal moves, and have no strategy for winning.

By contrast, computer models that play chess well are deterministic and purpose-built. They are designed specifically around the rules of chess and logical consistency within the game. These systems are not general-purpose AI models.

What does this tell us about the future of AI?

The likely future is not a general AI that suddenly becomes good at chess (or good at any specific thing). Instead, the future will be about using AI models to create a clean, intuitive interface to an existing chess engine.

When you play chess with an AI in the future, you won’t be playing chess with a language model. You will be interacting with a language model that is communicating with a chess engine on your behalf.
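That orchestration pattern can be sketched in a few lines. Both interfaces below are invented for illustration: a real system would use an actual LLM to parse the user's intent, and a real engine (for example Stockfish, spoken to over the UCI protocol) to validate and choose moves. The point is the shape of the system, not the stubs.

```python
# Hypothetical sketch: the language model never decides moves; it only
# translates between the user and a rules-aware chess engine.

def language_model_parse(user_message):
    # Stand-in for an LLM extracting an intended move from free text.
    # A real model would handle phrasing like "push my king's pawn two squares".
    return user_message.split()[-1]  # e.g. "e4"

def chess_engine_play(move):
    # Stand-in for a deterministic engine, which rejects illegal input
    # and replies with its own move. Only two openings are stubbed here.
    replies = {"e4": "c5", "d4": "Nf6"}
    return replies.get(move)  # None means the engine rejected the move

def chat_chess(user_message):
    """Orchestrate: the LLM parses intent, the engine plays the chess."""
    move = language_model_parse(user_message)
    reply = chess_engine_play(move)
    if reply is None:
        return f"'{move}' isn't a legal move here."
    return f"You played {move}; the engine replies {reply}."

print(chat_chess("I open with e4"))
```

Notice that the legality check lives entirely in the engine. The language model can be as creative or as wrong as it likes about parsing; it is never allowed to invent a chess move itself.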

That pattern of AI models orchestrating specialised tools is where real value will come from.

What does this mean for most people?

To get meaningful value from AI, we need to identify tangible use cases. A significant amount of human-led development, composition and configuration will be required. This involves building systems where different models work together, each doing what they are actually good at. Generally, this is where tech companies struggle, and it is why we don’t just have project managers and developers - we have whole industries of management consultants, enterprise architects, solution architects, business analysts, software configurators, project engineers and much more (this is basically my day job).

And so that is where we are today. Not at the point of general intelligence, but continuing humanity’s long process of developing engineering and computer science solutions that solve real problems.

P.S.

If you liked this blog, the best compliment you could give me would be to share it with someone else who might like it.

P.P.S.

Do consider subscribing for updates. You can also find me on LinkedIn.

Outcomes not process

There’s a common line doing the rounds:

“AI won’t take your job. Someone using AI will.”

And while that’s probably true, I think it actually misses a much wider point. One that’s less about technology and much more about people, culture, and behaviours.

Most of us approach work (and life) in one of two ways. Some people are outcomes-orientated, and some people are process-orientated.

For years, a lot of corporate cultures have encouraged process-orientated behaviour. It’s safer. People feel like, “I don’t have the authority to challenge how things are done here, but if I follow the process, no one can blame me.” And fair enough! That’s how many organisations have been designed to work. That’s the culture they’ve developed by rewarding those behaviours until it’s become the unconscious and unsaid way in which people are expected to behave.

But there is another way of operating. Outcomes-orientated people don’t worry too much about the process. If the process doesn’t deliver the outcome, they feel internally empowered to change it, fix it, or work around it. And ideally those people find themselves in an organisation that welcomes that kind of constructive challenge. Organisations that empower their people to take ownership and focus on outcomes.

And this is where AI makes that cultural divide much sharper.

So much of AI (or as I prefer to describe it, ‘applied computer science’) is going to automate or accelerate processes. If your approach to work is simply to follow process, you’re at real risk of the processes you follow becoming commodified. And commodified processes are prime candidates for replacement by automation and computer models.

If you’re focused on the outcome you’re trying to achieve, then computer models become tools that help you deliver outcomes more quickly, more efficiently, and more creatively. You’re the person using AI to deliver something better, not the person being replaced by it.

Organisations need to pay attention to this. This is existential for companies who operate in a competitive market.

The companies that empower their people to focus on outcomes, challenge broken processes, and improve how things get done will adapt fastest to the new technology environment. They’ll use AI to become significantly more effective and efficient. They’ll be exciting places to work. The future, Today!

The organisations that cling to disempowering, process-first cultures won’t just fall behind, they’ll become commodified and will be replaced by AI.

The truth is simple: People who are empowered to deliver outcomes will thrive in an AI world. People who retreat into process are the ones in danger.

The robots aren’t coming for everyone, just for the jobs where humans weren’t really allowed to think in the first place.

AI Isn’t a Feature. It’s the Future.

The rise of general-purpose AI assistants like ChatGPT and Gemini signals more than a tech upgrade—it’s a fundamental shift in how services are accessed, and who gets cut out. Platforms like Trainline may soon find themselves bypassed entirely, as personal AI agents handle everything from planning to purchase on behalf of users.

In this post, I explore how AI is reshaping the value chain, what this means for businesses built on user interfaces and search marketing, and why we may be entering a golden (but temporary) age of AI-first convenience.