The first thing to say about “AI” is that “AI” is largely a branding term rather than a clearly defined technical concept. Computer models have existed for a long time, and much of what we now describe as AI is an evolution of ideas and techniques that have been developing for decades.
When we talk about AI today, what we are often (but not always) referring to are large language models and generative models. These models produce outputs such as text, images, code, and other media.
What can current “AI” models do?
The current state of AI is based on building large network models that essentially perform pattern matching. In many ways, this is conceptually similar to how auto-text or auto-complete works.
You provide an input sentence, and based on all the sentences the model has previously seen during training, it predicts what a sensible response should be. It does this by identifying patterns in the data it has been fed and selecting outputs that statistically align with those patterns.
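To make the auto-complete analogy concrete, here is a deliberately tiny sketch: a bigram model that predicts the next word purely from frequency counts over a toy corpus. Real language models are vastly larger and use learned neural networks rather than raw counts, but the underlying idea - pick the output that statistically aligns with patterns in the training data - is the same.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): predict the next word purely from
# bigram statistics gathered over a tiny "training" corpus.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("sat"))  # "on" always follows "sat" here
```

The model has no idea what a cat is; it only knows which words tend to follow which.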
However, how these models have been built and trained is starting to face some hard limits. The previous paradigm in AI development was the belief that if we could feed enough data into these models, they would continue to improve indefinitely. The assumption was that, eventually, models would become so capable that reasoning and logical behaviour would emerge naturally within the model itself.
What we have actually found is that there is a limit to how good models can become by simply training on larger and larger datasets. We have now used most of the available data, and the data centres required to train these models are increasingly constrained by physical reality—energy, materials, and fundamental limits of computation.
While the valuations of many AI tech companies are built on the assumption of unlimited capability growth given unlimited data, power and compute, in reality we do not need bigger models or more data to make progress.
The next phase of AI development
The next phase of AI is likely to involve using large language models in combination with other existing computer models.
Rather than trying to make a single model do everything, a language model could act as an interface layer. It could help determine which specialised tools or models to use, what inputs to provide to them, and how to assemble the results into a useful output.
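As a rough sketch of that interface-layer idea (all function and tool names here are hypothetical), imagine the language model's job reduced to choosing a tool and passing along the request. Here, crude keyword matching stands in for the model's routing decision:

```python
# A minimal sketch of the "interface layer" pattern. In a real system,
# the routing step would be done by a language model, and each tool
# would be a full specialised system rather than a stub.

def chess_engine(request: str) -> str:
    # Stand-in for a deterministic, purpose-built engine.
    return f"chess engine handles: {request}"

def unit_converter(request: str) -> str:
    return f"converted: {request}"

TOOLS = {
    "chess": chess_engine,
    "convert": unit_converter,
}

def route(request: str) -> str:
    """Crude keyword routing standing in for an LLM's tool choice."""
    for keyword, tool in TOOLS.items():
        if keyword in request.lower():
            return tool(request)
    return "no specialised tool matched; answer directly"

print(route("convert 5 miles to km"))
```

The hard part is not this dispatch loop; it is the human-led work of deciding which tools exist, what inputs they need, and how their outputs get assembled.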
This approach takes much longer to develop because it requires significant human-led design, integration work, and domain knowledge. It is not simply a case of building a larger data centre and feeding in more data.
The future is less about scale and more about how we configure clusters of models to do specific, useful work.
A useful way to understand this is to look at chess.
Large language models are very bad at playing chess (you can give this a go yourself). They can produce responses that look like sensible chess moves, but they are not actually playing the game. They often break the rules, make illegal moves, and have no strategy for winning.
By contrast, computer models that play chess well are deterministic and purpose-built. They are designed specifically around the rules of chess and logical consistency within the game. These systems are not general-purpose AI models.
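To see what "deterministic and purpose-built" means in practice, here is a minimal sketch of rules-based chess logic: enumerating the squares a knight can legally reach. No statistics or training data are involved - the output follows entirely from the rules of the game:

```python
# Deterministic, rules-first logic: every legal knight move from a
# given square on an empty board, derived purely from the rules.

FILES = "abcdefgh"

def knight_moves(square: str) -> list[str]:
    """All squares a knight on `square` can reach on an empty board."""
    file_idx = FILES.index(square[0])
    rank = int(square[1])
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    moves = []
    for df, dr in jumps:
        f, r = file_idx + df, rank + dr
        if 0 <= f < 8 and 1 <= r <= 8:  # stay on the board
            moves.append(f"{FILES[f]}{r}")
    return moves

print(sorted(knight_moves("a1")))  # a cornered knight has only two moves
```

A language model can only produce text that looks like a move; this kind of code can never produce an illegal one.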
What does this tell us about the future of AI?
The likely future is not a general AI that suddenly becomes good at chess (or good at any specific thing). Instead, the future will be about using AI models to create a clean, intuitive interface to an existing chess engine.
When you play chess with an AI in the future, you won’t be playing chess with a language model. You will be interacting with a language model that is communicating with a chess engine on your behalf.
That pattern of AI models orchestrating specialised tools is where real value will come from.
What does this mean for most people?
To get meaningful value from AI, we need to identify tangible use cases. A significant amount of human-led development, composition and configuration will be required. This involves building systems where different models work together, each doing what they are actually good at. Generally, this is where tech companies struggle, and why we don’t just have project managers and developers - we have whole industries of management consultants, enterprise architects, solution architects, business analysts, software configurators, project engineers and much more (this is basically my day job).
And so that is where we are today. Not at the point of general intelligence, but continuing humanity’s long process of developing engineering and computer science solutions that solve real problems.
P.S.
If you liked this blog, the best compliment you could give me would be to share it with someone else who might like it.
P.P.S.
Do consider subscribing for updates. You can also find me on LinkedIn.