Agentic AI: All Plan, No Process

For the last few years, AI development has followed a familiar script: a new model is released, bigger than the last, promising smarter reasoning, faster responses, and broader capabilities.
Now, it’s starting to feel like we’re squeezing the last drops out of the sponge.
In recent months, OpenAI, Anthropic, DeepSeek, Google DeepMind, Meta and others have all launched their most advanced models yet. Some boast “thinking” capabilities, others feature massive context windows, multimodal input, or reasoning across tasks. But for all these architectural leaps, the response has been… muted.
Consider OpenAI’s most recent flagship, its biggest and most experimental model to date, designed to solve problems through planning and multi-step reasoning. As Ars Technica bluntly put it: “It’s a lemon”.
So, what’s going on? Have we reached the end of the road for new models? Or are we just expecting the wrong things from them?
Why Bigger Isn’t Better Anymore
For years, AI progress came from scaling. More data. More compute. Larger models. Bigger breakthroughs.
That curve is flattening fast.
Most of the internet has already been consumed. What’s left is increasingly behind paywalls, buried in enterprise systems, or, more concerning still, generated by other AIs. We’re now training models on synthetic content, like students copying each other’s homework without ever getting it marked by a teacher.
And even when new human-created content shows up, it’s often… not exactly “enterprise-grade”.
Sure, we could feed the next model another 100,000 posts debating whether the onions go on top or bottom of a Bunnings snag, but will that help it review a legal contract? Or analyse a supply chain bottleneck?
Unlikely.
Training costs are exploding, gains are shrinking, and usefulness is increasingly questionable. It’s not that AI is done; it’s that scaling alone no longer delivers meaningful value.
Agentic AI: Slower, Not Smarter
One of the latest trends in AI is what many are calling “agentic models” or “thinking AI.”
Instead of solving a task in one go, like writing an email or generating a report, these models attempt to break it into smaller steps: plan, search, reason, revise, respond. It is a shift from instant answers to structured, sequential thinking.
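Under the hood, most of these agents are just a loop around the same model. Here is a minimal sketch of that pattern; `llm` and `run_tool` are hypothetical stand-ins for illustration, not any vendor's real API:

```python
# A minimal agentic loop. `llm` takes a prompt string and returns text;
# `run_tool` executes whatever action the model proposed (search, code, etc.).
# Both are hypothetical stand-ins, not a real vendor API.

def agent(task: str, llm, run_tool, max_steps: int = 5) -> str:
    scratchpad = [f"Task: {task}"]
    for _ in range(max_steps):
        # Same model on every step; only the accumulated context changes.
        step = llm("\n".join(scratchpad) + "\nNext action?")
        if step.startswith("FINAL:"):
            return step.removeprefix("FINAL:").strip()
        scratchpad.append(f"Action: {step}")
        scratchpad.append(f"Observation: {run_tool(step)}")
    # Out of steps: force an answer from the same model anyway.
    return llm("\n".join(scratchpad) + "\nGive your best final answer.")
```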
But here is the catch. They are still using the same underlying model, the same brain, just stretched out across more steps.
They do not actually understand the problem better. They just take longer to solve it, following a more orderly process. Sometimes that helps. Often, it just burns more compute.
And when they fail, they still fail confidently. They still hallucinate facts and miss context. They are just slower, and more expensive, when they do it.
Because fundamentally, these models do not understand anything. They are just pattern matchers, not thinkers. They do not know what is true. They know what sounds plausible next. Giving them extra steps does not make them smarter. It just makes them sound like a flat-earther with a PowerPoint.
Where Agentic AI Actually Works
To be fair, agentic AI is not useless. Far from it.
The newest models can do some genuinely impressive things. They can search the web, analyse documents, interpret images, run code, and string all of that together in a single flow.
That flexibility makes them incredibly useful for exploratory, loosely defined tasks: the kind of work where there is no fixed path or measurable process, and “close enough” is good enough.
You can upload a long contract and ask it to highlight key clauses. Sketch out a rough strategic idea and get a few directions to explore. Drop in a pile of customer feedback and ask it to surface themes or next-step suggestions.
It is great for momentum. For moving from zero to “okay, now we are getting somewhere.”
But that is very different from running a compliance check, reviewing a building application, or analysing a supply chain risk.
This is where the limits start to show. The model does not truly understand your process. It just guesses a little more carefully.
In creative work, that might be fine. In structured enterprise processes, it is not.
Agentic AI is close. It is getting faster and more capable with every release. But when it comes to the boring, business-critical work that has to be done right every time, it is still not ready to take the wheel.
It is not ready to run the BBQ, but it might be handy with the onions.
Building Experts, Not Generalists: Where AI is Really Headed
Today’s foundation models are impressively broad. They can translate over 200 languages, debug complex code, generate images from text, write poetry, and summarise legal documents… all at the same time!
It’s remarkable, but also unnecessary for most use cases.
If I just need an AI to review a building application against a set of construction codes, do I really need it to also explain quantum physics in pirate speak?
That’s why the real innovation now is in refinement, not reinvention. Instead of chasing raw capability, the AI field is focused on making existing models more predictable, efficient, and domain-specific.
This is where techniques like fine-tuning, reinforcement learning from human feedback (RLHF), and model distillation come in. They aim to convert broad, general-purpose models into smaller, reliable tools that solve real problems. They don’t make the model “smarter”; they make it useful.
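Distillation is a good example of what “refinement, not reinvention” looks like in practice. Here’s a minimal sketch of the classic distillation loss (after Hinton et al., 2015) in PyTorch; the teacher, student, and data wiring around it are assumed:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Push a small student model toward a large teacher's soft predictions."""
    t = temperature
    soft_targets = F.softmax(teacher_logits / t, dim=-1)   # teacher's "judgement"
    log_probs = F.log_softmax(student_logits / t, dim=-1)  # student's attempt
    # KL divergence between the two, scaled by t^2 as in Hinton et al. (2015).
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * (t * t)

# In a training loop (sketch):
#   teacher_logits = teacher(batch).detach()  # big general-purpose model
#   student_logits = student(batch)           # small domain-specific model
#   loss = distillation_loss(student_logits, teacher_logits)
```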
Because most businesses don’t need one AI that tries to do everything. They need a set of reliable tools that are great at one thing, not a jack-of-all-trades who tackles every task with confidence, delivers errors with style, and adds “strategic thinker” to their email signature.
Where’s the Business Case?
Here’s the bigger issue: where’s the actual value?
Most organisations have spent years refining core processes like logistics, payroll, compliance and customer service through a mix of software, structured workflows and institutional knowledge. These systems are mature, efficient and trusted. Most importantly, they are tightly scoped, with a clear purpose and well-defined boundaries.
Now AI steps in and says: “Let me rethink all of that for you.”
But do you really want a model reinventing the process on every run, when what you actually need is something that just makes the sausage the same way, every time… without deciding to try pineapple instead of onions?
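To make that concrete, here’s what a tightly scoped check looks like in code. The rule values and field names below are invented for illustration; the point is that the process is fixed and auditable, with no room to improvise:

```python
# A tightly scoped process: fixed rules, fixed order, same result every time.
# The rule values and field names are invented for illustration only.

RULES = {"max_height_m": 8.5, "min_setback_m": 4.0}

def check_application(doc: dict) -> list[str]:
    """Run the same checks, in the same order, on every application."""
    failures = []
    if doc["height_m"] > RULES["max_height_m"]:
        failures.append(f"Height {doc['height_m']} m exceeds the {RULES['max_height_m']} m limit")
    if doc["setback_m"] < RULES["min_setback_m"]:
        failures.append(f"Setback {doc['setback_m']} m is under the {RULES['min_setback_m']} m minimum")
    return failures  # an empty list means the application passes
```

A model might help extract `height_m` from a scanned form, but it never gets to decide what the rules are.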
The pitch for AI needs to evolve from replacing human thought to reinforcing proven systems. Right now, many so-called “intelligent agents” are solving problems no one asked for, or creating new ones with hidden risk and unpredictable behaviour.
Beyond the Hype: Where AI Gets Real
So, is this the end of new models?
Not quite. But it is the end of assuming that bigger models automatically deliver bigger value.
The next chapter of AI won’t be won by scale; it will be won by context. The real opportunity lies in applying AI to the systems, processes and decisions that businesses already understand and trust.
That means putting AI in the hands of the domain experts and problem-solvers who know how things actually get done, not to replace them, but to amplify them.
We don’t need general-purpose intelligence riffing on how your workflows might work. We need purpose-built tools that fit into real processes, adapt to real constraints, and deliver real, trusted outcomes.
That’s where Generative AI goes from hype to help, when it’s guided by the people who know what a good result looks like.
We don’t need AI creating a deconstructed Bunnings snag… just stick to the process of bread, sausage, sauce with onions on top. The way nature intended.
Want AI that knows its job and fits your business? Here’s how we help