Gazing at a Crystal Ball
What is software engineering? A mixture of science and craftsmanship, built on rules and logic. Once done, we expect a program to run the same predictable way, over and over again.
Applications are therefore deterministic: the same input leads to the same output (if we treat state and side effects as inputs). And yet we still have bugs, not because the app magically changes its behavior, but because programs are complex, and because we are humans.
So we call software deterministic, and treat it this way. We build tests to prove it, we write documentation assuming it is always true for the given app version, and we design automation expecting the same inputs to give us the same outputs, over and over again.
And this is why it is called “engineering”, not “craftsmanship”. Even though skill is required, there is a clear and universal truth about how to build software, so you can learn it at university. Well, not really: product development is still a mess. But let’s simplify for a moment.
Well, not anymore.
I’ve got a Crystal Ball
I’m a latecomer to the AI party. I have used it a lot since joining, from ChatGPT to code assistants, but it was never my main tool. I mastered prompts and validations, pipelines and MCP, but for me it was always just “another API call” in terms of software architecture. And as for my own work process, AI was a nice sparring partner, with infinite knowledge yet little context awareness.

And then I got my hands on agentic engineering. It wasn’t that spectacular at the beginning: well, an agent does something with your input, hopefully you like it enough to tell the agent to refine its suggestion, and after some back and forth, it’s good enough (or you are tired enough) to accept the change.
But then I watched people push it really hard: building frameworks with dozens of skills around them, turning their whole process into something different from what I was used to. They are no longer building a machine, but talking to a magical crystal ball. It is not the power of logic circuits they try to tame, but some magical energy they route into this realm.
And here is what I found most scary, and at the same time most amazing:
It is not deterministic
Harnessing the Flow
So, using AI agents for software engineering is really something. It’s shaky, strange, unpredictable. We use so many tricks and so much fine-tuning to make it do what we want, and skill really matters here. It’s real craftsmanship: knowledge is worth little (it gets outdated tomorrow), but experience is everything. Professionals know exactly how to speak to the “orb”: how to structure the prompts, how to avoid silly bugs, and how to burn fewer tokens. And despite some documentation and even books (already outdated by the time you hear about them), it is still far, far away from what we call engineering.
But we are still engineers, right? And we want the systems we build to be not only resilient and maintainable, but predictable, maybe even more than others.
So we have to deal with a non-deterministic thing in our deterministic world, and we need new approaches for making a non-deterministic tool produce deterministic-enough results. One such approach is harness engineering. In contrast to a directive approach, where agents are told exactly what to do (and the quality depends on the accuracy of the instructions), harness engineering focuses on crafting boundaries inside which agents float freely, constantly evaluating the results of their work against the desired state.
For me, it honestly looks closer and closer to project management. And the main skill now is not how to build the system yourself, but how to enable others to do so - by providing good documentation and clean boundaries.
Good documentation - clean, minimalistic, well-structured. You can’t load a huge book into an LLM context (and you can’t do that with a human brain either). Instead, a thoughtful structure with small pages and clean navigation enables agents to load just the chunks of information they need to accomplish the task.
Clean boundaries: transparent and up-to-date rules on what to do and what not to do. If the rules are intricate, the agent will spend its capacity on untangling them, not solving the task. Again, like with humans.
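The harness idea above can be sketched in a few lines. This is a minimal, illustrative sketch, not a real framework: `call_agent` is a hypothetical stand-in for whatever LLM or agent API you use, and the `accept` callback plays the role of the deterministic boundary (a test suite, linter, or acceptance check) that the agent’s output must pass.

```python
from typing import Callable, Optional

def call_agent(task: str, feedback: Optional[str] = None) -> str:
    """Hypothetical agent call; imagine an LLM proposing a solution.

    Stand-in behavior for this sketch: echo the task, and pretend the
    agent improves its answer when it receives feedback.
    """
    return f"solution for {task}" + (" (revised)" if feedback else "")

def harness(task: str,
            accept: Callable[[str], bool],
            max_rounds: int = 3) -> Optional[str]:
    """Run the agent inside boundaries: retry until `accept` passes."""
    feedback = None
    for _ in range(max_rounds):
        result = call_agent(task, feedback)
        if accept(result):       # deterministic check on the outside
            return result
        feedback = "result rejected, try again"
    return None                  # escalate to a human instead of guessing

# The boundary here is a trivial string check; in practice it would be
# a test suite or acceptance criteria evaluated against the change.
result = harness("fix login bug", accept=lambda r: "revised" in r)
```

The point of the design is that the non-determinism stays inside the loop, while everything the outside world sees has passed a deterministic gate.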
So, you are an engineering manager now, congratulations. But there is one more thing to do.
Paradigm shift
The main argument of critics of agentic engineering is: “how can we trust a non-deterministic tool (AI) to design a deterministic system (software)?” But what is software? A tool to fulfill business needs. And by nature, such needs are not that deterministic either. Thus, the major friction point is often translating business needs into engineering jargon, and later back again when getting the tech accepted by the business.
But do we need this at all? I think yes, but not that much. We can think about what the business needs, and verify whether the app fulfills it. The industry mastered the art of acceptance criteria long ago. But besides that, we can try to live with non-deterministic tools building stuff for non-deterministic customer needs, and in the end, we are still in the loop to bring some order into chaos.
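Acceptance criteria are what make this workable: however the code was produced, the criteria are plain executable checks. A minimal sketch, with a hypothetical `apply_discount` function standing in for whatever the agent built:

```python
def apply_discount(price: float, code: str) -> float:
    """Hypothetical implementation (perhaps agent-written)."""
    return price * 0.9 if code == "WELCOME10" else price

# Acceptance criteria, written before (or regardless of) the implementation:
# "a valid welcome code gives 10% off; any other code changes nothing."
assert abs(apply_discount(100.0, "WELCOME10") - 90.0) < 1e-9
assert apply_discount(100.0, "OTHER") == 100.0
```

We don’t need to know how the builder reasoned, only whether the result satisfies the criteria.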
And there is the paradigm shift: from working with predictable tools like software, where (if really needed) you can go all the way down to assembly code to verify your assumptions, we switch to “a crystal ball”, a tool requiring sophisticated skills to master, with its own character and demeanor. You can never be sure what you will get, and now we have to tame this wild beast to solve business problems with tech solutions.
We are now forced to gaze at the crystal ball (Codex/Claude Code), route its magic juice into our realm (GitHub/Bitbucket), and use magic circles (prompts/skills) so the magic will make no mistakes. And we call it Agentic Engineering.