The paradox of Vibe Coding and the risks of automating assumptions
Written by Michiel Kooiman
Programming

This article was originally written in Dutch.
Using AI is surprisingly accessible these days. With a single prompt, you can already build something that works. And precisely for that reason, this shift may be harder on software engineering than on almost any other field.
The promise, or threat, that AI will replace developers is being taken more seriously every day. And to be fair, we now see plenty of examples of people without a technical background shipping useful software. But that raises a fundamental question: how sustainable is this way of working? Are we moving toward a world where technical knowledge becomes unnecessary? And more importantly, how confident can we be in the safety and correctness of systems we do not fully understand?
The short answer: we are nowhere near that point yet.
The framework problem: AI defaults to the familiar
One limitation shows up early: how AI makes choices. Models are trained on existing codebases and public examples, so without extra context they will almost always pick the most common solution. Ask an AI to build a web app and chances are high you will get React.
That is not a problem in itself. React is mature and powerful. But it does reveal something important: AI optimizes for probability, not quality or innovation.
New ideas therefore start at a disadvantage. An alternative framework must not only be technically better. It must also be widely adopted and visible in training data before AI starts treating it as the default. In that sense, AI mostly reinforces what already exists rather than encouraging new directions.
In other words: we are not only automating code, we are also automating the status quo.
The audit gap: what if we no longer see the code?
There is speculation that AI may eventually generate machine code directly. Whether that is realistic in the near term is debatable. But even if it is possible, the question is whether we would actually want that.
Today we still have a crucial middle layer: readable source code. That layer allows us to verify whether a system does what we intended. Software development is not only about building. It is also about understanding and verifying.
If that layer disappears, so does our most important form of control.
What happens when something goes wrong? How do you debug a system whose logic you cannot follow? At that point, only indirect investigation remains: reverse engineering, or asking the same AI again. In both cases, you lose control.
We would not only be giving up the ability to develop systems. We would also be giving up the ability to judge them.
Prompting is programming, but without guarantees
Even though prompting feels different, it is essentially still programming. We give instructions to a system and expect a result. The difference lies in the nature of the compiler: an LLM is not deterministic, and it fills in missing information on its own.
That makes it powerful, but also unpredictable.
This brings familiar problems back in a new form. Where traditional systems were vulnerable to things like SQL injection, we now see similar risks emerging in prompt-based workflows. The boundary between input and instruction is not always clear, especially when external or malicious data is involved.
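To make the parallel concrete, here is a toy sketch of the classic SQL-injection pattern (my own illustration, not from the original article; the table and data are invented). The structural flaw is the same one the article points at: untrusted data crossing into the instruction channel.

```python
import sqlite3

# Toy, self-contained example: an in-memory database with one row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "nobody' OR '1'='1"

# Unsafe: string concatenation lets the input rewrite the instruction.
unsafe_sql = f"SELECT * FROM users WHERE name = '{attacker_input}'"
leaked = conn.execute(unsafe_sql).fetchall()   # the OR clause matches every row

# Safe: a parameterized query keeps data and instructions separate.
bound = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()                                   # matches nothing

print(len(leaked), len(bound))  # → 1 0
```

A prompt assembled by string concatenation has the same shape as `unsafe_sql`: there is, as yet, no widely adopted equivalent of the parameterized query that marks external text as data rather than instruction.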
Research also suggests that a model's behavior can be shaped by indirect or seemingly irrelevant training data. That means its decisions can be influenced by patterns we never explicitly provided, and sometimes cannot even trace back.
In a context where AI generates code, these are not academic nuances. They are potential risks.
Automating assumptions
What is happening, ultimately, may be even more fundamental: we are automating assumptions.
Every time a model fills in missing information, it makes choices on our behalf. About architecture. About security. About intent.
A simple test makes this visible. Give an AI the instruction: “Make Tetris.” You will get a working implementation almost immediately. But what is missing are the questions a developer would normally ask:
- Which platform should it run on?
- How should we store the score?
- Which input methods should it support?
Those questions never come up, because the model is not designed to wait and explore. It is designed to deliver.
For a small project, that may be fine. But software is rarely built for trivial use cases. In serious systems, those questions are exactly what matters.
We would not trust a developer who never asks follow-up questions with critical systems. So why would we extend that trust to AI blindly?
In closing: craftsmanship does not disappear, it shifts
AI is changing our field. That is undeniable. But it does not make craftsmanship unnecessary. On the contrary, it shifts where responsibility sits.
Not only in writing code, but in understanding systems. In asking the right questions. In making assumptions explicit.
Because in the end, one principle remains true: software is only as reliable as the assumptions it is built on.
And that is exactly where the challenge of vibe coding begins.

