Is It Time to Think About Data Models Again?
There was a time when everything in software started on “paper”.
Before writing a single line of code, architects drew diagrams, defined entities, and built data models that captured how a system was meant to operate.
The model came first; the implementation simply followed.
Those models were more than technical artifacts; they were maps of intent.
They described how information flowed, how processes interacted, and how change propagated across the system. In other words, they were a mirror of how the organization itself worked.
The Lost Discipline
Somewhere along the way, data modeling started to feel like a burden.
Speed became the new religion. Teams were encouraged to move fast, to build MVPs, to skip the “theoretical” phase and start coding right away.
The prevailing logic was: “we’ll figure it out later.”
At the same time, modern systems began embedding their own internal logic and models: each database, each service, each SaaS platform defining its own reality.
We stopped designing operations from the top down and instead began adapting ourselves to the models imposed by the systems we adopted.
What was once a discipline of design became an afterthought. In effect, we traded understanding for convenience.
The Inescapable Truth: Everything Still Has a Model
And yet, despite all this, we never truly escaped data models.
Every system, every API, every interaction (even the simplest endpoint) operates on a structured model of reality.
Without it, no machine could process, query, or even understand the data it handles.
The only difference is where those models live: now they are scattered across systems, embedded deep inside proprietary logic, and sometimes even hidden from human comprehension.
We didn’t eliminate modeling; we just lost control over it.
The Return of Structure: Why AI Changes Everything
The rise of AI agents has quietly brought this topic back to the surface.
For an agent to operate meaningfully, it needs context, relationships, and causality. It must understand what an entity is, what it affects, and what it depends on.
In other words, AI needs a theoretical model: a structure to reason over. Without one, even the most capable model becomes a blind executor of tasks.
And this is why we believe systems like Kubling must be seriously considered.
Kubling bridges theory and operation by letting you define lightweight ontologies (entities, relationships, and cause–effect chains) and connect them directly to real systems.
This allows agents (and humans) to interact not with fragmented data sources, but with a unified operational graph that represents your organization as a whole.
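To make that less abstract, here is a minimal sketch, written in Python rather than Kubling's own definition language, of what such a lightweight ontology amounts to: entities bound to the real systems they come from, plus typed relationships and cause–effect edges between them. All names and bindings below are hypothetical.

```python
# A deliberately small ontology: entities, relationships, and cause-effect
# chains, each entity bound to the real system it comes from. Names and
# bindings are hypothetical; this is not Kubling's actual syntax.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    name: str    # e.g. "PaymentService"
    source: str  # the real system this entity is bound to

@dataclass(frozen=True)
class Relationship:
    subject: str    # entity name
    predicate: str  # e.g. "depends_on", "affects"
    obj: str        # entity name

@dataclass
class Ontology:
    entities: dict = field(default_factory=dict)
    relationships: list = field(default_factory=list)

    def add(self, entity: Entity) -> None:
        self.entities[entity.name] = entity

    def relate(self, subject: str, predicate: str, obj: str) -> None:
        self.relationships.append(Relationship(subject, predicate, obj))

graph = Ontology()
graph.add(Entity("PaymentService", source="kubernetes"))
graph.add(Entity("OrdersDB", source="postgres"))
graph.relate("PaymentService", "depends_on", "OrdersDB")
graph.relate("OrdersDB", "affects", "PaymentService")  # a cause-effect edge
```

The point is not the data structure itself but the binding: every entity knows which real system it mirrors, so the graph can stay connected to operations rather than drifting into documentation.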
Beyond Documentation: Modeling as Operational Truth
As noted above, over the last decade data modeling was treated as a preparatory or documentation activity, something that happened before systems were built.
But in modern, distributed environments, models no longer serve only as blueprints. They are increasingly becoming operational artifacts, shaping how systems behave, interact, and evolve.
This shift reframes the question: what does it mean to have a living data model?
Our answer: a model that is not just descriptive, but continuously aligned with real systems, reflecting their current state and interconnections.
In complex organizations, operations themselves can be viewed as dynamic data structures. Each action, event, or configuration change modifies the underlying model.
If that model is explicit and queryable, it becomes a powerful tool for reasoning, enabling both humans and machines to analyze causality, dependencies, and systemic behavior.
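Here is a minimal sketch of that idea, with hypothetical entity and event names: each change event mutates an explicit model, and the model stays queryable for impact analysis.

```python
# Operations as a dynamic data structure: each event mutates an explicit
# model, and the model remains queryable. Names are hypothetical.
from collections import defaultdict

class OperationalModel:
    def __init__(self) -> None:
        self.depends_on = defaultdict(set)  # depends_on[a] = what a depends on

    def apply(self, event: dict) -> None:
        """A configuration change is just a mutation of the model."""
        if event["type"] == "link":
            self.depends_on[event["from"]].add(event["to"])
        elif event["type"] == "unlink":
            self.depends_on[event["from"]].discard(event["to"])

    def impacted_by(self, entity: str) -> set:
        """Reverse transitive closure: everything that depends on `entity`."""
        impacted, frontier = set(), {entity}
        while frontier:
            frontier = {a for a, deps in self.depends_on.items()
                        if deps & frontier} - impacted
            impacted |= frontier
        return impacted

model = OperationalModel()
model.apply({"type": "link", "from": "Checkout", "to": "PaymentService"})
model.apply({"type": "link", "from": "PaymentService", "to": "OrdersDB"})
print(model.impacted_by("OrdersDB"))  # {'PaymentService', 'Checkout'}
```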
Toward Model-Aware Operations
From a theoretical standpoint, operations without models are blind.
They rely on procedural knowledge (scripts, playbooks, manual expertise) without a formal representation of how their components relate.
This makes automation brittle and difficult to reason about.
In practice, however, most systems already rely on implicit models; we just rarely acknowledge them as such.
Take Kubernetes, for example. Have you ever wondered what a “resource” actually is?
Behind the YAML manifests lies a schema: a data model expressed in OpenAPI (JSON Schema) terms.
That schema defines constraints, validations, and relationships between entities. You can’t create arbitrary documents; you must adhere to the model.
In other words, Kubernetes operations are already model-driven; they just use a document model instead of a relational one.
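You can see the same mechanism in miniature with Python's jsonschema package. The toy schema below is a drastically simplified, hypothetical stand-in for the real Kubernetes definitions, which live in the cluster's OpenAPI document, but the behavior is the same: documents that do not fit the model are rejected.

```python
# Documents are validated against a schema, so arbitrary ones are rejected.
# This toy schema is a simplified stand-in for Kubernetes' OpenAPI definitions.
from jsonschema import ValidationError, validate  # pip install jsonschema

deployment_schema = {
    "type": "object",
    "required": ["apiVersion", "kind", "spec"],
    "properties": {
        "apiVersion": {"type": "string"},
        "kind": {"const": "Deployment"},
        "spec": {
            "type": "object",
            "required": ["replicas"],
            "properties": {"replicas": {"type": "integer", "minimum": 0}},
        },
    },
}

doc = {"apiVersion": "apps/v1", "kind": "Deployment", "spec": {"replicas": "three"}}
try:
    validate(instance=doc, schema=deployment_schema)
except ValidationError as err:
    print(err.message)  # "'three' is not of type 'integer'"
```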
This principle extends beyond Kubernetes.
Every API, every platform, every system that enforces structure is effectively exposing a model, even if it’s not formally recognized as one.
The challenge is that these models are fragmented, inconsistent, and locked within their own domains.
In contrast, model-aware operations treat these fragments as part of a larger whole. Infrastructure, services, and processes are not isolated mechanisms; they form an interconnected schema.
Each system contributes to a global ontology: a network of entities and relationships that describe not just data, but meaning.
This idea aligns with ongoing research in semantic systems and self-describing infrastructures. The more explicit the model, the easier it becomes to delegate reasoning to agents, validate changes, or simulate consequences before they occur. The model acts as the bridge between declarative understanding and procedural execution.
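As a toy illustration of that last step, here is a sketch of simulating consequences by walking declared cause–effect edges before a change is applied. The edges and the "critical" set are hypothetical.

```python
# Simulate before execute: walk declared cause-effect edges to preview what
# a proposed change would touch, and stop if it reaches anything critical.
EFFECTS = {
    "rotate_db_credentials": ["OrdersDB"],
    "OrdersDB": ["PaymentService"],
    "PaymentService": ["Checkout"],
}
CRITICAL = {"Checkout"}

def simulate(change: str) -> set:
    """Forward transitive closure over cause-effect edges."""
    seen, frontier = set(), list(EFFECTS.get(change, []))
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(EFFECTS.get(node, []))
    return seen

blast_radius = simulate("rotate_db_credentials")
print(blast_radius)  # {'OrdersDB', 'PaymentService', 'Checkout'}
if blast_radius & CRITICAL:
    print("refusing: this change reaches a critical entity")
```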
The Role of Structure in Autonomous Systems
As autonomous agents and LLM-based operators become part of the technical landscape, the importance of structure becomes undeniable.
An intelligent agent can reason over text, but it cannot act safely in a system it does not structurally understand.
For an agent, every decision depends on a model: an internal representation of what entities exist, what actions are possible, and what effects they produce.
This is not a new concept; it echoes decades of research in knowledge representation and symbolic AI. What has changed is that we now have the computational and linguistic tools to bring those models closer to real operations.
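A classic way to make that representation concrete is a STRIPS-style action model: preconditions that must hold, plus the facts an action adds and deletes. A minimal sketch, with hypothetical facts and actions:

```python
# A STRIPS-style action model: the agent knows which actions exist, when
# they are allowed, and what they change. Facts and names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # facts that must hold before acting
    adds: frozenset           # facts that become true afterwards
    deletes: frozenset        # facts that stop being true

def apply(state: frozenset, action: Action) -> frozenset:
    if not action.preconditions <= state:
        raise ValueError(f"{action.name}: preconditions not met")
    return (state - action.deletes) | action.adds

restart = Action(
    name="restart_payment_service",
    preconditions=frozenset({"PaymentService.unhealthy", "replica_available"}),
    adds=frozenset({"PaymentService.healthy"}),
    deletes=frozenset({"PaymentService.unhealthy"}),
)

state = frozenset({"PaymentService.unhealthy", "replica_available"})
state = apply(state, restart)
print(sorted(state))  # ['PaymentService.healthy', 'replica_available']
```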
The future of autonomy is unlikely to remain unstructured. It may evolve into something increasingly model-driven. The challenge is to design models that are light enough to evolve dynamically, yet rich enough to capture real-world dependencies.
Rediscovering Modeling as a Cognitive Tool
Perhaps the most important realization is that data modeling is not just a technical discipline; it is also a cognitive one.
It forces clarity. It reveals assumptions. It creates a shared language between humans, machines, and organizations.
When done well, a model does not constrain creativity; it enables understanding.
It also gives teams a way to reason about systems beyond code and configuration, a space where operations become explicit.
So maybe it is time to think about data models again, not as relics of the past, but as instruments of comprehension in a world that demands ever more automation and autonomy.