
The Architecture of Uncertainty

We build software with the vocabulary of construction. We talk about "architecture," "foundations," "scaffolding," and "debt." We draw diagrams that look like blueprints, with rigid boxes and straight lines connecting them. We aspire to build cathedrals—systems that are solid, imposing, and designed to last for centuries.

But this metaphor is a dangerous lie. Buildings are constructed on static ground. The laws of physics do not change halfway through pouring the concrete. A skyscraper does not need to suddenly transform into a submarine because the client decided they actually wanted to explore the ocean.

Software, by contrast, is built on shifting sands. The requirements change, the dependencies rot, the browsers evolve, and the business pivots. Yet we continue to design systems as if we are carving in stone, creating rigid structures that become brittle the moment reality diverges from our initial UML diagram.

We optimize for durability in a medium defined by fluidity. We are building fortresses when we should be pitching tents.

The Fallacy of the Perfect Abstraction

Every senior engineer has fallen into the trap of the "Universal Abstraction." You see a pattern emerging—perhaps three different components fetching data in a similar way. "Aha!" you think. "I can unify this."

So you spend a week designing the ultimate `DataFetcher` class. It handles caching, retries, error boundaries, and loading states. It is a marvel of engineering. It is clean, DRY, and elegant. You deploy it, feeling like a god of order.

Three weeks later, a new requirement comes in. One specific component needs to poll for updates, but only when the window is focused. Your `DataFetcher` wasn't designed for polling. No problem, you add a `pollInterval` prop. Then another component needs to debounce the fetch. You add `debounceMs`. Then another needs to cancel the request if a certain prop changes.

Six months later, your elegant abstraction has become a monstrosity. It takes 14 configuration props. It has internal boolean flags like `isSpecialCaseForDashboard`. It is harder to understand than the three duplicate functions it replaced.
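A hedged sketch of where that ends up (every name here is hypothetical, and each optional flag was added for exactly one call site):

```typescript
// What a "universal" fetcher's options tend to accrete into.
interface DataFetcherOptions {
  url: string;
  retries?: number;
  pollInterval?: number;               // added for the dashboard widget
  pollOnlyWhenFocused?: boolean;       // added for one specific component
  debounceMs?: number;                 // added for the search box
  cancelOnPropChange?: boolean;        // added for a tab view
  isSpecialCaseForDashboard?: boolean; // the flag nobody dares remove
}

// Even merging defaults is now a maintenance surface of its own:
// every new flag means another line here, forever.
function withDefaults(opts: DataFetcherOptions) {
  return {
    retries: 0,
    pollInterval: 0,
    pollOnlyWhenFocused: false,
    debounceMs: 0,
    cancelOnPropChange: false,
    isSpecialCaseForDashboard: false,
    ...opts,
  };
}
```

Each flag looked harmless in isolation; the monstrosity is the sum of reasonable-seeming pull requests.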

This is the cost of premature certainty. We assumed the problem space was fully understood, so we built a rigid solution. But the problem space expanded, and our rigid solution became a constraint. We traded flexibility for "cleanliness," and we lost.

YAGNI vs. Future-Proofing

"You Aren't Gonna Need It" (YAGNI) is the most cited and least followed principle in software engineering. We nod along when people say it, but in the privacy of our IDEs, we commit sins of speculation daily.

"I'll make this interface generic, just in case we switch database providers," we tell ourselves. "I'll wrap this third-party library in an adapter, so we're not locked in."

In ten years of engineering, I have never—not once—seen a team swap out a database provider without a massive rewrite, regardless of how many repository patterns they used. The "future" we proof against is rarely the future that arrives.
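For concreteness, here is a minimal sketch of the kind of "just in case" wrapper in question (all names hypothetical). Notice that the interface simply mirrors the one store it actually wraps, so the insulation is mostly ceremonial:

```typescript
type User = { id: number; name: string };

// The interface exists so the database could, in theory, be swapped later.
interface UserRepository {
  findById(id: number): User | undefined;
}

// In practice there is only ever one implementation, and its methods
// mirror the underlying store one-to-one: pure indirection.
class SqlUserRepository implements UserRepository {
  constructor(private readonly rows: Map<number, User>) {}

  findById(id: number): User | undefined {
    return this.rows.get(id);
  }
}
```

When the weirder future arrives, this layer is the first thing you delete.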

The future that *does* arrive is usually weirder. It's not "we need to switch from PostgreSQL to MySQL." It's "we need to move half our data to a graph database and the other half to a ledger, and query them via GraphQL." Your generic SQL interface is useless in that scenario. It didn't protect you; it just added layers of indirection that you now have to peel away.

True future-proofing is not adding more layers. It is writing code that is simple enough to be deleted.

The Virtue of Decomposition

If we accept that uncertainty is the default state, how do we architect for it? The answer lies in decomposition, but not the kind we usually practice.

We tend to decompose by "layer"—Controller, Service, Repository. This is horizontal slicing. It's great for consistency, but terrible for change. If you want to change a feature, you have to touch three or four layers.

A better approach for uncertainty is vertical slicing. Build small, self-contained modules that own their entire stack. If the "User Profile" feature needs to change drastically, you should be able to burn it down and rewrite it without touching the "Checkout" feature.
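Sketched in a single file for brevity (folder paths and names hypothetical), two vertical slices look something like this:

```typescript
// --- features/profile --- (would live in its own folder)
// The slice owns its own types and formatting, end to end.
type Profile = { name: string; bio: string };

function formatProfile(p: Profile): string {
  return `${p.name}: ${p.bio}`;
}

// --- features/checkout --- (a separate folder; shares nothing above)
type LineItem = { sku: string; qty: number; unitPrice: number };

function checkoutTotal(items: LineItem[]): number {
  // A tiny duplicated reduce is cheaper than coupling checkout to a
  // shared utils module that both slices would then depend on.
  return items.reduce((sum, i) => sum + i.qty * i.unitPrice, 0);
}
```

Either slice can be burned down and rewritten without a single import breaking in the other.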

This often means—wait for it—duplication.

Yes, duplication. If two features share code, they are coupled. If they are coupled, they cannot evolve independently. When you don't know how a feature will evolve, coupling it to another feature is a gamble. You are betting that they will evolve in lockstep. This is a bad bet.

Copy and paste is a valid architectural pattern. It isolates uncertainty. It says, "I don't know if these two things are truly the same, so I will treat them as distinct until proven otherwise."

The "Good Enough" Architecture

We are often paralyzed by the search for the "correct" solution. We spend days debating Redux vs. Context, or Tailwind vs. CSS-in-JS. We act as if one choice leads to salvation and the other to damnation.

In reality, both choices lead to maintenance. Every tool has tradeoffs. Every architecture has corner cases where it sucks. The goal is not to find the perfect architecture; it is to find an architecture that is "good enough" for now and easy to change later.

The most resilient systems I've worked on were not the ones with the purest code. They were the ones with the most obvious code. Code that didn't try to be clever. Code that didn't try to hide what it was doing behind six layers of "clean architecture."

When you open a file and you can see exactly what it does—fetch data, transform it, render it—without having to jump to definition ten times, that is resilient code. It is resilient because it is understandable. And because it is understandable, it is changeable.
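As a sketch of what that reads like in practice (the endpoint and the injected `fetchJson` helper are hypothetical stand-ins, passed in so the example stays self-contained):

```typescript
type User = { id: number; name: string };

// The entire pipeline is visible in one function: fetch, transform,
// render. No jumping to definition to find out what happens next.
function renderUserList(fetchJson: (url: string) => unknown): string {
  // 1. Fetch (fetchJson stands in for your HTTP client)
  const users = fetchJson("/api/users") as User[];

  // 2. Transform: sort names alphabetically
  const names = users.map((u) => u.name).sort();

  // 3. Render: a plain HTML string, nothing hidden
  return `<ul>${names.map((n) => `<li>${n}</li>`).join("")}</ul>`;
}
```

Nothing here is clever, and that is the point: a stranger can change any of the three steps on their first day.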

Building for Change, Not Durability

Architecture is often described as "the decisions that are hard to change." This definition implies that our goal should be to make the right decisions upfront so we never *have* to change them.

I propose a different definition: Architecture is the art of making decisions reversible.

Instead of asking "What is the best way to do X?", ask "What is the way to do X that leaves us the most options open?"

  • Don't commit to a massive framework if a library will do.
  • Don't enforce a global schema if local types will suffice.
  • Don't abstract until you have three distinct examples of duplication.
  • Don't optimize until you have a benchmark.

We need to get comfortable with the provisional. We need to accept that the code we write today is not a monument to our brilliance, but a temporary bridge to get us to tomorrow.

The Emotional Weight of Uncertainty

Why do we struggle with this? Because uncertainty is uncomfortable. We become engineers because we like order. We like systems that obey rules. We like the green checkmark of a passing test suite.

Admitting that we don't know what the future holds feels like incompetence. So we build structures to hide the chaos. We build strict types, strict linters, strict architectures. We build a fortress of certainty around ourselves.

But outside the fortress, the world keeps changing. And eventually, the fortress becomes a prison. We can't move because our own rules won't let us.

Embracing the architecture of uncertainty means letting go of the illusion of control. It means building systems that are loose, adaptable, and humble. It means writing code that says, "I think this is right for now, but I'm ready to be wrong."

And strangely, that is the most stable foundation of all.