Rethinking Observability

September 29, 2025

Observability has been part of software from the very beginning, though it wasn’t always called that.

In the mainframe era, engineers relied on system logs, counters, and job accounting reports to understand performance. Simple logging and performance monitors were enough because applications ran on a single machine, in tightly controlled environments.

As we moved into the monolithic era, profiling tools and application logs became the standard. Developers could often debug by reading logs sequentially, or by attaching a profiler directly to the application. The system was still “close” and relatively easy to grasp.

Then came distributed systems—services talking to services, spread across clusters and clouds. Traditional methods broke down. The industry responded with the now-famous three pillars of observability: logs, metrics, and traces (with continuous profilers often added as the “fourth”). These became the foundation for what we call modern observability.

But today, we’ve entered yet another era of software.

  • Applications are not just distributed; they are ephemeral.

  • Deployments happen dozens of times a day.

  • Code is generated by AI, often lacking the “common language” that humans use to reason about it.

  • Systems are increasingly hidden behind AI-managed layers, making them harder to understand.

  • Migrations, re-architectures, and hybrid systems are the norm.

And yet, most “modern” observability tools are still stuck in the logs-metrics-traces mindset. Some now apply LLMs or agentic AI on top of this data, but the raw material is noisy. In fact, continuous collection was designed to capture everything, noise included, because that was the only way to make debugging distributed systems possible. The result? Even noisier insights and, worse, hallucinations when AI models try to interpret them.

The truth is:

You cannot ground an AI model on noisy insights.


What’s needed is a new paradigm of observability:

  • Rethink how we collect data—not just exhaustively, but meaningfully.

  • Rethink how we store data—dense and machine-friendly, not human-noisy.

  • Rethink how we expose data—so LLMs can consume dense signals, while humans get distilled clarity.

  • And most importantly: shift the focus from data to insights.

Humans and LLMs consume data differently. A human cannot parse billions of events, but a machine can (think QR codes: unreadable to the eye, trivial to a camera). Machines excel at dense data; humans need abstraction.
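
To make that concrete, here is a minimal, purely illustrative sketch. The names (DenseEvent, distill_for_human) and the fields are hypothetical, not a real CodeKarma API; the point is only that a machine-facing layer can keep every field of a compact record, while the human-facing view collapses it to a single sentence.

```python
# Hypothetical sketch: the same signal, exposed two ways.
# DenseEvent and distill_for_human are illustrative names, not a real API.
from dataclasses import dataclass


@dataclass
class DenseEvent:
    """Compact, machine-friendly record: every field a model might want."""
    service: str
    route: str
    p99_ms: float       # 99th-percentile latency over the window
    error_rate: float   # fraction of failed requests over the window
    deploy_sha: str     # commit deployed when the window was observed
    window_s: int       # length of the observation window, in seconds


def distill_for_human(e: DenseEvent) -> str:
    """Abstract the dense record into the one line a human actually needs."""
    if e.error_rate > 0.05:
        return (f"{e.service} {e.route} is failing ({e.error_rate:.0%} errors, "
                f"p99 {e.p99_ms:.0f} ms) since deploy {e.deploy_sha[:7]}.")
    return f"{e.service} {e.route} looks healthy (p99 {e.p99_ms:.0f} ms)."


# A machine can ingest millions of DenseEvents; a human reads one sentence.
event = DenseEvent("checkout", "POST /pay", 840.0, 0.12, "9f3c2ab1d4e", 300)
print(distill_for_human(event))
# -> checkout POST /pay is failing (12% errors, p99 840 ms) since deploy 9f3c2ab.
```

The specific schema does not matter; what matters is that density is a feature for the machine consumer and a liability for the human one.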

That’s why we are rethinking observability at CodeKarma. Not because it’s trendy. But because the way software is built, deployed, and maintained has fundamentally changed—and our tooling must change with it.

Code that knows its karma.
