The Unchanging Nature of Observability and Monitoring

Darkest Days

In looking back over 20 years of building application performance monitoring and management tooling, it seems that not much has changed or been achieved beyond the fact that today’s tooling collects data from far more sources and offers more attractive web-based interfaces and dashboards than before, all of which can be rendered in dark mode. That last one is not a joke; one application performance monitoring vendor, now turned observability platform vendor, just yesterday announced dark mode as a killer feature.

Unchanging Change

Why is observability so much like monitoring back in 2000, when we started designing and developing our first profiling tool, JDBInsight? There is a repetitive cycle, much like what happens in fashion, but this is systems engineering. We suspect that not much has fundamentally changed to the degree we had hoped because everything else was changing within the tooling environment. Tooling has changed, and yet nothing has changed. That does not make sense, or does it?

Change but no Progress

Much of the change touted by product marketing departments relates to engineering efforts to keep tooling applicable in these new environments of containers, cloud, and microservices. Vendors like Instana, Dynatrace, AppDynamics, and NewRelic spend a considerable amount of their engineering budget simply maintaining instrumentation extensions for the hundreds of platforms, products, projects, and programming languages. So when we say that not much has changed, we refer to the positioning of tooling on a map of progress like the one shown below. Nearly all vendors listed above are stuck within the environment segment, unable to deliver real breakthroughs that effectively change the operational monitoring and management landscape.

ENVIRONMENT   COMPONENT     COLLECTIVE
MEASURE       COGNITION     COOPERATE
MODEL         CONTROL       COORDINATE
MEMORY        COMMUNICATE   COLLABORATE

Linking to Controllability

Cognition, control, and communication are still largely deferred and delegated to humans outside of tooling. Application performance monitoring vendors can keep talking up intelligence without ever having to deliver on what many outside the computing industry consider intelligence to be: (re)action appropriate to context, stimulus, and goal setting. There can never be real human-like intelligence delivered as a software service without, at minimum, the ability to link past and predicted observation to controllability, that is, an intervention following awareness of and reasoning about a situation. Today, it is next to impossible to automate the linking of observability to controllability because a shared communication model, internal and external to both tooling and humans, does not exist.
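To make the missing link a little more concrete, the sketch below shows, as a hypothetical Java interface, what a shared model connecting observation, reasoning, and intervention might look like; the type and method names are assumptions chosen for illustration, not an existing API.

```java
// Illustrative only: a single model through which observations are assessed,
// an intervention is chosen, and that intervention is applied, so tooling,
// machines, and humans reason over the same situation rather than raw telemetry.
import java.util.List;

interface ObservabilityToControllability<O, S, A> {
  // Awareness and reasoning: derive a situation from past and predicted observations.
  S assess(List<O> past, List<O> predicted);

  // Decide an intervention appropriate to the situation and the goal.
  A decide(S situation);

  // Controllability: apply the intervention back to the system under control.
  void act(A intervention);
}
```

The point of keeping assessment, decision, and action behind one interface is that the same situational model can be read by humans and acted on by machines, which is what is argued above to be missing today.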

Safer Steering with Signaling

Cognition and control will never emerge from data and details. Traces, metrics, and logs are just too low-level and noisy to serve as an effective and efficient model for tracking, predicting, and learning from human and machine interventions within a system. Regardless, such yesteryear approaches are not sustainable. In the end, observability and controllability need to be embedded directly within the application software itself. Imbuing software with self-reflection and self-adaptability has not happened because observability instrumentation rarely considers the need for local decision making and steering through control valves or similar control theory techniques. Instead of thinking about data, pipelines, and sinks, engineers need to refocus on the significance of signals and how they should be scored in inferring status; otherwise, the next 20 years will look much the same.
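As a rough illustration of signals being scored into a status that then steers a local control valve, here is a minimal Java sketch; the Signal and Status names, the thresholds, and the valve behavior are all assumptions chosen for brevity rather than a reference to any existing library.

```java
// A minimal sketch: services emit coarse signals, signals are scored into a
// status, and the status locally steers a control valve. Names are hypothetical.

enum Signal { SUCCEED, FAIL, ELAPSE }
enum Status { OK, DEVIATING, DEGRADED }

final class SignalScorer {
  private int accumulated;  // decays on success, grows on failure or delay

  Status score(Signal signal) {
    switch (signal) {
      case SUCCEED -> accumulated = Math.max(accumulated - 1, 0);
      case FAIL    -> accumulated = Math.min(accumulated + 2, 10);
      case ELAPSE  -> accumulated = Math.min(accumulated + 1, 10);
    }
    if (accumulated >= 8) return Status.DEGRADED;
    if (accumulated >= 4) return Status.DEVIATING;
    return Status.OK;
  }
}

final class Valve {
  private int permits = 100;  // admitted requests per interval

  // Local steering: the valve narrows or widens based on the inferred status,
  // without waiting for a human to read a dashboard.
  void steer(Status status) {
    permits = switch (status) {
      case DEGRADED  -> Math.max(permits / 2, 1);
      case DEVIATING -> Math.max(permits - 10, 10);
      case OK        -> Math.min(permits + 10, 100);
    };
  }

  int permits() { return permits; }
}
```

Scoring operates on coarse signals rather than raw measurements, so status can be inferred, communicated, and acted on locally instead of being shipped off to a remote pipeline and a human first.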