Two very distinct hemispheres seem to be forming within the application monitoring and observability space – one dominated by measurement, data collection, and decomposition, the other by meaning, system dynamics, and (re)construction of the whole. For now, the left hemisphere, the data-centric side, seems to be winning the battle for attention in the theater, though failing in reality and practice.
Somewhere along the way, observability, much like monitoring before it, was hijacked by data-centric vendors peddling their capacity to collect ever-increasing amounts of data without ever having to justify the cost, the complication, or the cognitive burden of all this inefficient and ineffective data collection, transfer, and storage. It is unfortunate that interest in controllability, the primary purpose of observability, did not rise during the same period; it would probably have steered software and systems engineering teams away from an addiction to data driven by fear and pursued with little regard for utility and meaning.
I have worked with and for application (performance) monitoring vendors, now marketing themselves as observability solution providers. In all that time, I never saw a system in place that could accurately track the value derived from the enormous amounts of data collected. A question I often asked was how much of the collected data was novel (informative) and actually consumed by a user, directly or indirectly. No one had an answer, or even a willingness to begin collecting and calculating such information. They seemed to fear the answer, because it would call out so explicitly what many software engineers suspect of tracing, metrics, and logging: the signals are, for the most part, lost in the noise of big data. Each time a proposal was made for a more direct approach to extracting signals at the source, it was shot down by data-centric engineers citing exceptional use cases. Rarely did the discussion ever reach the question of how to attach meaning, learning, and action to the datasets surfaced in the product and presented to a user.
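The question of novelty and consumption could at least be made measurable. The sketch below is entirely hypothetical and not modeled on any vendor's API: it tracks, for a stream of collected telemetry, how much was novel (not a duplicate of an already-stored payload) and how much was ever consumed by a user or downstream process.

```python
from dataclasses import dataclass, field

@dataclass
class TelemetryLedger:
    """Hypothetical accounting for collected telemetry: how much of it
    is novel (not a duplicate of a stored payload) and how much is
    ever consumed, directly or indirectly, by a user."""
    seen_payloads: set = field(default_factory=set)
    consumed_ids: set = field(default_factory=set)
    collected: int = 0
    novel: int = 0

    def collect(self, item_id: str, payload: str) -> None:
        """Record an incoming telemetry item and whether it adds anything new."""
        self.collected += 1
        digest = hash(payload)
        if digest not in self.seen_payloads:
            self.seen_payloads.add(digest)
            self.novel += 1

    def consume(self, item_id: str) -> None:
        """Record that a stored item was actually read by someone."""
        self.consumed_ids.add(item_id)

    def ratios(self) -> tuple[float, float]:
        """Return (novelty ratio, consumption ratio) over everything collected."""
        if self.collected == 0:
            return 0.0, 0.0
        return self.novel / self.collected, len(self.consumed_ids) / self.collected
```

Run over a day of ingest, the two ratios would answer the question no one wanted to ask: what fraction of the pipeline carries signal anyone ever looks at.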
Data collection is easy and lazy; meaning construction is hard and demands vigilance. What is left today is a vast wasteland of data sold as an investment in a future of unknown unknowns. That investment is now of junk status – data junk, that is. We have a modern California Gold Rush, in which, once again, the merchants (of data tooling) make excessive profits over the miners (hapless users). The swirling of the pan has been replaced by ad hoc query and analytics interfaces used to search for a golden signal, with little guidance from, or understanding of, the whole and its dynamics. Our attention, a most precious and protected resource, is wasted on tiny deposits of data swishing around in the pan. Vendors have cleverly shifted the costs onto users, and users seem content with simplistic though not very fruitful forms of data manipulation. One prominent observability vendor goes so far as to emphasize the doing (with data) in its marketing material, as opposed to what we would expect of information and intelligence – sensing, signifying, and synthesizing.
With the left hemisphere, it is all about narrowing things down to a certainty – a fact related to a prior happening. In contrast, the right hemisphere accepts uncertainty and, from a systemic perspective, is open to the possible dynamics of process and flow. The left is mostly request- and payload-oriented; the right, conversational and contextual. The right lives more in the present, able to quickly and intuitively assess the overall system and its processes in the now, whereas the left busies itself creating splintered (traced) narratives of the past. The left seeks out a single truth, or root cause, by way of the data; the right embraces the multiple potentials and possibilities in the dynamics it attends to. The right looks to reconstruction and meaning for greater understanding; the left prioritizes the collection of data over all else. The left is about the What, the right the How. The right is more adept at identifying problems; the left aims to detail a problem once the right has found it. For the right, situational awareness is of utmost importance and prominence; for the left, it is event data mostly devoid of any overarching context. The left prefers the sequential, tracing and logging being the main examples, while the right is far more parallel in its processing. The right is concerned with the synthesis of information; the left is fixated on analysis and categorization. The left sees the trees, call them trace trees; the right sees the forest, an ecological system. The right is attuned to consistency – fast approximations of the state of the system and its environment – whereas the left favors confirmation, incredibly slow and laborious. The left relies on users reactively searching a data (re)collection, while the right actively suggests, based on pattern recognition.
We need both the left and the right, but up to this point we have all but neglected one side, the right, in favor of the other, the left. We need controllability to go hand in hand with observability, keeping checks in place on unsustainable resource consumption and costs. We need componentized bundles of observability and controllability distributed to the edges and subnetworks with local autonomy. The right needs to regulate far more closely the attention and other resources given to the left – the bigger picture must return, but not one consisting of data-laden dashboards or ad hoc querying tools. The days of data excess must come to an end, with a return to simplicity and significance; otherwise, we will be left wandering aimlessly in a data fog that clouds judgment and hinders action.
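What such a componentized bundle might look like can be sketched in a few lines. Everything here is illustrative: the class, names, and thresholds are assumptions for the sake of the sketch, not any existing product. A component observes a local signal, acts locally when it drifts, and emits a single significance event upstream instead of a stream of raw samples.

```python
from statistics import fmean

class EdgeBundle:
    """Illustrative sketch only: observability and controllability bundled
    in one component with local autonomy. Names and thresholds are
    assumptions, not modeled on any vendor's API."""

    def __init__(self, window: int = 10, ceiling: float = 0.8):
        self.window = window          # rolling window of local samples
        self.ceiling = ceiling        # sustained-utilization ceiling
        self.samples: list[float] = []
        self.events: list[str] = []   # significance events sent upstream

    def observe(self, utilization: float) -> None:
        """Sample a local signal; control is applied here, not centrally."""
        self.samples.append(utilization)
        self.samples = self.samples[-self.window:]
        if len(self.samples) == self.window and fmean(self.samples) > self.ceiling:
            self.act()

    def act(self) -> None:
        """Local control: throttle in place and emit one meaningful event
        upstream, rather than shipping every raw sample to a central store."""
        self.events.append("throttled: sustained utilization above ceiling")
        self.samples.clear()
```

The point is not the throttling logic, which is trivial, but the shape: observation and control live together at the edge, and only significance, not bulk data, travels upward.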