On Data

In complex systems, multiple data points must be synthesized.

Considering one point of the Data Triangle alone affords only a partial, and therefore deeply inexact, view of a complex, human situation. By synthesizing the three points together, however, organizations can illuminate the large, motile truth space of their most wicked problems.

“How do we know when we’re successful? We hear the stories all the time.” So begins the Measurement essay from the 2017 report on second-generation Community Veteran Engagement Boards (CVEBs). That question points to one of the most pernicious issues around creating change in complex environments, especially those without the baseline of profit to guide them: how do teams measure their impact?

By combining readings of authorities on the subject, such as Arup’s City Resilience Index and Marc Bloch’s seminal work The Historian’s Craft, with field experience at the Department of Veterans Affairs, I developed the Data Triangle as a visualization and guide for impact measurement.

It centers on the idea that “truth”, that is, the truest, most accurate reality of a situation, is not a single point, nor is it immovable and perfect. It is, instead, a space, and that space can move around depending on the weight and pull of its bounding points. The bounding points are defined as data sets: qualitative, quantitative, and what we termed “informed experience” in the original CVEBs work,1 a term since refined to historical data.

While qualitative and quantitative data are well known, the historical data point often gives people pause. I include this point as part of the triangle because the other two data sets cannot be contextualized and rationalized without a serious evaluation of what data was previously in use and how it was interpreted. Even though those earlier data sets almost certainly will not map to the exact data any team is gathering, and even though the historical data might have been poorly constructed or collected (or both), it is inevitably part of the problem space.2 From that historical data, we can gather information on how the space evolved, where it might need to go, how far it might be able to travel, and, to a certain extent, how the other data sets can and should be structured.

The more heavily discussed data sets, qualitative and quantitative, are more straightforward. Quantitative data is easy to count and innately backward-facing: after all, something has to have happened already for us to be able to count it. Qualitative data, sometimes referred to as “thick” data, has the breadth to encompass desired futures for the people in the problem space, as well as the memories, impressions, and patterns that they carry forward from their pasts.

In a fair interplay of these three points, measurement of impact can be approached with some accuracy. Even better, this model can be scaled. From national initiatives to local changemakers, thinking about measuring problem spaces as an attempt to access the complex truth of the situation allows us to integrate a multiplicity of data sets into our measurement, instead of artificially applying flat measurements to a situation we have already identified as multifaceted.

  1. To see the display of qualitative data in the CVEBs project, please see the Community Veterans Engagement Board project.
  2. To read a case study on how institutional history can float or sink innovative drives, please see the Customer Experience (CX) Journey project.