The existential threat to our organisations is very real and growing. To survive, they must adapt to changes that are more frequent and profound than ever before.
We are all very aware of these assertions. But it is extremely difficult to imagine and deliver solutions that respond to the root causes of our organisations' problems. We know that we need to change something – but what? How can we even begin to articulate the problem when it seems so complex and ever-shifting?
This is the first in a series of articles that breaks down the path to data agility into bite-sized chunks. I have also published datagility, a practical guidebook you may want to look at, which describes how to deliver these chunks in much more detail.
Data Simplicity
To survive, our organisations need to be able to adapt with sustainable rapidity. And since data provides their lifeblood, this demands that their data estates provide them with well-founded data agility. But first we must remove the noise and misconceptions that can cloud our data's meaning, so that we can enable our organisations to adapt rapidly to their changing data demands. This may seem a bit strange, because we all believe we already possess a good understanding of data.
Often, however, our ideas are inextricably enmeshed with our everyday experiences. The simple truth is that when we think about data, we are probably thinking about reports, dashboards, or an amazing spreadsheet that somehow holds together a significant part of our operations.
Thus our ideas about data are already obscured by representations and technology. As such, they cannot provide us with a firm grip on our organisation’s underlying relationship with its data.
Perhaps surprisingly, therefore, our current ideas may well act as a hurdle to progress, and so we must remove this barrier to allow us to redefine the way our organisation relates to its data.
The first enabling step to change is to establish data simplicity.
Back To Basics
Stripping away the noise surrounding our data reveals its essence and origins; we realise that underlying the data of our organisations are events. In fact, the activities undertaken by organisations rely entirely on events. For example, they register a new client, or receive a payment, or run their payroll.
By analysing the events in our organisations, we notice that they are all associated with data. More than this, every event relates to a change from one (defined) data state to another. As an example, figure 1 illustrates a workflow fragment where a client has created a request for an account.

Figure 1 – Events and data
It clearly shows the events and the change in data state that they cause.
The data associated with each event is critical so that the organisation can understand what has occurred and can trigger further events in response. This also requires the organisation to have a shareable memory of all the events, and hence we must record the data associated with them.
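To make this concrete, here is a minimal sketch of an event and its append-only 'shareable memory'. All the names here (Event, EventLog, the state labels) are illustrative assumptions, not taken from this article:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical event record: every event carries data and marks a
# transition from one defined data state to another.
@dataclass(frozen=True)
class Event:
    name: str          # e.g. "AccountRequested"
    prior_state: str   # data state before the event
    new_state: str     # data state the event causes
    payload: dict      # data associated with the event
    occurred_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class EventLog:
    """Append-only 'shareable memory' of everything that has occurred."""
    def __init__(self):
        self._events: list[Event] = []

    def record(self, event: Event) -> None:
        self._events.append(event)   # history is never updated or deleted

    def history(self) -> list[Event]:
        return list(self._events)

log = EventLog()
log.record(Event("AccountRequested", "No Account", "Account Requested",
                 {"client": "C-123"}))
```

Keeping the memory append-only is one simple way to ensure the record of events stays a trustworthy account of what actually occurred.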
To co-ordinate its activities, an organisation must be able to exchange the meaning associated with each event across its functions. This makes the shared understanding of the data exchanges and flows absolutely critical to supporting our organisations' operations. It also illustrates why data is commonly described as providing the lifeblood of our organisations.
All collective endeavours rely on shared understanding and co-ordination of events, and their associated data changes.
In the brief description of data so far, we have already unearthed its three fundamental characteristics:
- Shared Data Meaning
- Data Flows
- Data Stores
Irrespective of any technology, these are the three cornerstone data truths underpinning all the events that drive our organisation's operations, and therefore its survival.
This means that these three must underpin the techniques, practices and frameworks that drive our organisation’s successful data-centric transformation.
Organising Our Organisations
To marshal the activities associated with their events, organisations need to execute processes – whether they acknowledge it or not. The level to which these activities are controlled by formal or informal process definitions varies enormously.
Organisations need to be able to respond to events in an organised manner – otherwise they simply aren’t organisations!
Perhaps surprisingly, many organisations don't bother defining processes at all, their rationale being that "everyone knows what they are doing." And even in those organisations that do have process definitions, they were often defined several years ago, carefully stored away 'somewhere', and never really applied in any meaningful way.
This may all be acceptable for small organisations and start-ups, but it can spell disaster for larger ones. Without defined processes, their activities can become episodic, chaotic, ineffective, and personality-driven.
All these outcomes are anti-agile.

Figure 2 – Operational processes
Defining all the processes and linking them together will construct a business process universe. If this has already been completed, even partially, then we should evaluate it and, if necessary, refresh it. This exercise can also potentially re-invigorate existing activities, allowing our organisations to flourish in neglected areas.
Establishing process definitions can also provide a method to assess the organisation’s overall fitness.
Organisational Fitness
A simple way of measuring an organisation’s current fitness is to compare what it currently does with what it should be doing.
Assessing a process model against a model of the required business functions allows us to create a simple fitness visualisation for our organisations.

Figure 3 – Simple organisational fitness model
The three regions in this model are described below.
Functions – Not implemented
Business functions that are not currently supported by processes are candidates for our organisation to implement. They represent new areas of operational activity and potentially new opportunities. For example, innovative ways of interacting with clients or prospects that could improve our offerings or their experience.
Functions/Processes Overlap
The region where business functions are supported by existing processes represents the execution of required activities, and this is good. But we must provide constant feedback data flows for these processes to ensure they remain as effective and lightweight as possible. We must also use this feedback to learn as an organisation, allowing us to modify the processes to adapt successfully to an unknown future.
Wasteful Processes
Where we execute processes that have no corresponding business function, these represent redundant activities. We must stop wasting any further precious effort on them.
It is easy to ignore these processes, since we all tend to equate the status quo of our organisations with the steady state of necessity. But in this region we must constantly ask 'Why?' of what we currently do. If ignored, the waste of these processes may eventually sound the death knell for part, or all, of the organisation, so our dispassionate appraisal is critical.
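As a toy illustration, the three regions of this fitness model fall out of simple set operations between the required business functions and the processes actually executed. The function and process names below are invented for the example:

```python
# Assumed, illustrative names: the required functions come from a model
# of what the organisation SHOULD do; the processes from what it DOES do.
required_functions = {"onboard client", "take payment", "run payroll",
                      "client self-service"}
executed_processes = {"onboard client", "take payment", "run payroll",
                      "print weekly fax report"}

not_implemented = required_functions - executed_processes   # opportunities
overlap         = required_functions & executed_processes   # keep and improve
wasteful        = executed_processes - required_functions   # stop doing these

print("Functions - not implemented:", not_implemented)
print("Functions/processes overlap:", overlap)
print("Wasteful processes:", wasteful)
```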
Adoption and Adaptation
We know our organisation responds to changes in a more measured and methodical manner by executing processes. But let’s spend a moment to consider these in a little more detail.
Imagine that we need to record a client’s details such as their delivery address. This must be executed as a simple low-level process. Initially the definition may be quite vague, but nonetheless as an organisation we must define it. The process definition requires at least one trigger, some simple processing steps, data inputs and defined outputs.
Associated with its data capture requirements would be some data validation rules, such as a mandatory post code and country code. There may well also be a requirement to validate the post code against a domain of post codes for the appropriate country. These simple rules would form part of our process data quality requirements.
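A minimal sketch of such process data-quality rules might look like the following. The field names, the country code, and the stand-in post code domain are all assumptions for illustration:

```python
# Stand-in domain of valid GB post-code prefixes; illustrative only,
# a real implementation would use an authoritative reference data set.
GB_POSTCODE_PREFIXES = {"SW", "EC", "M", "B"}

def validate_delivery_address(record: dict) -> list[str]:
    """Return a list of data-quality failures (an empty list means valid)."""
    errors = []
    # Mandatory-field rules
    for required in ("post_code", "country_code"):
        if not record.get(required):
            errors.append(f"missing mandatory field: {required}")
    # Domain rule: check the post code against the country's domain
    post_code = record.get("post_code", "")
    if record.get("country_code") == "GB" and post_code:
        if not any(post_code.startswith(p) for p in GB_POSTCODE_PREFIXES):
            errors.append(f"post code {post_code!r} not in GB domain")
    return errors

print(validate_delivery_address(
    {"post_code": "SW1A 1AA", "country_code": "GB"}))    # []
print(validate_delivery_address({"country_code": "GB"}))  # missing post_code
```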
However, this process definition is really an idealised model of what the data capture for the process execution should be. In other words, what we expect all executions of it to conform to. We can clearly see the inputs and outputs for it, but how can we monitor whether the process is effective? How do we know that it is operating optimally?
To analyse the process executions, we must monitor them using a feedback data flow as illustrated below.

Figure 4 – Idealised process definitions and feedback
By creating feedback flows, we can monitor and compare the actual outputs from our process execution with the idealised model’s required output. This comparison data will allow us to determine how effective it is.
What this shows is how well-designed feedback loops allow our organisations to measure how well they operate and also provide valuable insights as to how they can constantly improve.
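As a simple sketch, a feedback flow can compare each actual execution against the idealised model's required output. The fields and records below are invented for illustration:

```python
# Idealised model of what every execution's output should conform to;
# these field names and thresholds are assumptions, not from the article.
IDEAL = {"complete_address": True, "validation_errors": 0}

def effectiveness(executions: list[dict]) -> float:
    """Share of process executions conforming to the idealised model."""
    conforming = sum(
        1 for e in executions
        if e["complete_address"] == IDEAL["complete_address"]
        and e["validation_errors"] <= IDEAL["validation_errors"])
    return conforming / len(executions)

runs = [
    {"complete_address": True,  "validation_errors": 0},
    {"complete_address": True,  "validation_errors": 2},
    {"complete_address": False, "validation_errors": 1},
]
print(f"process effectiveness: {effectiveness(runs):.0%}")   # 33%
```

Tracking a measure like this over time is what turns the feedback flow into a driver of continuous improvement rather than a one-off audit.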
Summary
What we have learned in this blog is that we need to:
- Strip away the noise surrounding our data to its simplest level
- Concentrate our efforts on the three data truths of:
  - Shared Data Meaning
  - Data Flows
  - Data Stores
- Ensure we have defined processes and use this universe to optimise what we do
- Use data feedback flows to analyse how well we operate and drive operational and strategic improvements
The next blog in this Data Topic is datagility 2 – Data Centricity, where we will learn how to become data-centric.