
Ways to start thinking about your data

Regardless of an organisation’s size, sector or turnover – data matters. But knowing where to start with your data strategy can feel overwhelming. It’s one thing to realise data’s importance, and quite another to use data strategically.

Why is data important? Well, consider that 90% of all the data in existence today was created in the last five years alone. That gives you a sense of just how much of what we consume now rests on data.

You can think of data almost like a type of fuel. Companies use it to understand actions and behaviours, before converting that understanding into actionable decisions which shape products and services.

The International Data Corporation (IDC) forecasts that by 2025, the total amount of digital data created worldwide will rise to 163 zettabytes.

To give you an idea of scale: a zettabyte is a trillion gigabytes, and if each of those gigabytes were a brick, a single zettabyte could build 258 Great Walls of China.
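If you want to sanity-check that comparison, the arithmetic is simple. A minimal sketch, assuming the oft-cited estimate of roughly 3.87 billion bricks in the Great Wall (our assumption, not IDC’s figure):

```python
# A zettabyte is 10^21 bytes; a gigabyte is 10^9 bytes.
gigabytes_per_zettabyte = 10**21 // 10**9  # a trillion gigabytes

# Oft-cited estimate of bricks in the Great Wall (an assumption, not IDC's figure).
bricks_in_great_wall = 3.87e9

print(round(gigabytes_per_zettabyte / bricks_in_great_wall))  # 258
```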

So data is huge, it’s everywhere and it’s fast becoming a currency far more valuable than the pound or the dollar. To get the most out of it, it has to be managed and used like an asset. 

This means the task of managing data can quickly become complex and daunting. That’s why we’ve put together a short list of considerations, based on our experience, that breaks the task down into manageable chunks and provides clear areas of focus and priority.

 

Map your data

If you know where your data sits, you’ll undoubtedly save money: it’s far more expensive to locate misfiled data than to file it correctly in the first place. According to US non-profit AIIM, organisations spend $20 to file a document, but a much larger $120 to locate one that has been misfiled.

Knowing where your data is means people can use it. This allows leaders in an organisation to make decisions using informed, bottom-up insights. With management on board, an organisation can live and breathe the data strategy it puts in place. In an ideal world, you want your whole board to understand and commit to the costs and returns that better access to your data can bring.

Understanding where your data sits will also allow your team to recognise patterns, helping you avoid an abundance of misfiled or incorrect data. To classify this data, you need to answer three key questions: How sensitive is the data, and what are the consequences if it’s compromised? What ‘type’ of data is it? And how much is it worth to your organisation?
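As a concrete illustration, those three questions map naturally onto a simple classification record per data asset. This is a minimal sketch with invented field names and categories, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1        # little harm if disclosed
    INTERNAL = 2      # operational inconvenience if disclosed
    CONFIDENTIAL = 3  # regulatory or reputational consequences if compromised

@dataclass
class DataAsset:
    name: str                 # e.g. "donor contact records"
    location: str             # where the data sits (system, drive, supplier)
    sensitivity: Sensitivity  # question 1: consequences if compromised
    data_type: str            # question 2: e.g. "personal", "financial"
    business_value: str       # question 3: what it is worth to the organisation

# A register of these records is one starting point for mapping your data.
register = [
    DataAsset("donor contact records", "CRM", Sensitivity.CONFIDENTIAL,
              "personal", "high: underpins fundraising"),
]
```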

 

Separating data strategy from physical systems

Whilst it is easy to think of data just in terms of physical systems, you also need to consider the ‘logical’ architecture and the creators and users of the data. 

To do this, organisations need to be able to prise themselves away from the specifics of a cloud vendor and headache-inducing platform issues and think more about capabilities.

These might be unearthing new data insights, finding new data sets, or maximising the relationships between different data pools. If you can categorise these capabilities and get a view of what you have now, and where there are duplications and gaps, you can use this as an input for developing your data strategy.
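One lightweight way to get that view is to list each capability against the systems that currently provide it, then look for capabilities served by several systems (duplication) or by none (gaps). A hypothetical sketch, with invented capability and system names:

```python
# Map each data capability to the systems that currently provide it.
# Capability and system names are invented for illustration.
capabilities = {
    "unearthing new data insights": ["legacy BI tool", "ad-hoc spreadsheets"],
    "finding new data sets": ["manual desk research"],
    "linking related data pools": [],  # nothing provides this today
}

duplication = [c for c, systems in capabilities.items() if len(systems) > 1]
gaps = [c for c, systems in capabilities.items() if not systems]

print("Duplication:", duplication)  # candidates for consolidation
print("Gaps:", gaps)                # candidates for investment
```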

Limiting your focus to physical architecture can lead to costly issues further down the road. Understanding your high-level data needs from the get-go will help you avoid this, and ensure that any migrations you commit to are time- and cost-efficient from start to finish.

Using stakeholder, marketplace and user research, and mapping the findings to capabilities, you can then carry out specific product reviews to see which platforms or products help you deliver those capabilities most effectively.

Ultimately, you always want to be thinking about logical and physical approaches to data side by side. 

 

Understanding your different data flows

When you look at your data, you need to see both what it is and what it could be. This means building a strategy and plan for both now and the future.

To do this, you need to segment data flows, and there are many ways you could approach this. You could, for example, simply categorise data flows as ‘internal’ and ‘external’. Or you could go deeper by looking at data from the perspective of an end-to-end service: that way, you can look at what data is created internally and externally at each step of the service being executed.
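As an illustration of the service-step approach, a simple flow register might tag each flow with its origin and the step it belongs to. The flow names, steps and fields below are invented for the sketch:

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    name: str          # what the flow carries
    origin: str        # "internal" or "external"
    service_step: str  # where in the end-to-end service it arises
    used_for: str      # what the flow feeds

flows = [
    DataFlow("application form submissions", "external", "sign-up", "eligibility checks"),
    DataFlow("case notes", "internal", "delivery", "reporting to funders"),
]

# Grouping by step shows where external data enters the service.
external_entry_points = {f.service_step for f in flows if f.origin == "external"}
print(external_entry_points)  # {'sign-up'}
```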

Alternatively, you could start thinking about data flows from scratch: imagine the information you’d love to have to run your organisation more effectively, then look at how you might derive it.

Whichever approach you choose, you also need to identify what these flows are, or will be, used for. For example, you might already have accessible dashboards for funders and partners, but you may want to implement a new data API with lower latency and more automation. 

Whilst one will be underpinned by an existing data flow (which could involve lots of manual intervention), the new platform will need new data processes to run on – and it may even need to plug into existing data flows too.

 

Governance of your data without bottlenecking the system

Centralising data can be important: collecting your data from many inputs, but storing, maintaining and reporting on it from one location, whilst accessing it from many points. It’s an admirable goal, but in today’s fast-moving organisations and marketplaces it can also be hugely unrealistic.

It’s natural to want a single source of truth and to lock the data down, but locking data down encourages people to create their own solutions, hidden from the organisation. So you need a balance of principles, tools, approaches and support for people wanting to maximise their use of data.

Over-engineered solutions can lead to bottlenecks in a service. So it’s important to centralise standards, controls and security, whilst resisting an overly authoritarian approach.

The outlook needs to be positive and support change. It needs to operate a model for bringing in new solutions, whilst also having common patterns that can be reused. This means you can make the most of the different data sets you have and unearth the insight which is vital for sound decision making.

 

Minimising your data

Data breaches show the world just how much of people’s information is not under their control. Credit bureau Equifax’s 2017 breach, one of the largest in history, compromised the private records of more than 160 million people.

What breaches like this highlight is the need for organisations to approach data collection with a minimalist mindset. Before any data is collected, firms need to ask why they need the information they intend to collect: how will knowing it help the people the organisation serves achieve their goals? Only then should organisations consider how it will better serve their internal purposes.

Whilst this approach may sound obvious, many firms still follow blind data-harvesting strategies with no clear purpose or benefit. By collecting only the bare minimum of data you require, and by identifying levels of sensitivity in line with regulations like the GDPR, firms can limit the impact of breaches on the scale of Equifax’s.

This approach feeds into data lifecycle management (DLM). This is a process whereby you optimise the useful life of data, and so you only store data for as long as it is useful to your organisation – after which you should delete it.
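To make that concrete, DLM often boils down to a retention rule per category of data: keep records while they’re useful, delete them once the period expires. A minimal sketch, with invented categories and retention periods (real periods should come from your legal and regulatory obligations):

```python
from datetime import date, timedelta

# Illustrative retention periods; real ones depend on your obligations.
RETENTION = {
    "marketing_leads": timedelta(days=365),
    "support_tickets": timedelta(days=730),
}

def is_expired(category: str, created: date, today: date | None = None) -> bool:
    """True once a record has outlived its useful life and should be deleted."""
    today = today or date.today()
    return today - created > RETENTION[category]

# A lead captured three years ago is past its retention period.
print(is_expired("marketing_leads", date.today() - timedelta(days=3 * 365)))  # True
```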

Minimising data also helps the environment. The world’s digital carbon footprint stands at around 1.4% of global carbon emissions, putting it on a par with the aviation industry. So keeping data collection to an essential minimum can help curb this impact.

 

Skilling up your workforce

Becoming a data-driven organisation, if you aren’t one already, will inevitably require you to upskill your staff, as would any major new programme.

And you’re not alone in this need for change. A recent report by business intelligence firm ThoughtSpot found that only a fifth of the world’s workforce feels “truly empowered” and digitally equipped. This fifth, the report said, enjoy better customer experiences and aren’t weighed down by the traditional business silos that typically hinder true agility and transformation.

Part of this means ensuring your data team doesn’t feel like it’s drowning in undifferentiated data-wrangling tasks. This is where robotic process automation (RPA) can come in.

By creating a ‘heat map’, organisations can highlight which areas are ripe for RPA, and which still need to remain partially – or wholly – manual.
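A heat map here can be as simple as scoring each task on a few criteria, say volume, how rule-based it is, and how often exceptions occur, then flagging the high scorers as RPA candidates. The tasks, criteria and threshold below are illustrative assumptions, not a formal methodology:

```python
# Score tasks for automation suitability; each criterion is rated 1-5.
# Tasks, criteria and the threshold are illustrative, not a methodology.
tasks = {
    "copy invoice data into finance system": {"volume": 5, "rule_based": 5, "exceptions": 1},
    "triage complex support queries":        {"volume": 3, "rule_based": 1, "exceptions": 5},
}

def rpa_score(t: dict) -> int:
    # Frequent, rule-based work with few exceptions scores highest (max 15).
    return t["volume"] + t["rule_based"] + (6 - t["exceptions"])

for name, criteria in sorted(tasks.items(), key=lambda kv: -rpa_score(kv[1])):
    verdict = "ripe for RPA" if rpa_score(criteria) >= 12 else "keep (partially) manual"
    print(f"{name}: {rpa_score(criteria)}/15, {verdict}")
```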

For more on how to upskill your team using RPA, check out our charity RPA guide – Making an impact with automation.

 

How we can help

From testing cookie policies, to challenging what data is really needed, to using tools such as data studios to unearth the insights locked in data sets, we’re helping our clients towards better data health. If you’d like help with your approach, get in touch with our team, who can advise on the steps required to improve your data for your organisation and your users.
