‘Garbage in, garbage out’ is a saying often associated with the food we eat and its consequences, but it applies equally to data.
Data quality is a vital attribute of any data project, as the accuracy, completeness, validity, consistency, uniqueness, timeliness, and fitness for purpose of any dataset ultimately influence the decisions we make.
David Howell, Director of IoT and Digital Transformation at Chugai Europe, shared his insights on this topic during a recent episode of our Behind the Data podcast.
With over 24 years of experience at Chugai, Howell spoke at length about the importance of accurate, timely, and reliable data in driving business decisions and achieving digital transformation goals. Read on for his hard-won wisdom on the subject.
The top three obstacles to data quality
According to Gartner, poor data quality costs organizations an average of USD 12.9 million each year. There are three major roadblocks to data quality in most businesses:
Emerging technologies
Over 65% of businesses are now using artificial intelligence (AI) in some capacity, according to McKinsey’s State of AI report. And the integration of AI and machine learning (ML) with business intelligence will make it increasingly difficult for human practitioners to keep up with big data. As real-time and predictive analytics become more mainstream, there will be far more opportunities for consequential mistakes, and (hopefully) far more emphasis on data quality at the outset. As David says: “Just because you can automate and bring information into a warehouse doesn't mean it's good data.”
Privacy concerns
Privacy and protection laws like GDPR have intensified the need for businesses to store accurate data records. These regulations grant individuals the right to access their personal data, which means organizations must be able to locate and retrieve accurate records quickly. Any discrepancy caused by inaccurate or inconsistent data can lead to legal complications and reputational damage, which puts added pressure on those accountable for data quality.
Data governance (or lack thereof)
Data governance practices involve setting internal standards and policies for data collection, storage, and sharing. This ensures consistency, trustworthiness, and compliance with regulations. When data governance is poor, there can be misalignment between systems, resulting in inaccurate, incomplete data that leads to faulty (and potentially harmful) decision-making. A data governance framework should be set firmly in place if data quality is a priority.
Strategies to improve data quality
Improving data quality doesn’t need to be an onerous task. A few simple strategies, baked into how your organization does things, can make a world of difference over the long term. In our recent podcast, David shared some of his favourites:
Establish strict naming conventions
Following a simple yet strict naming convention for data across disparate systems is a powerful way to ensure consistency and traceability. It also reduces the risk of misinterpretations and mistakes between stakeholders.
“We've done it relatively simply,” David told us, describing how Chugai Europe uses clear and consistent naming conventions to manage data across multiple systems. “We generally use the object name, like CRM.prod.accounts or CRM.prod.contacts, to distinguish where data is coming from.”
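To make that concrete, here's a minimal sketch of how such a convention might be checked in code. It assumes a hypothetical <system>.<environment>.<object> pattern modelled on David's example; the regex and function names are illustrative, not Chugai Europe's actual tooling:

```python
import re

# Hypothetical check for a "<system>.<environment>.<object>" convention,
# modelled on David's example "CRM.prod.accounts". The pattern is illustrative.
NAME_PATTERN = re.compile(
    r"^(?P<system>[A-Za-z0-9]+)\.(?P<env>[A-Za-z0-9]+)\.(?P<object>[A-Za-z0-9_]+)$"
)

def parse_dataset_name(name: str) -> dict:
    """Split a dataset name into its parts, or raise if it breaks the convention."""
    match = NAME_PATTERN.match(name)
    if match is None:
        raise ValueError(f"{name!r} does not follow <system>.<environment>.<object>")
    return match.groupdict()

print(parse_dataset_name("CRM.prod.accounts"))
# {'system': 'CRM', 'env': 'prod', 'object': 'accounts'}
```

A check like this can run at ingestion time, so a dataset that breaks the convention never lands in the warehouse unnoticed.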
Ensure data configuration management
Business goals change. New technology can be added to your existing stack. All of this can have an impact on the quality and integrity of your data. David emphasises that the proper documentation and tracking of data sources, transformations, and dependencies is absolutely essential to long-term data quality.
"Configuration documents are important," he says. "I work in IT. I hate documenting stuff. But it is important. Like when you code, you've got to put comments in, because otherwise, you go back to it six months later, and you can't remember what that function does.”
Consider user interface design on customer-facing assets
Doing everything in your power to standardize and structure data on the front end is a powerful way to maintain data quality. “There’s things you can do to make this happen, like make mandatory fields and drop-down lists,” David explained.
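As a rough sketch of what that looks like behind the form, the snippet below mirrors those front-end constraints on the server side: mandatory fields, plus a fixed drop-down list instead of free text. The field names and allowed values are invented for illustration:

```python
# Hypothetical server-side mirror of the front-end constraints David describes.
MANDATORY_FIELDS = {"name", "email", "country"}
ALLOWED_COUNTRIES = {"CH", "DE", "FR", "GB"}  # illustrative subset

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in sorted(MANDATORY_FIELDS - record.keys())]
    if "country" in record and record["country"] not in ALLOWED_COUNTRIES:
        problems.append(f"country {record['country']!r} is not in the drop-down list")
    return problems

print(validate_record({"name": "Acme", "email": "info@acme.example", "country": "Mars"}))
# ["country 'Mars' is not in the drop-down list"]
```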
However, as AI becomes more integrated into data projects, it might be better to offer free-text boxes rather than limiting the variation of responses. “If you've got a free text box, you can glean quite a lot of information from having that text and running it through natural language processing,” he advises.
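As a toy illustration of that idea, the sketch below pulls rough keyword frequencies out of free-text feedback using only Python's standard library; a real project would more likely reach for a dedicated NLP library. The feedback strings and stopword list are invented:

```python
import re
from collections import Counter

# A deliberately simple stand-in for "running it through natural language
# processing": rough keyword frequencies from free-text responses.
STOPWORDS = {"the", "a", "an", "and", "but", "is", "it", "was", "to", "of"}

def keyword_counts(texts: list) -> Counter:
    words = re.findall(r"[a-z']+", " ".join(texts).lower())
    return Counter(w for w in words if w not in STOPWORDS)

feedback = [
    "The delivery was late and the packaging was damaged",
    "Late delivery again, but support was helpful",
]
print(keyword_counts(feedback).most_common(3))
# [('delivery', 2), ('late', 2), ('packaging', 1)]
```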
The support of leadership is the key to data quality success
It’s easy to harp on about data quality, but if no one, least of all your leadership team, can see the tangible benefits, no one will be incentivised to improve things or maintain the momentum required for lasting change.
"Management buy-in is really important because it takes effort,” David says. "It's important to show the benefits of good quality data in driving business decisions.” Only then will data quality initiatives receive the status and resources they deserve.
Upgrade your data integration projects
For more insights on data quality and a look into Chugai Europe’s digital transformation journey, we recommend listening to the full Behind the Data podcast with David Howell.
If you want to know more about how CloverDX can help you improve your data quality, or to schedule a demo, get in touch with us here.