Data quality: a strategic challenge for organizations

Nicolas Gidas

Architect

Published: 3 July 2025
  • Governance
  • Article

Data: an information product

Let’s take a moment to compare data quality to the quality of a manufactured product. For example, when we manufacture a cell phone or a car, we set quality standards based on user expectations (battery life, safety, durability, design) while also complying with rigorous industrial standards, current regulations and industry best practices. These standards ensure that the product is reliable, efficient and safe. Quality control processes are implemented throughout the production chain to detect and correct defects before the product reaches the consumer. Without these measures, the risk of recalls, incidents, complaints or reputational damage increases considerably.

The same logic applies to data: it is an information product that must meet precise quality requirements depending on its use. If an organization fails to establish clear standards and adequate controls, it exposes itself to strategic errors, biased decisions and a loss of stakeholder confidence.

Data quality refers to the set of characteristics that determine the extent to which data can be used reliably, efficiently and in line with the organization’s objectives. Quality data is data that is accurate, complete, relevant, consistent, timely and easily accessible.

 

Data has become an asset as valuable as it is strategic. It is a cornerstone of digital transformation, because automated processes, algorithms and analytical applications can only deliver relevant results based on reliable data. Conversely, the accumulation of unreliable or poorly structured data constitutes an informational debt that is difficult to absorb. Strengthening data quality is therefore a strategic lever for improving an organization’s overall performance and operational resilience.

 

This concern is confirmed by recent studies. For example, the annual “BARC Data, BI and Analytics Trend Monitor” survey reveals that data quality management ranks second among organizations’ priorities, just after data security and confidentiality. This illustrates the extent to which companies today recognize the strategic value of reliable data, and the urgency of addressing data quality issues as major organizational challenges.

 

Quality is also a guarantee of reliability for external stakeholders: partners, investors, regulatory authorities, or even citizens in the public sector. As systems become more interconnected and value chains increasingly data-driven, faulty data becomes ever more likely to propagate, amplifying errors across the entire chain.

 

Examples of the impact of poor data quality

Operations: In a public administration, poor data quality can have concrete impacts on citizen services. For example, if civil status databases contain errors in addresses or marital status, this can lead to official documents being sent to the wrong people, delays in the processing of social welfare applications, or errors in the allocation of entitlements. These malfunctions undermine public confidence, increase administrative costs and expose institutions to public criticism and costly corrective action.

 

Business intelligence: Imagine a strategic dashboard displaying performance indicators based on incorrect sales data. Decision-makers might believe that certain products are selling better than they actually are, leading to misdirected sales efforts, misplaced investments or unjustified stock-outs. In addition, budget forecasts, profitability analyses and action plans lose their value and relevance.
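To make the mechanism concrete, here is a toy sketch in Python, with entirely invented order IDs and amounts, showing how a single duplicated sales record silently inflates a revenue KPI:

```python
# Toy sales extract; order IDs and amounts are invented for illustration.
sales = [
    {"order_id": "A-100", "amount": 250.0},
    {"order_id": "A-101", "amount": 400.0},
    {"order_id": "A-101", "amount": 400.0},  # same order ingested twice
]

# A naive dashboard query sums every row it receives.
reported_revenue = sum(row["amount"] for row in sales)   # 1050.0

# Deduplicating on the business key reveals the true figure.
unique_amounts = {row["order_id"]: row["amount"] for row in sales}
actual_revenue = sum(unique_amounts.values())            # 650.0

print(f"reported: {reported_revenue:.2f}, actual: {actual_revenue:.2f}")
# The KPI is overstated by more than 60%, with no visible error anywhere.
```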

 

Artificial intelligence: A recommendation system or automated decision model fed by biased or incomplete data risks producing results that are discriminatory, inefficient or contrary to the organization’s objectives. For example, in the banking sector, an algorithm for granting lines of credit based on biased historical data could unfairly disadvantage certain customer profiles. Similarly, in the medical field, predictive models trained on inaccurate or irrelevant data may produce erroneous diagnoses.

The main dimensions of data quality

Data quality is measured along several dimensions, the main ones being:

  • Accuracy: Data must faithfully reflect reality. Example: A customer’s zip code must correspond to their actual geographical address.
  • Completeness: All necessary information must be present. Example: Each customer record contains a telephone number, an e-mail address and a unique identifier.
  • Uniqueness: Each entity must be represented only once in the data systems, with no duplicates. Example: The same customer should not appear twice in the database with different names or addresses. This avoids redundant communications and analysis errors.
  • Compliance: Data must comply with pre-established rules, formats and standards. Example: Dates are entered in YYYY-MM-DD format in all systems.
  • Timeliness: Data must be available at the right time and reflect the current situation. Example: Stocks displayed in the system are updated daily and reflect recent sales.
  • Relevance: Data must be useful and appropriate to the context in which it is used. Example: A sales report targeting young adults should only include data relevant to this age group, not general demographic information.
  • Consistency: There should be no contradictions between data from different sources. Example: Monthly sales of the same product are identical in finance and sales reports.
  • Reliability: The source and methodology of data collection must be trustworthy. Example: Customer data comes directly from an official form filled in by the customer.
  • Data access: Authorized users must be able to access data easily. Example: Marketing teams can consult segmented socio-demographic data on a self-service basis, via a secure platform.

 

It is essential that these dimensions are assessed at regular intervals as part of governance processes, using quantitative indicators (e.g. completeness rate, update frequency) and qualitative indicators (user feedback, ad hoc audits).
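As a minimal sketch of what such quantitative indicators can look like in code (the customer fields and the tiny sample below are illustrative assumptions, not a prescribed implementation):

```python
import re

# A small, made-up customer extract; field names are illustrative assumptions.
records = [
    {"customer_id": 1, "email": "a@example.com", "signup_date": "2025-01-15"},
    {"customer_id": 2, "email": None,            "signup_date": "15/01/2025"},
    {"customer_id": 2, "email": None,            "signup_date": "15/01/2025"},
]

# Completeness rate: share of records with a non-missing email.
completeness = sum(r["email"] is not None for r in records) / len(records)

# Uniqueness rate: share of distinct business keys among all records.
ids = [r["customer_id"] for r in records]
uniqueness = len(set(ids)) / len(ids)

# Compliance rate: share of dates already in the expected YYYY-MM-DD format.
iso_date = re.compile(r"^\d{4}-\d{2}-\d{2}$")
compliance = sum(bool(iso_date.match(r["signup_date"])) for r in records) / len(records)

print(f"completeness={completeness:.0%}  uniqueness={uniqueness:.0%}  compliance={compliance:.0%}")
```

Computed on a schedule and tracked over time, even simple ratios like these surface quality trends long before end-users report problems.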

ISO 8000 data quality standard

ISO 8000 is an international standard specifically dedicated to data quality. It provides guidelines for defining, managing, evaluating and improving data quality throughout its lifecycle. It is built around principles such as data portability, comprehensibility, traceability and verifiability. ISO 8000 also encourages the establishment of a governance framework to ensure clear accountability for data quality.

 

ISO 8000 also emphasizes the interoperability of information systems, the ability to document the origin of data (data lineage), and the need to ensure data quality independently of its technical environment.

Steps to improve data quality

A fundamental element of this approach is the implementation of a data governance framework. This structures quality management by clearly defining the roles and responsibilities of the various stakeholders (data holders, data stewards, managers, users, technical teams, etc.). It facilitates coordination, ensures shared responsibility and supports consistent practices across the organization.

 

Implementing a continuous improvement approach to data quality involves several key steps:

  1. Define needs: Identify the main uses of data, the users and their expectations. This also involves aligning expected quality with the organization’s strategic objectives.
  2. Assess quality: Measure current data quality using KPIs (e.g. completeness rate, error rate). These indicators can be visualized via automated dashboards to facilitate tracking over time.
  3. Assess the impact of non-quality: Quantify the costs and risks associated with poor-quality data. This may include financial losses, wasted time, contractual disputes or regulatory sanctions.
  4. Identify the causes: Identify the sources of errors (e.g. obsolete systems, non-standardized processes, insufficient training). Business process mapping can help locate potential sources of error.
  5. Develop an improvement plan: Prioritize corrective actions and define clear objectives. An iterative, Agile approach is recommended.
  6. Correct errors: Clean up existing data (remove duplicates, standardize formats) and implement automatic or semi-automatic correction mechanisms, as sketched in the example after this list.
  7. Set up control mechanisms: Create ongoing control and governance processes to maintain quality over time. This can include validation rules at input, regular audits, and responsibilities assigned to Data Stewards.
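
As a minimal illustration of steps 6 and 7, the following sketch standardizes date formats, removes duplicates on a business key, and applies a simple validation rule at input. The record layout, field names and accepted formats are assumptions made for the example:

```python
import re
from datetime import datetime

# Made-up customer extract; field names, formats and rules are assumptions.
records = [
    {"customer_id": 1, "email": " A@Example.com", "signup_date": "2025-01-15"},
    {"customer_id": 2, "email": "b@example.com",  "signup_date": "15/01/2025"},
    {"customer_id": 2, "email": "b@example.com",  "signup_date": "15/01/2025"},  # duplicate
    {"customer_id": 3, "email": "not-an-email",   "signup_date": "2025-02-01"},
]

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def normalize_date(raw):
    # Standardize known input formats to YYYY-MM-DD (compliance dimension).
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    return None  # leave unrecognized values for manual review, never guess

def validate(record):
    # Step 7: input-time control that rejects bad values at the source.
    errors = []
    if not EMAIL_RE.match(record["email"].strip()):
        errors.append("invalid email")
    if normalize_date(record["signup_date"]) is None:
        errors.append("unrecognized date format")
    return errors

# Quarantine records that fail validation instead of silently loading them.
valid = [r for r in records if not validate(r)]
rejected = [r for r in records if validate(r)]

# Step 6: standardize formats first so near-duplicates line up, then dedupe.
seen, cleaned = set(), []
for rec in valid:
    rec["email"] = rec["email"].strip().lower()
    rec["signup_date"] = normalize_date(rec["signup_date"])
    if rec["customer_id"] not in seen:
        seen.add(rec["customer_id"])
        cleaned.append(rec)

print(f"{len(cleaned)} clean records, {len(rejected)} sent for manual review")
```

Standardization runs before deduplication so that near-duplicates (records differing only in case, whitespace or date format) line up on the same key.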

Conclusion

In a context of accelerated digital transformation, data quality is no longer an option, but an essential prerequisite for performance, innovation and organizational agility. Adopting a reactive approach – correcting errors only when they are pointed out by end-users – severely limits the ability to innovate and hampers organizational development.

Beyond operational efficiency, data quality is also becoming a factor of trust. In a digital environment where transparency, accountability and ethics are becoming increasingly important, having reliable, well-governed data is a prerequisite for preserving the legitimacy, credibility and trust of customers and citizens.

With ADNia, explore new perspectives to take your data — and your impact — even further.
