The foundation for a strong Data Quality Management Practice

Unfolding the different layers of Data Quality Management

In our previous blog we addressed the momentum in the banking sector, including the external and internal drivers. Yet many organisations struggle to translate that urgency into a practical implementation of their Data Quality Management. How do you lay the foundation for a proper Data Quality Management practice? How do you define the governance and set the data requirements? That is the topic of this blog, the second in a three-blog series.

The quantity and complexity of regulatory risk metrics that banks have to monitor and report have increased significantly. Underlying these risk metrics is the organisation’s data, which needs to be managed in order to adhere to the regulatory requirements. Data Quality Management might seem like a black box, so this blog aims to unfold some of that complexity. You can read our introductory blog about Data Quality Management here.

To achieve this goal, we have laid out Data Quality Management as a six-step process: an end-to-end approach for setting requirements, locating and sourcing the right data elements, and continuously checking data quality, while embedding and positioning data quality in the business.

  1. Define the data governance from the data owner to the end user, including roles and responsibilities.
  2. Identify the Critical Data Elements (CDEs) and the business requirements, which are established through, among others, a risk analysis process.
  3. Design business and technical data quality rules in order to monitor the data quality of these CDEs against the business requirements.
  4. Assign the desired threshold to each data quality rule in accordance with the business requirements.
  5. Implement the data quality rules in the IT process and create a dashboard to validate and monitor the results.
  6. Continuously monitor against the requirements and resolve data quality issues.

In this blog we will focus on the first two steps.
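To make steps three to six more concrete, the sketch below shows how a data quality rule for a Critical Data Element could be expressed and monitored against a threshold. This is a minimal illustration, not a reference implementation: the CDE name, the LEI-length check, the sample records and the 98% threshold are all assumptions made up for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DataQualityRule:
    cde: str                       # Critical Data Element the rule monitors
    description: str               # business formulation of the rule
    check: Callable[[dict], bool]  # returns True when a record passes
    threshold: float               # minimum acceptable pass rate

def monitor(rule: DataQualityRule, records: list[dict]) -> dict:
    """Measure the pass rate for one rule and flag a threshold breach."""
    passed = sum(1 for record in records if rule.check(record))
    pass_rate = passed / len(records) if records else 1.0
    return {"cde": rule.cde, "pass_rate": pass_rate,
            "breach": pass_rate < rule.threshold}

# Hypothetical rule: a Legal Entity Identifier must be 20 characters long.
rule = DataQualityRule(
    cde="counterparty_lei",
    description="LEI must be a 20-character identifier",
    check=lambda r: isinstance(r.get("lei"), str) and len(r["lei"]) == 20,
    threshold=0.98,
)

# Fabricated sample records: one valid LEI, one missing value.
records = [{"lei": "5493001KJTIIGC8Y1R12"}, {"lei": None}]
print(monitor(rule, records))  # pass rate 0.5, so the breach flag is set
```

In practice the result of such a check would feed the dashboard mentioned in step five, so that breaches can be picked up and remediated under the governance defined in step one.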

Step one: Positioning Data Quality Management in the organisation and defining the governance

The core purpose of solid Data Quality Management is to ensure that the data is fit for purpose and meets business needs. It is therefore a continuous process of planning, implementing and controlling activities that define parameters for acceptable levels of data quality, ensuring that the data is reliable and trustworthy.
Zooming in on the purpose of high-quality data, one important use (among others) is the consumption of data for regulatory reporting. The regulatory reporting data model is used to identify the quality standards the data needs to adhere to. You will therefore find Data Quality Management mainly positioned at the beginning of the value chain, which allows the business to set requirements and the corresponding business rules. The business also identifies the data points it needs for, for example, regulatory reporting data models; these are known as Critical Data Elements.

When the positioning of the Data Quality Management practice in the organisation is clear, the first step is to set up Data Governance. A clear governance structure defines principles, roles and responsibilities, and guidelines for data management. This governance must then be translated into operational processes to functionally implement Data Quality Management, which allows the organisation to identify, monitor and remediate data quality issues. The governance describes the roles and responsibilities from the data owner to the end user and spans the entire end-to-end data lineage.

Step two: Identifying Critical Data Elements and requirement setting

Applying focused Data Quality Management implies that the organisation must identify its Critical Data Elements. These elements require proper definitions to successfully structure and aggregate the data according to the business needs and the set data quality standards. Critical Data Elements can be selected by means of multiple methods, each underpinned by an appropriate risk analysis. Below we list four of them:

  1. CDEs can be determined by means of risk appetite statements that outline the CDEs with the highest risks involved.
  2. Each year the supervisor announces which themes will be key in their inquiries. CDEs can be defined based on the requirements of the supervisor and regulatory reports.
  3. CDEs can be issue driven where they are determined based on current incidents with the largest material impact.
  4. Internal audit can translate internal strategic priorities into important CDEs.
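Whichever of these routes is used, it helps to record why each element was marked as critical. The sketch below shows a hypothetical CDE register in which every entry is tagged with its selection method, so coverage across the four routes can be reviewed; all element names and rationales are invented for illustration.

```python
from collections import defaultdict

# Hypothetical CDE register: each entry records the selection method
# ("source") and the rationale behind marking the element as critical.
cde_register = [
    {"cde": "exposure_amount", "source": "risk_appetite",
     "rationale": "high-risk element in risk appetite statements"},
    {"cde": "counterparty_lei", "source": "supervisory_theme",
     "rationale": "announced supervisory focus on counterparty data"},
    {"cde": "collateral_value", "source": "incident",
     "rationale": "incident with large material impact"},
    {"cde": "product_type", "source": "internal_audit",
     "rationale": "internal strategic priority on product data"},
]

# Group the CDEs by selection method to check that all four routes
# have been considered.
by_source: dict[str, list[str]] = defaultdict(list)
for entry in cde_register:
    by_source[entry["source"]].append(entry["cde"])

for source, cdes in by_source.items():
    print(f"{source}: {', '.join(cdes)}")
```

Such a register also gives the data quality rules designed in the next step a clear link back to the risk analysis that justified each CDE.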

In the third blog we will discuss in more detail how to translate data requirements into data quality rules and data quality monitoring.