Poor quality data is not just a barrier to successful risk screening, it’s a major contributor to excessive compliance costs. An on-demand webinar from Refinitiv highlights the value of a risk-based approach for improving operational efficiency.
- Compliance costs are both visible, such as technology and risk screening data, and invisible, such as the operational costs associated with downstream processes.
- Poor data quality is the biggest barrier to successful compliance screening and, to make matters worse, compliance officers have no control over the quality of the data they use.
- Overscreening — screening beyond regulatory requirements — can also significantly contribute to excessive compliance costs.
Since the early 2000s, financial services firms have been under increasing regulatory scrutiny and many have employed customer screening systems as part of their overarching anti-money laundering (AML) programs.
Alongside this trend, we have witnessed rising headcounts and a slew of hefty fines for those falling afoul of their obligations.
It has become clear that simply adding more staff to comply with AML legislation is not a sustainable strategy; compliance teams therefore need to ensure that their AML programs are as efficient as possible.
This means managing the total cost of ownership of the customer screening process while not incurring any additional risk.
Excessive compliance costs
Total cost, much like a partially submerged iceberg, consists of both visible costs, such as technology and risk screening data, and invisible costs, such as the operational costs associated with downstream processes.
Operational costs — including the cost of completing incomplete records or sifting through multiple matches as a result of duplicate data, as well as the associated costs of additional staff, office space, HR costs and the like — are overwhelmingly driving total costs upward.
Poor data quality is the biggest barrier to successful compliance screening, and, to make matters worse, compliance officers have no control over the quality of the data they use, but are held responsible for the consequences of poor compliance.
A prime example is duplication, which can cause ongoing problems.
If duplicate alerts are not collapsed and consolidated to reduce multiple matches for the same individual, compliance teams can waste time and effort screening the same names over and over.
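The consolidation step described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not Refinitiv's implementation; the record fields and the normalization rule (lowercasing and collapsing whitespace) are assumptions for the example:

```python
# Minimal sketch: consolidating duplicate screening alerts under a
# normalized name key, so one individual is reviewed once, not many times.
from collections import defaultdict


def normalize(name: str) -> str:
    """Normalize a name for matching: lowercase and collapse whitespace."""
    return " ".join(name.lower().split())


def consolidate_alerts(alerts: list[dict]) -> list[dict]:
    """Group alerts that share a normalized name into a single record."""
    grouped = defaultdict(list)
    for alert in alerts:
        grouped[normalize(alert["name"])].append(alert)
    return [
        {"name": key, "sources": [a["source"] for a in group]}
        for key, group in grouped.items()
    ]


alerts = [
    {"name": "John  Smith", "source": "sanctions"},
    {"name": "john smith", "source": "PEP"},
    {"name": "Jane Doe", "source": "sanctions"},
]
print(consolidate_alerts(alerts))
# The two John Smith alerts collapse into one record with both sources.
```

Real-world matching is fuzzier than this (transliterations, aliases, partial dates of birth), but the principle is the same: collapse to a single case before it reaches an analyst.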
A further problem commonly associated with poor data quality is the inclusion of unnecessary terms, such as 'corporation', in screening names. These noise words add no discriminating value and can lead to the generation of false positives.
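Filtering such noise words is typically a simple pre-processing step. The word list below is an illustrative assumption, not an exhaustive or authoritative one:

```python
# Minimal sketch: stripping common legal-form "noise" tokens from entity
# names before screening, so matches are not triggered by those words alone.
NOISE_WORDS = {"corporation", "corp", "inc", "ltd", "limited", "llc", "company"}


def strip_noise(name: str) -> str:
    """Remove noise tokens (ignoring trailing punctuation) from a name."""
    tokens = [
        t for t in name.lower().split()
        if t.strip(".,") not in NOISE_WORDS
    ]
    return " ".join(tokens)


print(strip_noise("Acme Corporation Ltd."))  # -> "acme"
```

The design choice here is to normalize before comparing: two names that differ only in legal-form suffixes should not generate separate, spurious matches.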
A third example is non-standardized data. If data is not standardized, for example if dates are configured differently and fed in a format that differs from that of the screening data, matches could be missed.
Does your organization have data quality issues?
An online poll asking respondents to describe the current condition of the data they use for compliance and screening purposes revealed that only 35.6 percent could confirm that they have no issues with their data.
This means that nearly two-thirds (64.4 percent) either experience difficulties or simply don’t know the quality of their data.
Some red flags that could indicate poor data quality include:
- Continually experiencing difficulties or delays when preparing data for screening.
- The need to keep clearing the same alerts.
- Experiencing challenges when keeping track of incoming data.
- Trouble standardizing and aligning new data sources.
Additional challenges can occur when bringing together datasets with different data nuances during a merger or IT transformation.
If any of these issues are present, it may be time to perform a data analysis in order to identify existing data challenges and understand their impact on your screening program.
What does quality data look like?
Quality data should allow compliance teams to screen against a range of datasets, including sanctions, politically exposed persons (PEPs), law enforcement lists, regulatory lists, adverse media records geared towards sanctions compliance, AML, countering the financing of terrorism, and countering proliferation finance.
Data should also be comprehensive, de-duplicated, consistent, accurate, configurable and up-to-date.
Given this extensive list of requirements, when choosing a data provider, it is important to choose one that has global capabilities and is able to deliver reliable and trusted data.
The dataset being considered should have a highly analytical set of inclusion criteria and be de-duplicated. A successful provider should allow extensive ‘slicing and dicing’ and configuration and, perhaps most importantly of all, must take the control of operational costs seriously.
A risk-based approach
It is worth mentioning that overscreening — in other words, screening beyond regulatory requirements — can significantly contribute to excessive operational costs.
For example, screening all PEPs rather than developing an institutional view of which PEPs pose a realistic risk could constitute overscreening and result in higher-than-necessary operational costs.
It is therefore imperative that compliance professionals adopt the risk-based approach (RBA), in line with accepted best practice.
Implementing the RBA means designing and documenting a screening program that is proportionate to your risks, and instituting a formal governance process for regular reviews and change management.
A combined approach of implementing the RBA in line with your organization’s risk profile and ensuring that your screening data is of the highest quality is a best practice strategy for containing compliance costs, while protecting your organization from compliance risk.
- A Refinitiv webinar — ‘How to manage non-compliance risk while reducing operational costs?’ — is available on demand. It features Max Smith, Product Manager at FinScan, and Michael Meadon and Jacob Thompson of Refinitiv.