Introduction
Data Quality (DQ) has always been on the agenda of IT managers, but it had taken a back seat due to budget constraints or other critical mandates. It is now once again a central theme. Owing to regulatory compliance requirements, the push for enterprise risk management, and the need to control operations costs, financial institutions are gearing up to put data quality management back into practice. It is no longer a secondary activity, but an enterprise-wide agenda to achieve data quality maturity through proper governance and deployment of quality measures.
Data Managers are struggling to answer some of the key questions related to their data environment, which include, but are not limited to:
The current responses are common across financial institutions – either they have some answers to these questions but lack confidence, or they recognize that much more needs to be done to achieve expected quality.
Historically, organizations benchmarked data quality based on certain principles and metrics such as:
These criteria are no longer adequate as even a small percentage of bad data could have a significant negative impact on reporting values or compliance with regulations. It is, thus, crucial to have a clear view of data quality – the ability to drill down to the lowest level of granularity and have well-defined measurement criteria.
This paper talks about the ways financial institutions are preparing to better manage data to meet their business needs.
Quality Quandary
The challenges in managing data quality are multi-dimensional and can be attributed to factors such as geographical spread of the organization, magnitude of the IT infrastructure and diversification of the business. Though some financial institutions already have specific solutions to address the issues related to bad data quality, most of them are in fact-finding mode.
Some of the key challenges that the financial services industry is striving to resolve are:
Lack of Transparency
Organizations do not have absolute visibility into the current state of their data quality issues, which hinders the creation of a fool-proof roadmap for DQ remediation programs.
Manual Adjustments
Manual adjustments of data are a common practice across the industry, and the magnitude of these adjustments reflects the quality of the data. As per the CEB TowerGroup Data Management Systems Technology Analysis, 38% of managers feel that frequent manual intervention is required for error-free data; frequent intervention indicates that less than 75% of the workflow is error-free.
Ad-hoc Activities
Most organizations have a fungible operations budget for unplanned activities such as regulatory changes, data breaks or BCP. Since regulatory compliance is the key focus, it consumes most of the budget and resources. As a result, most data quality initiatives run in an ad-hoc mode, due to the unavailability of fixed budgets.
Data Governance is in Flux
Data governance at most financial institutions exists on paper alone and is not practiced in the true sense, which has resulted in issues of data controls and data stewardship. As per the CEB TowerGroup Data Management Systems Technology Analysis, 40% of data managers agree that the biggest challenge is the lack of maturity in data governance.
Data Inconsistency
A majority of the reference data domains are only partially adopted, or not adopted at all, as a golden copy, which results in data inconsistency among consumers. Furthermore, reference data domains such as product, book and client that require consistent hierarchy/classification and definition across front office, operations and control functions need to be merged to create a single structure.
Duplication
Wealth Management firms and Investment Banks are still maintaining separate data sets, resulting in duplication of efforts and higher costs.
Major Industry Initiatives
A majority of financial institutions are spending a substantial amount of their operations budget on reactive data quality management. Lately, managers are recognizing automation and realignment of their operational processes as key elements of proactive data quality management.
The intent of proactive data quality management is to reduce human dependency, prevent errors, measure data quality, and provide continuous improvement. Organizations are re-architecting their infrastructure by introducing focused DQ technology themes such as workflow management, rules management, exception reporting, data entry control, and reconciliation tools during the data processing cycle. This framework works more as a watchdog to pinpoint and report quality issues rather than as a remediation tool. As a result, financial institutions reduce operational costs considerably and improve quality with the implementation of each technology theme.
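As a simple illustration of the rules-management and exception-reporting themes, the Python sketch below evaluates a few hypothetical validation rules over reference data records and reports exceptions together with an overall pass rate. The rule names, fields and sample values are illustrative assumptions, not a specific vendor framework.

# Minimal sketch of a rules-driven data quality check with exception reporting.
# Rule names, fields and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class DQRule:
    name: str
    check: Callable[[dict], bool]   # returns True if the record passes


RULES: List[DQRule] = [
    DQRule("isin_present", lambda r: bool(r.get("isin"))),
    DQRule("currency_is_iso", lambda r: r.get("currency", "") in {"USD", "EUR", "GBP", "JPY"}),
    DQRule("notional_positive", lambda r: (r.get("notional") or 0) > 0),
]


def run_dq_checks(records: List[dict]) -> Dict[str, object]:
    """Evaluate every rule on every record; report exceptions and a pass rate."""
    exceptions = []
    for i, record in enumerate(records):
        for rule in RULES:
            if not rule.check(record):
                exceptions.append({"record": i, "rule": rule.name})
    total_checks = len(records) * len(RULES)
    pass_rate = 1.0 - len(exceptions) / total_checks if total_checks else 1.0
    return {"exceptions": exceptions, "pass_rate": pass_rate}


if __name__ == "__main__":
    sample = [
        {"isin": "US0000000001", "currency": "USD", "notional": 1_000_000},
        {"isin": "", "currency": "usd", "notional": -5},
    ]
    print(run_dq_checks(sample))

Run as a watchdog over each processing cycle, such checks pinpoint and report quality issues without attempting remediation, which mirrors the framework described above.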
While the industry is looking forward to implementing sustainable and strategic enterprise data quality eco-systems, most financial institutions are performing due diligence by analyzing existing data sets and infrastructure before taking up larger initiatives. The objective is to develop an inventory of systems, processes and issues to assess the present day data quality and identify break points in order to develop a quality remediation roadmap.
Some of the initiatives and strategies that financial institutions are taking up include:
Clean-up of Legacy Data: Legacy reference data is one of the chronic issues that organizations have been dealing with for years. No data quality initiative will reap any value without clean data. The Legal Entity Identifier (LEI) regulatory requirements were one of the primary trigger points for the industry to start recent clean-up initiatives. Among all reference data domains, clean-up of client data is on top of the agenda due to the complexities associated with it. Various data profiling tools, armed with exhaustive data dictionaries, are being used to analyze the present state, track the lineage and do fuzzy matching for remediation. With the right selection of tools and technology, these complex clean-up efforts can be significantly reduced.
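To make the fuzzy-matching step concrete, the sketch below flags likely duplicate client records using simple name normalization and string similarity from Python's standard library. The client names and the 0.85 threshold are assumptions; a production profiling tool would add richer normalization and weighting across multiple attributes.

# Illustrative sketch of fuzzy matching for a legacy client data clean-up.
from difflib import SequenceMatcher
from itertools import combinations


def normalize(name: str) -> str:
    """Lower-case and strip common legal suffixes before comparing."""
    name = name.lower().strip()
    for suffix in (" ltd", " limited", " inc", " plc", " llc"):
        name = name.removesuffix(suffix)
    return name


def likely_duplicates(clients: list[dict], threshold: float = 0.85) -> list[tuple]:
    """Return pairs of client records whose normalized names are close matches."""
    pairs = []
    for a, b in combinations(clients, 2):
        score = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
        if score >= threshold:
            pairs.append((a["client_id"], b["client_id"], round(score, 2)))
    return pairs


if __name__ == "__main__":
    book = [
        {"client_id": "C001", "name": "Acme Capital Ltd"},
        {"client_id": "C002", "name": "ACME Capital Limited"},
        {"client_id": "C003", "name": "Blue River Partners"},
    ]
    print(likely_duplicates(book))   # flags C001/C002 as a probable duplicate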
Keeping a Single Reference Data Hub: The current process of sourcing and creation of data is one of the origins of bad reference data. Owing to the limitations of legacy systems, federated architectures, multiple data vendor licenses and sources, most reference data domains are not one hundred percent golden. There are initiatives to create “one-stop shops” for reference data that reduce costs and avoid downstream anomalies.
Efforts are underway to build a Golden Copy of data, with downstream systems subscribing to the hub. Besides instrument and client data, the focus is also on book hierarchy, legal agreements, and product taxonomy to bring coherence among finance, risk, front office and operations. Financial institutions are drawing multi-year roadmaps to create common reference data repositories and simplify the data distribution process.
Thus, there is a clear business case for the single hub program that helps to reduce maintenance and licensing costs, avoid data irregularities, and gain users' confidence.
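As one way to picture golden-copy construction in such a hub, the sketch below merges hypothetical per-source instrument feeds using a simple source-precedence rule, with higher-precedence sources overriding conflicting attributes. The feed names, precedence order and attributes are assumptions for illustration, not a prescribed design.

# Hedged sketch of golden-copy construction by source precedence.
from typing import Dict

# Assumed precedence: earlier sources win when attribute values conflict.
SOURCE_PRECEDENCE = ["internal_master", "vendor_a", "vendor_b"]


def build_golden_copy(feeds: Dict[str, Dict[str, dict]]) -> Dict[str, dict]:
    """Merge per-source instrument records (keyed by ISIN) into one golden record each."""
    golden: Dict[str, dict] = {}
    for source in reversed(SOURCE_PRECEDENCE):        # apply lowest precedence first
        for isin, record in feeds.get(source, {}).items():
            merged = golden.setdefault(isin, {})
            # Higher-precedence sources (processed later) overwrite non-empty values.
            merged.update({k: v for k, v in record.items() if v not in (None, "")})
    return golden


if __name__ == "__main__":
    feeds = {
        "vendor_b": {"US0000000001": {"name": "EXAMPLE CORP", "sector": ""}},
        "vendor_a": {"US0000000001": {"name": "Example Corp.", "sector": "Technology"}},
        "internal_master": {"US0000000001": {"issuer_id": "LEI-PLACEHOLDER"}},
    }
    print(build_golden_copy(feeds))

Downstream consumers then subscribe to this single merged record rather than to individual vendor feeds, which is what removes the duplicate licensing and the data irregularities noted above.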
Greater Integration of Ops and IT: Data operations and IT rarely work in tandem, owing to distinct work profiles, separate skill sets and, most importantly, data secrecy policies. Analysis, remediation and stewardship require a stronger handshake between operations and IT.
Industry players are moving towards a “one team” model to leverage skills from both worlds. Though it is understood that one cannot cross over to the other's area, there are common activities such as application and data testing, helpdesk support, release and minor works, and Keeping the Lights On (KTLO) that can be executed by a combined team.
While this model has been successful for small-to-medium sized financial institutions, larger organizations are in the process of setting up cross-functional teams to lead these initiatives.
Downstream Control Points: Even when a single hub is adopted as the strategic reference data golden source, it is hard to manage clean data downstream due to several complicating factors such as improper mapping of data attributes, issues with the transformation layer or erroneous manual adjustments. In order to hedge this risk, organizations deploy similar Data Quality measures both for upstream systems as well as downstream systems. Continuous reconciliation with upstream systems and playback of the derived/adjusted data to the source are some of the components crucial to the process. A significant amount of quality improvement has been observed with these measures.
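The sketch below illustrates one such downstream control point: a continuous reconciliation that compares a consuming system's copy of reference data against the upstream golden source and reports attribute-level breaks. The record keys, attribute names and system labels are illustrative assumptions.

# Sketch of a downstream reconciliation control point against the golden source.
from typing import Dict, List


def reconcile(golden: Dict[str, dict], downstream: Dict[str, dict]) -> List[dict]:
    """Return a list of breaks: missing records and attribute mismatches."""
    breaks = []
    for key, golden_rec in golden.items():
        local_rec = downstream.get(key)
        if local_rec is None:
            breaks.append({"key": key, "issue": "missing downstream"})
            continue
        for attr, expected in golden_rec.items():
            actual = local_rec.get(attr)
            if actual != expected:
                breaks.append({"key": key, "attr": attr,
                               "expected": expected, "actual": actual})
    return breaks


if __name__ == "__main__":
    golden = {"BOOK-100": {"desk": "Rates", "legal_entity": "LE-01"}}
    downstream = {"BOOK-100": {"desk": "Rates", "legal_entity": "LE-02"}}
    print(reconcile(golden, downstream))   # flags the legal_entity mismatch

Breaks found this way can either be fed back to the source as the playback of derived/adjusted data described above, or routed to stewards for manual remediation.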
Conclusion
The benefits of data quality management are distinctly visible and measurable. Legacy data clean-up could save more than 20% of operational costs and substantially reduce operational risk. Cleaner data helps in establishing relationships between multiple entities. This opens up avenues to save costs by cross-leveraging data between Investment Banks and Wealth Management firms, where client data overlaps by 30-40% and product data by more than 90%.
Automation of data processing lowers error rates and improves quality, potentially reducing operations costs by 40-60%. Further, a single data hub, unified processes, and enterprise licenses eliminate group-wise management and acquisition costs, resulting in major budgetary savings. Over and above the quantitative benefits, user confidence in data quality is key to a successful data initiative.
The process of creating a data quality roadmap is an opportunity for the industry to re-examine its data architecture for IT simplification, prepare for future requirements, align with industry trends and initiatives, move to newer technologies, and rewrite operational processes. An enterprise-wide data quality initiative is also an occasion to bring internal business groups together to address their data quality issues and agree on a policy of using data from a single source, raising convenience and confidence in the greater interest of the organization.
Sukant Paikray
Sukant Paikray brings more than 23 years of experience and heads the data management practice for the Securities and Capital Markets business at Wipro Limited. He has worked with the financial services industry for around 11 years and, in 2002, moved into the technology industry to leverage his expertise in various capital market domains. Prior to joining Wipro, Sukant was a domain leader with Oracle Financial Software Services. Sukant can be reached at: sukant.paikray@wipro.com