Market reference data provides the descriptive and economic information needed to complete and settle financial transactions in securities. It is shared across business areas and stakeholders, and it provides the context for specific transactions and for client or in-house commercial activities. At its most basic level, reference data for a simple trade, such as the sale of a stock, involves referencing standard identifiers (e.g. the ISIN) for the underlying security, the buyer, the broker-dealer(s) or execution venue, and the price. At its most complex, reference data covers all relevant details of the most intricate transactions and products, including numerous terms and conditions, different entities, contingencies and dependencies.
This data is surprisingly ubiquitous and complex. A product might look simple, but describing all of its economics and terms can involve thousands of groups and data attributes per product.
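To make the simple end of that spectrum concrete, the sketch below models a stock sale as a record built almost entirely from standard identifiers. Only the ISIN is taken from the text above; the use of LEIs for the parties, a MIC for the venue and an ISO currency code are illustrative choices, and every identifier value shown is a placeholder rather than a real code.

```python
from dataclasses import dataclass
from decimal import Decimal


@dataclass(frozen=True)
class EquityTrade:
    """A simple stock sale described through standard reference-data identifiers."""
    instrument_isin: str   # ISO 6166 ISIN for the underlying security
    buyer_lei: str         # ISO 17442 LEI for the buying entity (illustrative choice of identifier)
    seller_lei: str        # LEI for the selling entity
    broker_lei: str        # LEI for the executing broker-dealer
    venue_mic: str         # ISO 10383 MIC for the execution venue (illustrative choice)
    quantity: int
    price: Decimal         # execution price per share
    currency: str          # ISO 4217 currency code


# Placeholder values only -- none of these are real identifiers.
trade = EquityTrade(
    instrument_isin="XS0000000000",
    buyer_lei="00000000000000000001",
    seller_lei="00000000000000000002",
    broker_lei="00000000000000000003",
    venue_mic="XXXX",
    quantity=100,
    price=Decimal("42.50"),
    currency="EUR",
)
```

Even in this minimal form, every field other than quantity and price is a reference to data maintained outside the trade itself, which is what makes consistent, shared reference data so important.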
Over the past few years, the financial services industry and regulatory agencies have been pursuing a policy of standardising the reference data that defines and describes trade transactions. However, standardisation has not been straightforward for institutions, for a number of reasons:
Financial institutions are grappling with significant challenges in reference data management due to their convoluted, siloed approach based on product lines, customer segments, and geographies, often compounded by past mergers and acquisitions. This fragmentation has resulted in a costly and opaque "spaghetti" architecture characterised by multiple data sources, legacy systems, undocumented connectors, and redundant infrastructure that inflates maintenance costs and creates unpredictable change cycles.
The situation prevents effective standardisation: downstream consumers create their own local copies of fundamental data elements, and 20-50% of database tables end up containing reference data that becomes inconsistent across applications. This proliferation of customised copies forces stakeholders, particularly in risk and finance departments, to navigate disparate local datasets when aggregating information, leading to data quality issues, reporting errors, and significant time spent on remediation rather than value-adding activities.
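To illustrate the drift problem, here is a minimal sketch of the kind of reconciliation teams end up running when local copies diverge from a central golden source. The golden_source and local_copy structures, the field names and the find_divergences helper are all hypothetical, not part of any standard tooling.

```python
from typing import Any, Dict, List, Tuple

# Hypothetical reference data keyed by ISIN; field names are illustrative only.
ReferenceData = Dict[str, Dict[str, Any]]


def find_divergences(golden: ReferenceData, local: ReferenceData) -> List[Tuple[str, str, Any, Any]]:
    """Return (isin, field, golden_value, local_value) wherever a local copy
    has drifted from the central golden source."""
    diffs: List[Tuple[str, str, Any, Any]] = []
    for isin, golden_rec in golden.items():
        local_rec = local.get(isin)
        if local_rec is None:
            continue  # missing instruments would be a separate completeness check
        for field, golden_val in golden_rec.items():
            if local_rec.get(field) != golden_val:
                diffs.append((isin, field, golden_val, local_rec.get(field)))
    return diffs


golden_source = {"XS0000000000": {"issuer": "Example Corp", "coupon": 4.25, "maturity": "2030-06-15"}}
local_copy    = {"XS0000000000": {"issuer": "Example Corp", "coupon": 4.00, "maturity": "2030-06-15"}}

for isin, field, want, got in find_divergences(golden_source, local_copy):
    print(f"{isin}: '{field}' is {got!r} in the local copy but {want!r} in the golden source")
```

Multiply this check across dozens of local copies and thousands of attributes per product, and the remediation burden described above becomes clear.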
Organisations are facing multiple urgent issues, too:
An industry-wide exercise is under way to reassess current market data offerings, and the changes vendors are introducing, with the aim of driving new business and technological value.
A shift of focus and a realignment of governance are needed.
There is also a need for infrastructure improvement:
Financial services firms are experiencing unprecedented margin pressure while being constrained by legacy infrastructure left over from early automation efforts. Their fragmented approach to data management results in high fixed costs and complex change cycles, leaving them ill-equipped to leverage new analytics technologies, adequately serve business users, or meet regulatory requirements.
To address these challenges, organisations need a structured approach to reference data management.
This structured approach enables financial institutions to develop a strategic solution that addresses current pain points while positioning them for future innovation and competitive advantage.