Why buying stewardship beats building spreadsheets of pain.
Data may be the new oil, but it rarely gushes from the ground in a form you can feed straight into an engine. Market data vendors each publish their own schemas, update cycles and release notes. Every change, no matter how small, ripples through pricing, risk, regulatory and analytics systems. Financial firms quickly discover that the truly scarce resource is not data itself, but the people and processes required to keep data usable.
That’s exactly where a managed vendor data model comes in. Instead of every firm building and maintaining its own translation layer, a specialist provider does the heavy lifting once and allows the cost of that R&D to be mutualised across the whole client base. In this article, we explore:
- The mechanics and effort behind normalising vendor feeds.
- The hidden cost of owning your own data model.
- Why “flexible DIY” can feel liberating at first—but becomes a burden at scale.
- How Gresham Prime EDM strikes the balance between stewardship and client-specific agility.
What exactly is a managed vendor data model?
Think of it as a canonical dictionary of every attribute supplied by every data vendor that matters to the capital markets community. A managed vendor data model:
- Continuously ingests raw vendor formats and maintains the vendor-specific model so lineage is preserved.
- Normalises and reconciles attributes so like-for-like data can be compared (e.g. coupon vs. cpn vs. rate; see the sketch after this list).
- Version-controls changes, from new fields to datatype tweaks, and propagates them downstream without breaking consuming systems.
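To make the normalisation idea concrete, here is a minimal sketch. The vendor names, field names and canonical attributes are invented for illustration; they are not the actual Prime EDM model. The point is simply that vendor-specific fields are folded into one canonical attribute while the raw record is kept for lineage.

```python
# Illustrative sketch only: the vendor names, field names and canonical
# attributes below are hypothetical, not Gresham's actual model.

CANONICAL_MAP = {
    # vendor -> {vendor field name: canonical attribute}
    "vendor_a": {"coupon": "coupon_rate", "mat_date": "maturity_date"},
    "vendor_b": {"cpn": "coupon_rate", "maturity": "maturity_date"},
    "vendor_c": {"rate": "coupon_rate", "mty_dt": "maturity_date"},
}

def normalise(vendor: str, record: dict) -> dict:
    """Translate one raw vendor record into the canonical schema,
    keeping the original payload for lineage."""
    mapping = CANONICAL_MAP[vendor]
    canonical = {mapping[field]: value
                 for field, value in record.items()
                 if field in mapping}
    canonical["_lineage"] = {"vendor": vendor, "raw": record}
    return canonical

# Like-for-like comparison becomes trivial once both records are canonical.
a = normalise("vendor_a", {"coupon": 4.25, "mat_date": "2030-06-15"})
b = normalise("vendor_b", {"cpn": 4.25, "maturity": "2030-06-15"})
assert a["coupon_rate"] == b["coupon_rate"]
```

A real managed model maintains this kind of mapping for tens of thousands of fields across every supported vendor, and keeps it current as those vendors change.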
Normalisation is deceptively hard work. Mapping tens of thousands of inbound fields, maintaining semantic consistency, handling code-table drift, backfilling history, and regression testing are all necessary tasks, just not ones that give an individual bank a competitive edge. Outsourcing them converts a fixed cost base into a variable subscription and frees scarce experts for higher-value analytics.
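Code-table drift illustrates the grind well. The toy diff below, written against invented code tables rather than any real vendor release, hints at the monitoring that has to run on every release from every vendor:

```python
# Hypothetical example: detecting code-table drift between two vendor releases.
# The code tables and descriptions are invented for illustration.

def diff_code_table(previous: dict, current: dict) -> dict:
    """Report codes that were added, removed or re-defined between releases."""
    added = {c: v for c, v in current.items() if c not in previous}
    removed = {c: v for c, v in previous.items() if c not in current}
    changed = {c: (previous[c], current[c])
               for c in previous.keys() & current.keys()
               if previous[c] != current[c]}
    return {"added": added, "removed": removed, "changed": changed}

prev_release = {"CORP": "Corporate bond", "GOVT": "Government bond"}
new_release = {"CORP": "Corporate debt", "GOVT": "Government bond",
               "MUNI": "Municipal bond"}

drift = diff_code_table(prev_release, new_release)
# -> one added code, one re-defined code, nothing removed
```

Multiply that check by dozens of vendors, hundreds of code tables and frequent release cycles, and the scale of the stewardship burden becomes clear.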
The hidden expenses of the custom route
Building a proprietary model often feels attractive in the early project phases. You onboard exactly the attributes you need, align them with an existing enterprise schema and deliver visible wins quickly. But three to five years later, most custom shops report the same pain points:
| Symptom | Why it hurts |
| --- | --- |
| Ballooning support teams | Multiple business analysts (BAs) are needed just to trace dependencies and keep mappings up to date. |
| Difficulty keeping up to date | Vendor notifications must be monitored constantly and the custom model adjusted accordingly, for every vendor in the estate. |
| Weeks to add a single field | Every new attribute triggers impact analysis, code changes, QA cycles and full regression testing. |
| Vendor lock-in | Replacing an underperforming data provider becomes a multi-month migration because its proprietary schema is buried deep inside downstream jobs. |
| Lost purchasing power | The inability to switch vendors weakens renewal negotiations. |
| Stalled innovation | Data scientists wait months for new alternative data sets that competitors are already exploiting. |
The lesson is simple: if your model is unique to your firm, you carry 100% of its lifetime maintenance cost. That is rarely the optimal allocation of capital.
“Blank canvas” EDM vs. curated stewardship
Many Enterprise Data Management vendors pitch the freedom to build whatever model you like and to master only the data you need. That flexibility is real—and valuable—but it puts the entire burden of stewardship back on the client.
In contrast, Gresham Prime EDM invests continuously in a complete, normalised model that already covers the major market data vendors and is updated the moment they change. Clients still enjoy flexibility: you can extend the canonical model with firm-specific attributes or onboard a niche source, without breaking the core contract.
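One common pattern for that kind of extension, sketched below with invented field names rather than the actual Prime EDM API, is to keep firm-specific attributes in their own extension namespace so the canonical contract itself never changes:

```python
# Sketch of one way firm-specific extensions can sit alongside a canonical
# model without altering it; the class and field names are illustrative
# assumptions, not the Prime EDM API.

from dataclasses import dataclass, field

@dataclass
class CanonicalInstrument:
    # Core attributes maintained by the managed model.
    isin: str
    coupon_rate: float
    maturity_date: str
    # Firm-specific attributes live in their own namespace, so the
    # canonical contract never changes when a client adds a field.
    extensions: dict = field(default_factory=dict)

bond = CanonicalInstrument(
    isin="XS0000000000",          # placeholder identifier
    coupon_rate=4.25,
    maturity_date="2030-06-15",
    extensions={"internal_desk_code": "EM-RATES-07"},
)
```

Because the extensions sit outside the core schema, the managed model can keep evolving underneath without breaking anything the client has added on top.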
Case study: a global tier-one bank
A major universal bank selected Prime EDM five years ago to consolidate its market data estate. Today:
- Multiple third-party and internal sources flow through the managed model at production scale.
- The entire BAU and change workload is handled by less than one full-time equivalent (FTE).
- Onboarding a new source, attribute, or distribution is a routine operational task measured in hours, not sprints.
- Full DevOps pipelines and automated testing guarantee that schema changes ship safely and traceably.
The result is a lower total cost of ownership, faster time to market for new products, and the strategic freedom to renegotiate vendor contracts from a position of strength.
Conclusion
Owning a data model might feel like control; in reality, it often delivers complexity without differentiation. A managed vendor data model lets you outsource the undifferentiated heavy lifting while retaining the ability to innovate where it counts—building new analytics, strategies and customer propositions.
If you’re ready to spend more of your data budget on insight and less on maintenance, let’s talk.
May 16, 2025