After countless conversations with dozens of clients across the financial services sector, I've identified five critical areas that define best practice in data automation and controls. These insights come from real-world implementations, governance discussions, and the collective wisdom of institutions navigating increasingly complex data landscapes.
Our client base naturally segments into two distinct groups, each with fundamentally different approaches to platform configuration.
The first group comes to us with a simple but powerful question: "What is industry best practice?" They understand that our flexibility as a platform provider gives us unique visibility across dozens of operations, and they want to leverage that collective intelligence. During monthly governance calls, these clients frequently ask whether their configurations mirror those of other clients, and they express relief when they find they're aligned with market practice and their peers: "We thought we were running the same configurations as everyone else." While every deployment has unique elements to cater for, such as specific feeds and internal policies, the goal is to keep the core of the configuration as common as possible.
This mindset reflects a sophisticated understanding of operational risk. These organisations recognize that being unique isn't necessarily being better—it's often being vulnerable. They don't want to be the first to encounter a performance issue or regulatory ‘miss’ because they've wandered off the beaten path. Their worry is legitimate: every bespoke configuration introduces potential failure points that haven't been stress-tested across multiple organisations.
The benefit for these clients extends beyond our expertise. When you're running the same configuration as dozens of other institutions, you're essentially getting additional crowd-sourced quality assurance (for free!). If there's a problem with a standard setup, someone else will have discovered it first.
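To make the idea of a common core concrete, here is a deliberately simplified sketch of configuration layering: a shared core that stays identical across deployments, with a thin client-specific overlay for the genuinely unique elements such as feeds and cut-off times. The keys and values are invented for illustration and do not reflect any real deployment or product schema.

```python
# Illustrative only: hypothetical keys, not an actual platform configuration schema.
from copy import deepcopy

# Core configuration shared by every deployment -- the part kept common.
CORE_CONFIG = {
    "matching": {"tolerance_bps": 0, "auto_close_matched": True},
    "scheduling": {"cutoff_utc": "17:00", "retry_attempts": 3},
    "audit": {"retain_days": 2555},  # roughly seven years of retention
}

# Client-specific overlay: only the genuinely unique elements (feeds, policies).
CLIENT_OVERLAY = {
    "feeds": ["custodian_a_mt950", "internal_gl_extract"],
    "scheduling": {"cutoff_utc": "18:30"},  # later cut-off for this client's region
}

def merge(core: dict, overlay: dict) -> dict:
    """Recursively apply the client overlay on top of the shared core."""
    merged = deepcopy(core)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge(merged[key], value)
        else:
            merged[key] = value
    return merged

if __name__ == "__main__":
    effective = merge(CORE_CONFIG, CLIENT_OVERLAY)
    print(effective["scheduling"])  # {'cutoff_utc': '18:30', 'retry_attempts': 3}
```

The smaller the overlay, the more of the configuration is covered by the crowd-sourced assurance described above.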
At the opposite end of the spectrum are our multinational giants who have the resources & appetite to be industry leaders rather than followers. They dictate requirements, push the boundaries of what's possible, and demand configurations that reflect their unique operational complexity.
While these implementations are more expensive and complex, they're invaluable learning laboratories. These clients challenge us to innovate and often reveal capabilities we didn't know we needed to build. They accept the risks of being first because they believe their competitive advantage lies in doing things differently, and they invest the effort to test their configurations rigorously so that they perform in every edge case, even under extreme load.
The pressure on financial institutions to optimise operations has never been greater. With staff costs under scrutiny and efficiency gains essential for survival, the data we collect across our client base becomes a powerful tool for benchmarking and optimisation.
Consider the intelligence hidden in our operational data: buried within are details on which counterparties consistently miss deadlines, which sources most reliably provide complete and accurate data, and which data sets and processes are most likely to cause reconciliation breaks. This isn't just historical reporting—it's predictive analytics that can reshape how institutions manage their day-to-day operations.
This is another area where SaaS and managed service providers like Gresham can use intelligence from across a wider client base to deliver insights our clients could not achieve on their own. Gresham's Pulse Data service acquires, validates, transforms and delivers hundreds of data feeds to our clients. We can identify trends in that data – for example, warning clients ahead of time when we detect a pattern of errors or delays from a provider. We may not be able to fix the underlying issue, but we can give institutions precious time to reprioritize their operations and adapt their workflows.
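As a simplified illustration of the kind of trend detection involved, the sketch below flags a provider whose recent delivery delays drift well above its own baseline. The records, thresholds and provider names are invented; this is not the logic behind Pulse Data itself.

```python
# Illustrative sketch only: invented delivery records, not the Pulse Data service.
from collections import defaultdict
from statistics import mean, pstdev

# (provider, delay_minutes) for recent feed deliveries, oldest first.
deliveries = [
    ("provider_a", 2), ("provider_a", 3), ("provider_a", 2),
    ("provider_a", 14), ("provider_a", 18),
    ("provider_b", 5), ("provider_b", 4), ("provider_b", 6),
]

def flag_drifting_providers(records, min_history=5, sigma_limit=2.0):
    """Flag providers whose latest delays drift well above their own historical baseline."""
    by_provider = defaultdict(list)
    for provider, delay in records:
        by_provider[provider].append(delay)

    alerts = []
    for provider, delays in by_provider.items():
        if len(delays) < min_history:
            continue  # not enough history to judge a trend
        baseline, recent = delays[:-2], delays[-2:]
        mu, sigma = mean(baseline), pstdev(baseline) or 1.0
        if mean(recent) > mu + sigma_limit * sigma:
            alerts.append((provider, mean(recent), mu))
    return alerts

for provider, recent_avg, baseline_avg in flag_drifting_providers(deliveries):
    print(f"Early warning: {provider} averaging {recent_avg:.0f} min late vs {baseline_avg:.1f} min baseline")
```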
There are actionable insights in the data behind our Control & Prime EDM solutions too. How do your STP rates compare to similar businesses in your sector? When scrubbing data from multiple sources, which provider do most clients prefer on a per-asset basis? Are some users/workflows more efficient at resolving exceptions? This bird's-eye view of market behaviour is impossible to achieve in isolation but becomes incredibly valuable when aggregated across multiple institutions.
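A benchmark comparison of this kind can be as simple as placing a client's figure within an anonymised peer distribution. The sketch below does this for an STP rate; every number is invented for illustration rather than drawn from any actual client data.

```python
# Illustrative sketch only: peer figures are invented, not actual client benchmarks.
from bisect import bisect_left

def percentile_rank(value, peer_values):
    """Where a figure sits within an anonymised peer distribution (0-100)."""
    ordered = sorted(peer_values)
    return 100.0 * bisect_left(ordered, value) / len(ordered)

# Hypothetical anonymised STP rates (%) for peers in the same sector.
peer_stp_rates = [88.5, 90.2, 91.0, 92.4, 93.1, 94.7, 95.3, 96.0]
client_stp_rate = 91.5

rank = percentile_rank(client_stp_rate, peer_stp_rates)
print(f"An STP rate of {client_stp_rate}% sits at roughly the {rank:.0f}th percentile for the sector")
```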
Match Rates: The Foundation of Financial Reconciliation Excellence
Match rates deserve special attention because they're fundamental not only to the integrity of financial operations but also to the operating efficiency of a business. Optimal match rates therefore simplify regulatory compliance, reduce risk, free up operational capital, and release staff who would otherwise be resolving breaks. This is not only good for our clients – it means improved service and increased confidence for their customers.
Poor match rates create a cascade of operational problems. When transactions can’t be reconciled automatically, institutions have to resolve the breaks through expensive manual intervention, which delays settlements and elevates operational risk. Evidencing a robust and reliable reconciliation process is a key factor in demonstrating regulatory compliance, and failures in this area can lead to substantial financial penalties.
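Stripped to its essentials, the mechanism looks like the sketch below: items that agree on their key fields are closed automatically, everything else lands in a manual exceptions queue, and the match rate falls out directly. The records are toy data, not a real reconciliation schema.

```python
# Illustrative only: toy records, not a real reconciliation schema.

# Internal ledger vs. counterparty statement, keyed by trade reference.
ledger = {
    "T001": {"amount": 1_000_000, "ccy": "USD"},
    "T002": {"amount": 250_000, "ccy": "EUR"},
    "T003": {"amount": 75_500, "ccy": "GBP"},
}
statement = {
    "T001": {"amount": 1_000_000, "ccy": "USD"},
    "T002": {"amount": 250_500, "ccy": "EUR"},   # amount differs -> break
    "T004": {"amount": 10_000, "ccy": "USD"},    # missing from ledger -> break
}

matched, breaks = [], []
for ref in ledger.keys() | statement.keys():
    ours, theirs = ledger.get(ref), statement.get(ref)
    if ours is not None and ours == theirs:
        matched.append(ref)   # closed automatically, no human touch
    else:
        breaks.append(ref)    # routed to the manual exceptions queue

match_rate = 100.0 * len(matched) / (len(matched) + len(breaks))
print(f"Match rate: {match_rate:.1f}%  |  Breaks for investigation: {sorted(breaks)}")
```

Every item in the breaks list represents the expensive manual intervention, settlement delay, and elevated risk described above.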
From a business perspective, unmatched transactions create uncertainty in financial reporting, which ties up capital that could be working for the business. In today's environment where every basis point of efficiency matters, institutions operating below optimal match rates are essentially bleeding money through preventable operational friction.
By comparing match rates across our client base, institutions can identify what is possible within their sector and narrow down whether their challenges stem from data quality issues, process inefficiencies, or configuration problems. We've seen clients improve match rates by 15-20% simply by understanding how their performance compares to market standards and implementing proven best practices from high-performing peers.
The data also reveals counterparty-specific patterns. Some institutions consistently deliver cleaner, more reconcilable data, while others require additional scrubbing and validation. Armed with this intelligence, our clients can make informed decisions about counterparty relationships and data sourcing strategies.
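One simple way to surface those patterns is to rank counterparties by how often their items break, as in the sketch below; the counterparty names and figures are invented for illustration.

```python
# Illustrative only: counterparty names and figures are invented.
from collections import Counter

# (counterparty, was_break) outcomes from recent reconciliation runs.
outcomes = [
    ("cpty_alpha", False), ("cpty_alpha", False), ("cpty_alpha", True),
    ("cpty_beta", True), ("cpty_beta", True), ("cpty_beta", False),
    ("cpty_gamma", False), ("cpty_gamma", False), ("cpty_gamma", False),
]

totals, break_counts = Counter(), Counter()
for cpty, was_break in outcomes:
    totals[cpty] += 1
    if was_break:
        break_counts[cpty] += 1

# Rank counterparties by break rate: a rough proxy for the quality of data they deliver.
ranking = sorted(totals, key=lambda c: break_counts[c] / totals[c], reverse=True)
for cpty in ranking:
    rate = 100.0 * break_counts[cpty] / totals[cpty]
    print(f"{cpty}: {rate:.0f}% of items break and need scrubbing or follow-up")
```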
As AI adoption accelerates across financial services, we're witnessing a critical gap between what artificial intelligence can process and what human intelligence understands about data patterns and operational context.
The ability of any system, human or AI, is limited by the sum of its learning. Even where such systems extrapolate and innovate, their actions and expectations are grounded in prior knowledge until new experiences condition their future behaviour.
AI systems excel at processing information quickly, but they frequently lack the contextual awareness that experienced operations teams take for granted, and their responses are often generalised. Our human experts inherently know that certain data feeds typically arrive late, that specific counterparties have recurring issues around month-end, or that increased caution is required during volatile market periods.
This intelligence exists as institutional knowledge—patterns learned through experience that aren't documented in any system specification. When organizations implement AI without capturing this human intelligence, they often see impressive initial results that deteriorate as the AI encounters real-world edge cases and exceptional circumstances that weren't anticipated in the training data or properly captured as context in the AI prompt.
The most successful implementations combine AI's processing power and pattern recognition with human situational awareness and experience. Our teams can identify when AI outputs seem inconsistent with expected patterns, catch data quality issues before they propagate through automated processes, and provide the contextual intelligence that keeps automation systems reliable.
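In practice, that human oversight can be wired into the workflow as a simple guardrail: accept an AI-produced figure only when it is consistent with the pattern experienced operators would expect, and escalate anything else for review. The sketch below is a hypothetical illustration rather than a description of any specific product feature.

```python
# Illustrative only: a hypothetical sanity check, not a specific product feature.
from statistics import mean, pstdev

def route_ai_output(ai_value, historical_values, sigma_limit=3.0):
    """Accept an AI-produced figure only if it is consistent with the historical
    pattern; otherwise escalate it to an experienced operator for review."""
    mu = mean(historical_values)
    sigma = pstdev(historical_values) or 1.0
    if abs(ai_value - mu) > sigma_limit * sigma:
        return "escalate_to_human"  # out of pattern: a person applies context before it propagates
    return "auto_accept"

# e.g. month-end fee amounts the team has seen before vs. a value an AI extraction just produced.
history = [1520.0, 1495.0, 1510.0, 1530.0, 1505.0]
print(route_ai_output(1512.0, history))   # auto_accept
print(route_ai_output(15120.0, history))  # escalate_to_human (likely a misplaced decimal)
```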
This hybrid approach becomes even more valuable when scaled across multiple clients. We can identify AI failure patterns across different implementations, share learnings about effective human-AI collaboration, and help institutions avoid common pitfalls in their automation journeys.
The future of financial data operations lies not in isolated optimization but in collective intelligence. Institutions that recognize the value of shared learning, standardized best practices, and collaborative problem-solving will have significant advantages over those that insist on reinventing every wheel.
Our role as a platform provider gives us unique visibility into what works, what doesn't, and what's emerging as the next generation of best practices. By participating in this collective intelligence—whether as a best practice seeker or a trailblazing innovator—institutions can accelerate their operational evolution while managing the risks inherent in financial data processing.
The organizations that thrive in this environment will be those that balance the confidence to innovate with the wisdom to learn from others' experiences. They'll leverage AI while respecting the irreplaceable value of human intelligence, and they'll measure their success not just in isolation but against the broader market's evolving standards of excellence.
Achieving excellence in financial data automation requires mastering five critical areas: strategic platform configurations that balance innovation with proven practices, intelligent data operability that transforms information into actionable insights, optimal match rates that ensure reconciliation integrity, effective integration of human intelligence with AI systems, and active participation in collective industry learning.
The most successful financial institutions recognize that operational excellence isn't achieved in isolation—it emerges from the intelligent application of shared knowledge, proven best practices, and the courage to innovate where it matters most. By focusing on these five pillars, organizations can build data operations that are not only efficient and compliant but also positioned to adapt and excel as the industry continues to evolve.