In its near decade of life, the Consolidated Audit Trail (CAT) has shaken out as something like fintech's version of Brexit—except it's lasted even longer. Born of a legitimate question—in this case, the post-crisis (and then post-2010 Flash Crash) requirement for US regulators to assess market risk in equities and options trading far closer to real time—the CAT's development has proven an expensive and convoluted process, one that is seemingly interminable, sometimes with a lack of clarity bordering on maddening, and a favorite subject of intrigue among the industry press. Aside from the Big Four consulting giants, perhaps the CAT's biggest beneficiaries so far have been punny headline writers.
And yet, like Brexit, that journey finally appears to be nearing its end—or at least the end of the beginning—as technical specifications were finalized earlier this year. It appears the current deadlines will no longer be postponed, with the second-phase majority of broker-dealers and industry participants coming into scope by November 2019, and simple options reporting starting in 2020. The CAT is becoming a reality.
The question is: what now? The reality is that the snail’s pace and mounting delays around the process have benefited many; now that it’s done and dusted, they must react. Some will be wondering if, in the interregnum, they have done enough at an enterprise level to cope. Others may be asking how to even start.
A Changed Mentality
Many of the challenges have been exposed throughout the construction of the CAT processor and the comment periods leading up to the technical spec—but in fact, from a Gresham perspective, we would argue it goes back even further than that. On inspection, the initial CAT requirements are essentially FINRA's predecessor system, the Order Audit Trail System (OATS), on steroids, and lessons from OATS and from processing Electronic Blue Sheets (EBS) are crucial to coping with CAT's first phase effectively.
Client implementations in recent years focusing on these areas already highlight the slow, manual, and clunky nature of the data sourcing, integration, and compilation that hampers internal efforts around trade capture. Some of the issues, in other words, aren't new—they're just getting worse. CAT's expansion of those requirements to include order events (not just executions), enrichment with customer personally identifiable information (PII), metadata production, and of course an entire new cadre of options markets only makes the lift heavier. A band-aid mentality is clearly insufficient.
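To make the enrichment burden concrete, here is a minimal sketch of the kind of join that becomes mandatory once reporting extends beyond executions to order events tied to customer identities. The field names (`accountID`, `orderID`) and the shape of the customer master are illustrative assumptions, not taken from the CAT specification.

```python
# Hypothetical sketch: attach customer reference data to each order event.
# Events whose account cannot be resolved go to an exception queue rather
# than flowing silently into the reporting pipeline.

def enrich_orders(order_events, customer_master):
    """Return (enriched events, events missing customer data)."""
    enriched, missing = [], []
    for event in order_events:
        customer = customer_master.get(event.get("accountID"))
        if customer is None:
            missing.append(event)  # route to exception handling
        else:
            enriched.append({**event, "customer": customer})
    return enriched, missing
```

Even a toy version like this makes the operational point: every order event now needs a successful lookup against reference data that, at many firms, lives in a separate and often stale system.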
To their credit, some tier-one banks' technology teams have recognized the opportunity CAT presents to initiate an enterprise-level data infrastructure refresh—for instance, by building or reengineering trade repositories that incorporate trade and client information together—in an effort to unwind some of the process complexity involved, and to reduce the long-term cost of managing CAT. Others are retrofitting their OATS data estate. But for most industry participants, that internal work is ongoing (or just starting) rather than completed and operational. And in any case, there is little avoiding the extensive vendor partnerships required to achieve successful CAT data gathering and reporting. So how can firms develop those partnerships the right way, and on deadline?
Certainly, the first mistake some have made is to look for a comprehensive "holy grail" solution that is also best-of-breed and can be implemented quickly. One doesn't exist—and that shouldn't surprise anyone, given the disparate systems, the layers of new transformation logic and normalization processes required, and the data validation and controls in play with the CAT. Instead, firms looking at tactical implementation ahead of the November deadline should seek expertise in each of these pockets of the process; the priority is not only time-to-market and data accuracy, but systems flexibility and integration within the larger whole. They'll never achieve CAT's strategic objectives—improved data quality and deeper order and transaction analysis—without that foundation.
The other natural trouble comes from placing outsized emphasis on the end product, the reporting package, and not enough on what comes before. We've seen, both in the OATS context and in other trade-reporting compliance work, that platform providers—for example, order management system (OMS) vendors—will toss in report production as an add-on. Clients find out much later (or worse, under audit) that their assumptions about accuracy were wrong. The obvious fix is finding a better reporting provider; however, much of the spirit of CAT (and, for that matter, of global regulatory initiatives like BCBS 239) is about looking underneath, with improved dexterity around data governance and controls.
No matter the reporting engine provider, faulty logic further upstream will occasionally timestamp a batch of trades to the wrong day, or output data in an incorrect format. The idea is to have pre-validation controls in place to catch those aberrations—well before the data is a step away from the regulator—without added processing time. That matching element isn’t about populating reporting fields, or ticking a box; it’s about data integrity.
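A pre-validation control of the kind described above can be sketched simply: before a batch is handed to any reporting engine, every record is checked for a parseable timestamp, a timestamp that actually falls on the batch's trade date, and required fields. The field names (`eventTimestamp`, `orderID`) and the timestamp format are assumptions for illustration, not drawn from the CAT technical specification.

```python
from datetime import datetime, date

# Hypothetical wire format for event timestamps (illustrative only).
TS_FORMAT = "%Y%m%dT%H%M%S.%f"

def validate_record(record: dict, trade_date: date) -> list:
    """Return a list of human-readable errors for one order-event record."""
    errors = []
    ts_raw = record.get("eventTimestamp")
    try:
        ts = datetime.strptime(ts_raw, TS_FORMAT)
    except (TypeError, ValueError):
        errors.append(f"malformed timestamp: {ts_raw!r}")
    else:
        if ts.date() != trade_date:
            errors.append(f"timestamp dated {ts.date()}, expected {trade_date}")
    if not record.get("orderID"):
        errors.append("missing orderID")
    return errors

def prevalidate_batch(records, trade_date):
    """Split a batch into clean records and (record, errors) exceptions
    before anything reaches the reporting engine."""
    clean, exceptions = [], []
    for rec in records:
        errs = validate_record(rec, trade_date)
        if errs:
            exceptions.append((rec, errs))
        else:
            clean.append(rec)
    return clean, exceptions
```

The design point is that exceptions are caught and routed internally, with the offending record and the reason attached, rather than discovered by the regulator after submission.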
Why It Matters
Given the circus around the CAT to date, it's hard to imagine the drama has ended. Even firms that have been out in front of the project will, without doubt, suffer some early pain in the coming year. But now that certainty is growing, it is important not to lose sight of why all of this matters. In a market as fragmented as US equities, being able to connect the trade dots under pressure can mean the difference between a circuit-breaker pause and a crash; between a blip and a total loss of liquidity; between fair and unfair outcomes as things are unwound after a market disruption.
It will be tough, and it will take patience and investment—but achieving these important ends starts with industry participants digging deep into their trade data infrastructure, taking an honest approach, and optimizing each leg of the CAT compliance process. Catastrophe averted.
Jan's original post can be found on LinkedIn.