It’s over a decade and a half since the publication of the original OCC 2000-16 Risk Bulletin on Model Validation, and half a decade since the issue of the much expanded supervisory guidance on model risk management jointly issued by the OCC and the Federal Reserve.
Along with Basel II and the Dodd-Frank era stress-testing regime (CCAR), one of the effects of this growing body of regulation and supervisory guidance has been to greatly increase the degree of responsibility – and scrutiny – placed on banks’ independent model risk functions.
But as our collaborator and financial crime expert Keith Furst recently discussed at an executive dinner event, financial institutions are still a long way from solving their model risk management challenges. We continue to see multi-billion dollar fines being levied for infractions which stem from inadequate risk management systems and processes. What is going wrong?
A Manual Process
Risk management begins with understanding the type of data being handled, and how it moves through systems. Data and metadata management has long been a core challenge for financial institutions, in part due to the continuing use of manual tools such as spreadsheets to manage metadata and track business critical processes such as reconciliation.
This lack of automation makes scaling effectively impossible: when metadata management demands so much manual effort, human error is a constant risk, and it is extremely unlikely that all data will be tracked in a consistent and reproducible way.
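To make the contrast concrete, even a very small amount of automation can replace a manually maintained reconciliation spreadsheet with a repeatable, auditable check. The sketch below is purely illustrative – the record layout, field names (`trade_id`, `notional`) and systems involved are hypothetical – but it shows the general idea: compare record counts and an order-independent content fingerprint between a source extract and a target load, and emit a structured result that can be logged rather than typed into a spreadsheet.

```python
import hashlib

def fingerprint(rows, key_fields):
    """Order-independent fingerprint of the key fields of a record set."""
    digests = sorted(
        hashlib.sha256("|".join(str(r[f]) for f in key_fields).encode()).hexdigest()
        for r in rows
    )
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def reconcile(source_rows, target_rows, key_fields):
    """Compare record counts and content between two systems.

    Returns a structured result suitable for automated logging and audit,
    instead of a manually maintained spreadsheet entry.
    """
    return {
        "source_count": len(source_rows),
        "target_count": len(target_rows),
        "counts_match": len(source_rows) == len(target_rows),
        "content_match": fingerprint(source_rows, key_fields)
                         == fingerprint(target_rows, key_fields),
    }

# Hypothetical trade records moving from a booking system to a risk system.
source = [{"trade_id": 1, "notional": 100.0}, {"trade_id": 2, "notional": 250.0}]
target = [{"trade_id": 2, "notional": 250.0}, {"trade_id": 1, "notional": 100.0}]

result = reconcile(source, target, key_fields=("trade_id", "notional"))
print(result["counts_match"], result["content_match"])  # → True True
```

A check like this can run on every data movement, which is precisely what a spreadsheet-based process cannot do consistently.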
It’s Not “one and done”
Unfortunately, Model Risk Validation is often simply seen as the task of backtesting completed systems rather than an embedded, iterative process which begins on the architectural drawing board and endures for the whole lifetime of any given system.
We often witness systems where the only form of ongoing Model Risk Validation is a once-a-year “tuning” – clearly woefully inadequate in an increasingly real-time world. Machine learning and artificial intelligence are evolving at a rapid rate, and the risk management domain offers an excellent use case (substantial training data, the need for evolving algorithms, and a second line of human validation to provide feedback), yet the application of these technologies within financial risk management is still extremely limited.
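One concrete alternative to once-a-year tuning is continuous distribution monitoring. The Population Stability Index (PSI) is a widely used statistic for detecting drift between a model’s development sample and its live population; the sketch below, in plain Python with illustrative data, shows how such a check could run on every scoring batch. The binning, score values and the conventional 0.25 “significant shift” threshold are all illustrative assumptions, not a prescription.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference distribution (model development sample)
    and the current population; a common ongoing-monitoring statistic."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            idx = sum(v > e for e in edges)  # which bucket v falls into
            counts[idx] += 1
        total = len(values)
        # Floor at a tiny share to avoid log(0) for empty buckets.
        return [max(c / total, 1e-6) for c in counts]

    e_shares, a_shares = bucket_shares(expected), bucket_shares(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_shares, a_shares))

# Hypothetical scores: development sample vs. this month's production scores.
dev_scores  = [i / 100 for i in range(100)]                   # uniform on [0, 1)
prod_scores = [min(0.99, i / 100 + 0.2) for i in range(100)]  # shifted upward

psi = population_stability_index(dev_scores, prod_scores)
print(psi > 0.25)  # → True: flags a significant population shift
```

Run continuously, a metric like this turns validation from an annual event into an embedded process: a breached threshold can trigger review long before the next scheduled tuning cycle.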
One Man, Two Governors
One of the most difficult aspects of implementing an effective data governance program is that although development teams must ultimately implement the required business logic, data governance is not an initiative which can - or should - be driven by developers alone. And the very fact that responsibility for model risk validation spans multiple independent teams creates problems.
Analysts must translate external regulation and business initiatives into business logic for developers to implement. However, most financial institutions are unable to support robust model validation efforts because of a skills gap within their validation teams. Candidates with the ideal skillset - those who can understand regulatory logic and also analyze the resulting code in order to effectively challenge implemented models - are extremely rare.
And so thinly staffed validation teams responsible for too many complex models, combined with a lack of tools offering an interface suited to both technical AND business-oriented users, mean that there is often a deep chasm between teams which is difficult to bridge.
The Bottom Line
Furthermore, in recent years, Model Risk Validation has been somewhat repositioned from being part of risk management to sitting under compliance. This is understandable – compliance is where the regulatory pressure lands – but it unfortunately means that the entire function is often considered a net cost to the business rather than a potential source of competitive advantage. And while regulatory requirements, rather than business value, remain the key driving force for model risk validation initiatives, progress is likely to remain slow.
Model Risk Validation is not simply about addressing the relevant regulatory requirements at the design stage. It is a complex, iterative process which impacts the shape of teams and their development processes. Until financial services businesses are able – and willing – to address the problems which touch upon skillsets, technology, and problem ownership, we will continue to see costly mistakes being made, and missed opportunities for improvement.