
Professor Michael Mainelli, Executive Chairman, The Z/Yen Group

[An edited version of this article first appeared as "Derivative Processing Counts", Journal of Risk Finance, The Michael Mainelli Column, Volume 9, Number 1, Emerald Group Publishing Limited (January 2008), pages 92-95.]

Present Processing Imperfect

The global ‘credit crunch’ of 2007 has been widely reported and discussed.  Something less discussed is a ‘dog that didn’t bark’ story – the absence of credit market operational failure.  Operational failures accompany many financial crises; for instance, Barings, Daiwa, Enron, Long-Term Capital Management and Kidder Peabody all came to light during periods of wider financial market problems.  Market stresses tend to crack operational nuts.  So where is the accompanying credit market operational disaster?

At first glance, despite some worries, and though it has been neither smooth nor easy, things appear to be under control.  During the first half of the decade, unprecedented growth of the credit markets led to concerns about the security of settlement, particularly a growing backlog of “unprocessed” credit default swap (CDS) contracts.  Credit market backlog is typically measured by the number of outstanding confirmations exceeding 30 days.  During 2005, outstanding CDS confirmations exceeding 30 days were nearly 100,000, while new deals ran at just under 150,000 per month.  In 2005 the Federal Reserve obtained commitments from 14 major dealers to upgrade their systems and reduce the backlog.  In January 2006 the dealers claimed that they had met their commitment, achieving a 54% reduction in outstanding confirmations exceeding 30 days.  From January 2006 to February 2007 the dealers did even better: confirmations exceeding 30 days fell to under 10,000 in autumn/winter 2006/2007, while monthly trading volumes exceeded 150,000.

But in 2007 volumes rose as concerns over credit markets led to increased trading, from around 150,000 deals per month to over 400,000.  As volumes rose, so did the backlog, to just under 30,000 in July 2007.  One can note that the backlog is fairly steady as a proportion of monthly trading, at around 7%.  Perhaps this is not so bad, as the derivatives processing factories are handling a significant, and unexpected, volume spike in a linear way.  However, 30,000 outstanding confirmations over 30 days can hide a lot of surprises.

Looking to the future, we can expect volumes to increase even further while average deal size and fees decline.  Volumes will rise as global markets incorporate the growing economies of China, India, Russia and Brazil.  New products in commodities, carbon, weather, insurance and betting will increase derivative product ranges and complexity.  The front office has really only just started to explore the frontiers of innovation, while the back office remains Dickensian.  The cost of processing derivatives will rise as a proportion of profit, and margins will decline.  Of course, if the industry can’t sort out credit derivative processing, then regulatory interest will grow beyond the role of credit rating agencies in the recent credit crunch, and beyond the threats to Basel 2 as regulators ponder the now-less-attractive relationships among ratings, models and capital requirements, to processing itself.  One can expect regulators to set more demanding leverage, margin and settlement requirements.  Something has to give.

What We Seem To Have Here Is A Failure To Learn

Further, the increases in trade volume and changes in cost-per-trade since 2004 call the scalability of OTC derivatives processing into question.  Manual confirmation processing accounts for 27% of cost-per-trade in interest rate derivatives, 24% in equity derivatives and 19% in credit derivatives.  Only 10% of the cost-per-trade is spent “adding value”, i.e. helping customers with valuation, collateral management or other relationship matters.  While average derivatives trade volume per investment bank has trebled, from roughly 30,000 trades per year to 90,000, the average cost-per-trade has only moved from $250 to around $200.  This implies little scalability: in a highly efficient operation the cost-per-trade should have tumbled to around $90, but something is impeding it.  Or people aren’t learning.
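One rough way to read those figures, assuming cost-per-trade follows a power law in volume (an assumption for illustration, not a claim from the data), is to back out the implied scaling exponent from the $250, $200 and $90 figures quoted above:

```python
import math

def scaling_exponent(cost_before, cost_after, volume_ratio):
    """Implied exponent b in cost-per-trade ~ volume**(-b)."""
    return math.log(cost_before / cost_after) / math.log(volume_ratio)

# Volume trebled; cost-per-trade fell from $250 to ~$200.
observed = scaling_exponent(250, 200, 3)    # ~0.20: weak economies of scale
# A 'highly efficient' fall to ~$90 would have implied near-linear scaling.
efficient = scaling_exponent(250, 90, 3)    # ~0.93
```

The gap between the two exponents is one way of quantifying the "failure to learn": actual unit costs are falling at roughly a fifth of the rate an efficient operation would suggest.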

The learning curve in derivatives processing is steep, and major players are not climbing it fast enough.  A good rule of thumb is that you shouldn’t trade what you can’t settle.  ‘Buy-side’ client satisfaction is only ‘satisfactory’ to ‘good’, virtually never excellent.  Z/Yen Group calculates an OpRisk Safety Estimator: the R-squared value of the ‘economy of scale’ curve, a logarithmic line of best fit for cost-per-trade versus transaction volume – basically, “how tightly do industry participants fit an economy of scale curve for a product?”  The Estimator is derived from operations costs only, representing the headcount costs for the core trade processing lifecycle, which generally (each product has obvious differences) includes pre-settlement/matching/static data, confirmations processing, settlement, customer relationship management, and management & administration.  The OpRisk Safety Estimator excludes IT costs, as the lifecycle apportionment would distort the figures.  The table below sets out the 2004 and 2005 (the last year with results) figures, where 1.00 is very safe and 0.00 indicates high OpRisk.


OpRisk Safety Estimators for Selected Products, 2004 and 2005
(1 = safe, 0 = risky)

Product                         2004 Estimator    2005 Estimator
European Cash Equities
European Repo
European Bonds
US Cash Equities
Global FX
Global Currency Options
European Stock Lending
Global Credit Derivatives
Global IR Derivatives
Global Equity Derivatives

[Source: Z/Yen Group Limited]
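The Estimator as described, the R-squared of a logarithmic cost-versus-volume best fit, can be sketched in a few lines.  The sample volumes and costs below are invented for illustration, not Z/Yen figures:

```python
import math

def oprisk_safety_estimator(volumes, costs_per_trade):
    """R-squared of a logarithmic best fit: cost = a + b*ln(volume)."""
    x = [math.log(v) for v in volumes]
    n = len(x)
    mx = sum(x) / n
    my = sum(costs_per_trade) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, costs_per_trade))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, costs_per_trade))
    ss_tot = sum((yi - my) ** 2 for yi in costs_per_trade)
    return 1 - ss_res / ss_tot

# Invented example: participants whose cost-per-trade falls smoothly with
# volume fit the curve tightly, giving an estimator near 1 ('safe').
volumes = [10_000, 30_000, 60_000, 90_000]
costs = [260, 220, 195, 180]
r2 = oprisk_safety_estimator(volumes, costs)
```

A product where participants' costs scatter widely at similar volumes would score nearer 0, indicating high operational risk.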

It is worrying to see that while equities and FX products are improving, in some cases dramatically, all derivatives products have been getting worse.  Yes, volumes have increased, but over the same period US cash equity volumes rocketed and simultaneously posted a 23% improvement to their OpRisk Safety Estimator.  One might expect significant improvement to derivative OpRisk figures given all the 2006 and 2007 activity, but one can’t be complacent.  While there have clearly been significant improvements in derivatives processing, other traded markets have improved more.

Some participants look to better management via techniques such as Six Sigma programmes (3.4 defects per million operations).  Comparable processing industries, such as mobile telephony, airlines or credit card processing, approach Five Sigma confidence levels (around 233 defects per million operations).  Derivatives processing rarely gets close to Three Sigma (66,800 defects per million).  Improvement requires fundamental change, not slightly better management.
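Those sigma levels convert to defect rates via the conventional Six Sigma 1.5-sigma long-term shift; a minimal sketch of the conversion, using only the normal distribution:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given sigma level,
    using the conventional 1.5-sigma long-term process shift."""
    z = sigma_level - shift
    defect_probability = 0.5 * math.erfc(z / math.sqrt(2))  # 1 - Phi(z)
    return defect_probability * 1_000_000

# Six Sigma  -> ~3.4 DPMO
# Five Sigma -> ~233 DPMO
# Three Sigma -> ~66,807 DPMO
```

The three orders of magnitude between Three Sigma and Six Sigma are why "slightly better management" cannot close the gap.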

Bad, Bad Data

But why is there a backlog at all?  Sure, OTC derivatives are difficult to confirm and settle; there are no exchanges; special terms and conditions proliferate.  But the fundamental cause of most confirmation problems is bad data.  ‘Bad data’ is inaccurate, incomplete, late or inaccessible data that causes matching or processing problems.  A common problem is an inaccurate counterparty entry, e.g. XYZ Asset Management Bermuda instead of XYZ Asset Management Bahamas.  Bad data also encompasses redundant data that conflicts with other data fields.  And things are getting worse.  Data for fee calculations is now part of settling some products.  Fee computation data can raise timing issues (do we both agree on which LIBOR moment applies to the fee?) and model issues (do we calculate moving averages in the same way?).  These computations lead to more confirmation and settlement problems.
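A minimal sketch of catching the 'Bermuda versus Bahamas' class of error, using simple string similarity against counterparty master data.  The entity names and the 0.8 threshold are invented for illustration:

```python
import difflib

# Invented master-data records, for illustration only.
MASTER = [
    "XYZ Asset Management Bermuda",
    "XYZ Asset Management Bahamas",
    "ABC Capital Partners LLP",
]

def suspicious_matches(entry, master=MASTER, threshold=0.8):
    """Return master entries that are close to, but not identical to,
    the entered counterparty name - candidates for a 'wrong entity' slip."""
    return [m for m in master
            if m != entry
            and difflib.SequenceMatcher(None, entry.lower(), m.lower()).ratio() >= threshold]

# A confirmation naming 'XYZ Asset Management Bermuda' would be queried:
# it is one word away from a different legal entity in the master file.
flags = suspicious_matches("XYZ Asset Management Bermuda")
```

In practice such a check would sit at deal entry, forcing the operator to confirm which of the near-identical legal entities was actually meant.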

Bad data leads to significant costs.  There are the direct costs of mistakes and interest.  Clients are dissatisfied with service.  Operational risks are higher than they could be, in capital charges as well as losses.  Venue and execution decisions are suboptimal.  Poor investment decisions are made.  Liquidity is lower than it could be.  Finally, concerns over processing lead to more regulatory oversight and more cost.  Moreover, there is an enormous opportunity cost: markets are smaller than they might otherwise be.

What can be done about bad data?  A common recommendation for most process problems is ‘simplify, automate, integrate’.  The focus on “master confirmation agreements” has helped to simplify things by reducing the number of items requiring matching and agreement.  Automation has helped, but significant pockets of chaos remain, and automating chaos is not a solution.  One significant pocket is simple counterparty identification: the static master data on counterparties needs a ‘single ID’.  Some suggestions include voice identification to confirm the counterparty, or matched ID pairs for deals in addition to separate entry of each counterparty.  Integration is helped by facilities such as DTCC’s Trade Information Warehouse, but there is a limit to integration in a peer-to-peer market without an exchange.  And OTC is, by definition, hard to turn into an exchange.
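The 'matched ID pairs' suggestion can be sketched simply: a deal confirms only when each counterparty independently submits a mirror-image pair of IDs.  The IDs and deal references below are invented for illustration:

```python
# deal_ref -> set of (submitter_id, counterparty_id) submissions
submissions = {}

def submit(deal_ref, self_id, other_id):
    """Each side independently registers who it thinks the deal is between."""
    submissions.setdefault(deal_ref, set()).add((self_id, other_id))

def is_confirmed(deal_ref):
    """Confirmed only when some submission (a, b) has its mirror (b, a)."""
    pairs = submissions.get(deal_ref, set())
    return any((b, a) in pairs for (a, b) in pairs)

submit("CDS-001", "BANK-A", "FUND-X")
submit("CDS-001", "FUND-X", "BANK-A")   # mirror entry: confirms
submit("CDS-002", "BANK-A", "FUND-X")
submit("CDS-002", "FUND-Y", "BANK-A")   # mismatch: stays unconfirmed
```

The point of the pairing is that a one-sided entry error (the Bermuda/Bahamas slip again) can never silently confirm; it surfaces immediately as a missing mirror.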

Derivatives Processing Superhighway

Two routes to improvement stand out – more informed use of technology, and client involvement in reducing client input errors.  There has been a lot of talk about new technology in derivatives processing – componentization, low latency, data management and customer-centric systems.  Less common is talk of dynamic anomaly & pattern response – systems for eliminating data entry errors and spotting anomalies before they cause problems.  There is a crying need for adaptive, evolving computer systems that can handle situations too complicated for rules, such as processing partially complete derivatives.  If derivative markets are to grow as many observers expect, if dark pools are to provide many more trading opportunities, and if algorithmic trading continues to grow, then back office systems will need to become vastly more sophisticated or they will prevent market growth.  Back offices need to adopt front office techniques, moving from top-down, rule-based systems where humans spend too much time processing exceptions, to dynamic and adaptive systems where humans process fewer exceptions because machines make some sensible decisions.
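A minimal illustration of the anomaly-spotting idea, assuming nothing about any particular vendor system: flag a trade field that deviates sharply from recent history before it enters confirmation, rather than after a match fails downstream.  The notionals and threshold are invented:

```python
import statistics

def is_anomalous(value, history, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations
    from the recent history of the same field."""
    mean = statistics.fmean(history)
    sd = statistics.stdev(history)
    return abs(value - mean) > z_threshold * sd

# Recent CDS notionals clustered around $10m (invented figures).
recent_notionals = [10e6, 12e6, 9e6, 11e6, 10.5e6, 9.5e6]

# A fat-finger $1bn where $10m was meant is caught at entry;
# an ordinary $10.2m trade passes straight through.
```

An adaptive system would go further, updating the history per counterparty and per product so the 'normal pattern' it checks against evolves with the market.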

Clients are the source of most bad data problems, as well as the victims.  Client selection is already apparent – buy-side clients that produce processing problems are less welcome, or are given disadvantageous rates.  Clients could also be given better data entry tools that reduce errors.  In addition to more informed use of technology and client involvement, one can expect to see some market solutions.  One creative market solution could be data ‘insurance’: some party provides a central data registry for a fee, but the fee includes indemnity and fines when the registry is in error.  Another might be structuring fees against successful settlement.  Yet another might be the sell-side moving into full service for the buy-side, taking on some of the operational risk and cost.  MiFID and RegNMS already indicate that the sell-side might provide more compliance and routing services to buy-side clients; why not clearer responsibility for settlement risk too?

Janet Wynn of DTCC points out that the financial services industry has built some great derivative processing highways but left many problems in the on-ramps and off-ramps.  Perhaps there need to be better standards for licensing firms and drivers of derivatives, audited processing standards such as those used for connecting to the credit card networks.  From bad credit rating data to bad deal entry data, the industry hurts.  It’s in the industry’s own interest to fix derivatives processing.  Bad data costs.

Further Reading

Michael Mainelli, "Toward a Prime Metric: Operational Risk Measurement and Activity-Based Costing", Operational Risk (A Special Edition of The RMA Journal), pages 34-40, The Risk Management Association (May 2004).

A good source for statistics on derivatives processing is Markit Metrics, where the industry releases aggregate metrics to the public.


My thanks to my colleague of many years, Jeremy Smith, Head of Z/Yen Limited, now part of McLagan Partners, for providing insight and information.  I would also like to thank the Journal of Financial Transformation and Capco for hosting an evening discussion in Amsterdam on 10 October 2007 with Janet Wynn and Adriaan Hendrikse, hosted by Michael Enthoven, that helped me develop my thinking.

Professor Michael Mainelli, PhD FCCA FSI, originally undertook aerospace and computing research, followed by seven years as a partner in a large international accountancy practice, before a spell as Corporate Development Director of Europe’s largest R&D organisation, the UK’s Defence Evaluation and Research Agency, and becoming a director of Z/Yen.  Michael is Mercers’ School Memorial Professor of Commerce at Gresham College.

Z/Yen operates as a commercial think-tank that asks, solves and acts on strategy, finance, systems, marketing and intelligence projects in a wide variety of fields, such as developing an award-winning risk/reward prediction engine, helping a global charity win a good governance award or benchmarking transaction costs across global investment banks.  Z/Yen’s humorous risk/reward management novel, Clean Business Cuisine: Now and Z/Yen, was published in 2000; it was a Sunday Times Book of the Week; Accountancy Age described it as “surprisingly funny considering it is written by a couple of accountants”.

Z/Yen Group Limited, 5-7 St Helen’s Place, London EC3A 6AU, United Kingdom; tel: +44 (0) 207-562-9562.