
Agile Risk Assessment Depends on Ad Hoc Analysis

Written on: March 23, 2011

Minder Cheng, Investment Technology Group; Thomas Chippas, Barclays Capital; and Barry Zane, ParAccel, at Structure Big Data 2011

In a panel moderated by Mohr Davidow Ventures general partner Jim Smith at Structure Big Data on Wednesday, three panelists described the complexities of dealing with the huge amount of data that comes with assessing financial risk in banking.

Thomas Chippas, managing director at Barclays Capital, notes that there is no standard definition of what risk means in the financial services industry. For a true picture, you have to query data in real time across institutions, spanning terabytes of your own data as well as data held in other institutions' databases, to assess the full risk. Had a federated group of datastores existed back in October 2008, the risks might have been more apparent. That type of federated data access still doesn't exist today; what needs to come next is a way to put that data together in a useful way.

The necessary analytics task is daunting in its complexity, because there are multiple correct answers and no uniform standard for structuring the data. At least 40 different schemas are considered acceptable for representing trading information.

Zane said that the development of new technology has been driven by a combination of economics, regulation, and businesses looking to get ahead. The industry is now at the point of needing same-day analysis: the volume of data keeps growing while the timeline keeps shrinking. Companies want to get to same-minute analysis, as close to trading activity as possible, which requires massively parallel computing.

Currently, the financial industry has a great deal of historical data, but with social media comes the addition of instant news, telling traders how to shift things around to be profitable — or even survive.

Most risk analysis involves continuously running models to assess risk, as well as ad hoc querying whose nature can't be predicted in advance, like the events in Japan: an actual event that goes beyond the models built to assess possible risks.
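To make the "continuously running models" idea concrete, here is a minimal sketch of one common technique, a Monte Carlo value-at-risk estimate. All numbers (position size, volatility) are invented for illustration; no bank's actual model is this simple.

```python
import random

# Minimal sketch (hypothetical figures, not any bank's actual model):
# estimate one-day 99% VaR for a single position by Monte Carlo
# simulation of daily returns.
random.seed(42)

position_value = 1_000_000.0   # assumed portfolio value
daily_vol = 0.02               # assumed daily volatility of returns

# Simulate many possible one-day profit-and-loss outcomes.
pnl = sorted(
    position_value * random.gauss(0.0, daily_vol) for _ in range(100_000)
)

# 99% VaR is the loss exceeded in only 1% of simulated scenarios.
var_99 = -pnl[int(0.01 * len(pnl))]
print(round(var_99))  # roughly 2.33 * daily_vol * position_value, ~46,500
```

The panel's point is that a model like this only covers scenarios someone thought to simulate; a real-world event such as the Japan earthquake falls outside the simulated distribution entirely, which is why ad hoc querying matters alongside the standing models.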

Minder Cheng, a board member of Investment Technology Group, notes that in the wake of the events of 2008, TARP had to be introduced because no one was going to support another $750 billion bailout. If there had been a way to look at the exposure across all the banks and see what could happen to the market, alarms might have gone off, alerting them that billions were at risk.

This type of analysis can be both offensive and defensive, but risk management is never purely about playing defense; it's about letting CEOs make decisions based on risk-adjusted returns. Every investment decision is made by evaluating the risk-versus-return trade-off. Risk management is both a data exercise and an analytics exercise.

Every simulation should be saved so that all possible scenarios can be compared to surface the biggest risks. Saving everything from a single day's simulations alone generates double-digit terabytes of data.

Chippas noted that all these terabytes are generated before the trading even takes place. With today's technology, businesses have to make a choice between speed and risk analysis, and everyone has to recognize the limitations and constraints that result.

Zane said that where companies formerly handled things in-house, as data volumes grow and the analytics become more demanding on more data, the transition isn't so much away from traditional structures as away from proprietary solutions. The shift allows more ad hoc analytics and more agility, because analysis is no longer done by hiring programmers but by using standardized languages and APIs.
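A toy illustration of that shift: once trade data sits behind a standard query interface, a new risk question becomes a one-line query rather than a bespoke program. The table name, desks, and notionals below are invented for illustration (using SQLite in-memory only to keep the sketch self-contained).

```python
import sqlite3

# Hypothetical, simplified trade data -- purely illustrative of posing
# ad hoc risk questions in standard SQL rather than writing a new
# program for each question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (desk TEXT, notional REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?)",
    [("rates", 5_000_000.0), ("rates", 2_000_000.0), ("fx", 1_500_000.0)],
)

# An ad hoc question -- "which desks carry the most notional exposure?" --
# expressed as a single standard-SQL query.
rows = conn.execute(
    "SELECT desk, SUM(notional) FROM trades GROUP BY desk ORDER BY 2 DESC"
).fetchall()
print(rows)  # [('rates', 7000000.0), ('fx', 1500000.0)]
```

The same query text runs unchanged against any SQL-speaking store, which is the agility Zane describes: the analyst, not a programming team, iterates on the question.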

Cheng stated that technology is being driven both by regulators (they don’t want to bail out again) as well as banks wanting to maximize risk-adjusted returns.

Chippas added that there isn’t a day that goes by that they aren’t trying to optimize: processes, technology, etc. It’s an ongoing trajectory of trying to provide a broader base of analytics, without more infrastructure if possible, stating, “It never ends. If we can do it faster, cheaper, broader, better than anyone else, that’s the goal.”

Smith closed by asking the panelists whether they would run any of these applications in the cloud. Cheng answered quickly with an unequivocal "No," due to the sensitive nature of the data as well as local laws. China, for instance, wouldn't allow companies to store data in the cloud; regulations require that all data stay in-house.

Chippas closed by adding, “You don’t want to be the guy who had the data in the cloud and lost control of it.”




