
Value-at-Risk: one metric, a plethora of models

On 20 September 2018, Deloitte Luxembourg organized the fifth session of its 2018 Quantitative Finance Master Class series, zooming in on Value-at-Risk.

Value-at-Risk (VaR) has become the most popular measure of risk. The simple definition and interpretation of the metric made it a tool of choice for diverse groups of stakeholders such as risk managers, regulators (cf. Solvency II, Basel III, UCITS, PRIIPs) and board members. The clarity of the concept contrasts with the daunting task of selecting an appropriate model for VaR estimation.

Compiling an exhaustive inventory of existing VaR models would be a tedious and challenging task. An alternative approach to studying and evaluating models is to assess their ability to capture financial stylized facts such as the following (a simple diagnostic sketch is given after the list):

  • Non-normality of returns’ distribution
  • Volatility clustering and asymmetry
  • Jump components in returns
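As an illustration of how the first two stylized facts can be checked in practice, the sketch below computes two crude diagnostics on a hypothetical array of daily returns. The function and variable names are illustrative and not part of any particular risk system.

```python
# Minimal sketch of two stylized-fact diagnostics on a return series.
# `returns` is a hypothetical 1-D numpy array of daily log-returns.
import numpy as np
from scipy import stats

def stylized_fact_diagnostics(returns: np.ndarray) -> dict:
    """Crude checks for non-normality and volatility clustering."""
    # Excess kurtosis well above 0 signals fat tails (non-normality).
    excess_kurtosis = stats.kurtosis(returns, fisher=True)
    # Jarque-Bera tests the joint null of zero skewness and zero excess kurtosis.
    _, jb_pvalue = stats.jarque_bera(returns)
    # First-order autocorrelation of squared returns: a large positive value
    # is a rough indicator of volatility clustering.
    sq = returns ** 2
    acf1_squared = np.corrcoef(sq[:-1], sq[1:])[0, 1]
    return {
        "excess_kurtosis": excess_kurtosis,
        "jarque_bera_pvalue": jb_pvalue,
        "acf1_squared_returns": acf1_squared,
    }
```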

Frequently, a model can be viewed as prioritizing one of these criteria. Analytical models, for instance, are well suited to adapting to changing market conditions. However, this reactivity often comes at the cost of stricter assumptions. These strengths and weaknesses are well illustrated by VaR based on volatility modeling.
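To make the trade-off concrete, here is a minimal sketch of an analytical VaR with an EWMA (RiskMetrics-style) volatility filter under a conditional-normality assumption: the variance estimate reacts quickly to recent returns, but the quantile relies on the normal distribution. The function name, decay factor and zero-mean assumption are illustrative choices, not a reference implementation.

```python
# Minimal sketch of an analytical VaR with EWMA volatility, assuming
# conditionally normal, zero-mean daily returns. `alpha` is the VaR
# confidence level (e.g. 0.99); the decay factor 0.94 is illustrative.
import numpy as np
from scipy.stats import norm

def ewma_normal_var(returns: np.ndarray, alpha: float = 0.99,
                    lam: float = 0.94) -> float:
    """One-day-ahead VaR under a normal assumption with EWMA variance."""
    var_t = np.var(returns)                        # seed the recursion
    for r in returns:
        var_t = lam * var_t + (1.0 - lam) * r ** 2  # EWMA variance update
    sigma_t = np.sqrt(var_t)
    # VaR reported as a positive loss quantile.
    return -norm.ppf(1.0 - alpha) * sigma_t
```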

By contrast, simulation-based models favor a parsimonious approach to assumptions, at the cost of reactivity. The so-called historical simulation models share these characteristics.
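For comparison, a plain historical-simulation VaR is sketched below: it makes no distributional assumption, but because every observation in the window carries equal weight, the estimate reacts slowly to regime changes. The window length and names are again illustrative.

```python
# Minimal sketch of plain historical-simulation VaR: the empirical quantile
# of past returns over a rolling window, with no distributional assumption.
import numpy as np

def historical_var(returns: np.ndarray, alpha: float = 0.99,
                   window: int = 250) -> float:
    """One-day VaR as the empirical (1 - alpha) quantile of the last `window` returns."""
    sample = returns[-window:]
    # Reported as a positive loss figure.
    return -np.quantile(sample, 1.0 - alpha)
```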

More recently, attention has turned to building hybrid (also known as semi-parametric) models that both address market reactivity and relax the main assumptions. For instance, the Filtered Historical Simulation model of Barone-Adesi, Giannopoulos, and Vosper (1999) proposes, in a first step, to model volatility so as to reflect the latest market conditions and, in a second step, to perform historical simulation on the resulting residuals without assuming any distribution.
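The sketch below illustrates the two-step idea under simplifying assumptions (one-day horizon, zero-mean GARCH(1,1) fitted with the arch Python package); it is not the authors' original implementation, and the package choice and parameters are assumptions.

```python
# Minimal sketch of Filtered Historical Simulation (FHS), assuming the `arch`
# package for the GARCH(1,1) volatility filter. Illustrative only.
import numpy as np
from arch import arch_model

def fhs_var(returns: np.ndarray, alpha: float = 0.99) -> float:
    # Step 1: filter returns with a zero-mean GARCH(1,1) to capture the current
    # volatility regime (returns scaled to percent for numerical stability).
    res = arch_model(returns * 100, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
    std_resid = res.resid / res.conditional_volatility   # standardized residuals
    # Step 2: historical simulation on the standardized residuals, rescaled by
    # the one-day-ahead volatility forecast -- no distribution is assumed.
    sigma_next = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
    simulated = std_resid * sigma_next
    return -np.quantile(simulated, 1.0 - alpha) / 100.0   # back to return units
```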

As VaR is a model-based forecast, it is of paramount importance to continuously assess the model's capacity to make accurate predictions. In the context of quantile forecasting, the most natural back-test is to verify whether the empirical rate of exceptions is aligned with the confidence level of the VaR. The so-called Kupiec test provides a statistical framework to determine whether any deviation is significant or merely the result of sample randomness. In addition to frequency, independence tests are now part of the back-testing routines implemented in several management companies.
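The Kupiec test is a likelihood-ratio test of unconditional coverage; a minimal sketch is given below, using the standard chi-squared asymptotics with one degree of freedom. The function name and the guard on degenerate exception rates are illustrative choices.

```python
# Minimal sketch of the Kupiec proportion-of-failures (POF) test: a likelihood-
# ratio test that the observed exception rate matches the VaR coverage level.
import numpy as np
from scipy.stats import chi2

def kupiec_pof_test(exceptions: int, n_obs: int, alpha: float = 0.99) -> float:
    """Return the p-value of the unconditional-coverage LR test."""
    p = 1.0 - alpha                     # expected exception probability
    phat = exceptions / n_obs           # observed exception rate
    phat = min(max(phat, 1e-12), 1.0 - 1e-12)   # guard the log terms
    lr = -2.0 * (
        (n_obs - exceptions) * np.log((1.0 - p) / (1.0 - phat))
        + exceptions * np.log(p / phat)
    )
    # Under the null, LR is asymptotically chi-squared with 1 degree of freedom.
    return 1.0 - chi2.cdf(lr, df=1)
```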

Classical VaR back-testing methodologies evaluate models individually but do not provide a basis for comparison. We have observed an increasing interest in performing statistical horse races across models in order to separate the outperforming models from the underperforming ones. This exercise is of particular interest when a management company studies the possibility of adopting a new risk solution provider.

Modern back-testing techniques have been developed to answer this need. Chen and Gerlach (2007) propose to compare the performance of VaR models through a loss function accounting for both the exception rate and the exceptions' magnitude. The model minimizing the loss function on a given portfolio return data set is 'the best'. Furthermore, to verify that a model generally leads to the smallest loss function and that the result is not merely sample-specific, the distribution of the loss function can be obtained through a numerical procedure called bootstrapping.
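The sketch below illustrates the general idea rather than the exact Chen and Gerlach (2007) specification: an illustrative loss that penalizes the magnitude of exceptions, followed by a simple i.i.d. bootstrap of the mean-loss difference between two candidate models.

```python
# Minimal sketch of a model horse race: an illustrative exception-magnitude
# loss (not the exact Chen-Gerlach specification) and a bootstrap of the
# loss difference between two candidate VaR models.
import numpy as np

def exception_loss(returns: np.ndarray, var_forecasts: np.ndarray) -> np.ndarray:
    """Per-day loss: squared shortfall beyond the VaR on exception days, else 0."""
    losses = -returns                                  # positive numbers are losses
    exceed = np.maximum(losses - var_forecasts, 0.0)   # magnitude beyond the VaR
    return exceed ** 2

def bootstrap_loss_difference(returns, var_a, var_b, n_boot: int = 5000,
                              seed: int = 0) -> np.ndarray:
    """Bootstrap distribution of mean-loss(A) - mean-loss(B); mass below 0 favors A."""
    rng = np.random.default_rng(seed)
    diff = exception_loss(returns, var_a) - exception_loss(returns, var_b)
    n = len(diff)
    idx = rng.integers(0, n, size=(n_boot, n))   # resample days with replacement
    return diff[idx].mean(axis=1)
```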

We believe that the latest developments in the field of VaR forecasting and back-testing create very valuable opportunities for our industry.

To learn more about VaR, check out the Deloitte Luxembourg Quantitative Master Classes. Information on the 2019 season is available now.

Please refer to our dedicated page for more information on value-at-risk model validation.
