July 2007


In their second article on risk management, Noël Amenc and Lionel Martellini look at the diversification benefits of optimal beta management in a core portfolio.

Strategic allocation is the first step in the investment management process. It involves choosing the portfolio’s composition over a long period between the different asset classes, in accordance with the investor’s objectives. Today, asset allocation is tending to play a greater role in the investment management process. This interest can be explained by the results of various empirical studies, which suggest that this step contributes significantly to portfolio performance. Brinson, Hood and Beebower (1986) and Brinson, Singer and Beebower (1991) have shown, for example, that a considerable share (90%) of a portfolio’s performance can be attributed to the initial allocation decision.

Contrary to a common misperception, managing the core portfolio does not necessarily imply passive investment in a commercial index. It consists instead of using state-of-the-art asset allocation techniques to design an optimal benchmark based on investors’ preferences and constraints, in particular liability constraints.

Strategic allocation was formalised by the seminal work of Markowitz (1952), who was the first to quantify the link between the risk and return of a portfolio, thereby introducing modern portfolio theory. Markowitz developed a theory of portfolio choice in an uncertain future based on quantifying the difference between the risk of a portfolio’s assets taken individually and the global portfolio risk. His theory rests on maximising the utility of final wealth for a risk-averse investor who measures risk through the variability of asset returns (volatility).
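The quantification Markowitz introduced can be seen in a toy two-asset example (our illustration; the volatility and correlation figures are hypothetical): the global portfolio risk is lower than the weighted average of the individual risks whenever the assets are imperfectly correlated.

```python
import numpy as np

# Two assets with identical volatility but imperfect correlation
s1, s2, rho = 0.20, 0.20, 0.3      # annual volatilities and correlation (hypothetical)
w1, w2 = 0.5, 0.5                  # equally weighted portfolio

# Global portfolio variance combines individual variances and their covariance
port_var = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * rho * s1 * s2
port_vol = np.sqrt(port_var)       # about 16.1%, versus 20% for each asset alone

# The diversification benefit: portfolio volatility sits below the
# weighted average of individual volatilities when correlation < 1
avg_vol = w1 * s1 + w2 * s2
```

With a correlation of 0.3, the portfolio volatility is roughly 16.1%, below the 20% of either asset held alone, which is precisely the diversification effect the mean-variance framework captures.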
Optimal portfolios, from a rational investor’s point of view, are defined as the portfolios with the lowest level of risk for a given return or, alternatively, the portfolios with the highest return for a given level of risk. These portfolios are said to be efficient in the mean-variance sense. The portfolio selection method developed by Markowitz therefore involves obtaining an optimal portfolio as a function of estimates of the first-order (expected return) and second-order (variance and covariance) moments of the returns of the asset classes under consideration. The quality of these parameter estimates is all the more decisive in that the portfolio optimisation programme has been shown to depend very significantly on the initial conditions: minor discrepancies in the estimated parameters lead to very significant changes in the optimal allocation. This problem is particularly acute in the case of errors in expected return estimation (see Chopra and Ziemba, 1993). As a result of this lack of robustness in Markowitz efficient-frontier analysis, one suggestion has been to focus on the only portfolio for which expected return estimates are not needed, ie, the minimum variance portfolio. The key challenge is then to use a robust methodology for enhanced estimation of the asset return variance-covariance matrix, a problem that has been studied extensively in the literature. In what follows, we provide an overview of the main findings on how to mitigate the sample risk problem in the estimation of the second-order moments and co-moments of the asset return distribution.

Solutions to covariance
Several solutions to the problem of asset return covariance matrix estimation have been suggested in the traditional investment literature. The most common estimator of the return covariance matrix is the sample covariance matrix of historical returns:

S = (1/(T−1)) Σt (Ht − H̄)(Ht − H̄)′
where T is the sample size, Ht is an N×1 vector of hedge fund returns in period t, N is the number of assets in the portfolio, and H̄
is the average of these return vectors. We denote by Sij the (i,j) entry of S. A problem with this estimator is that the covariance matrix typically has too many parameters relative to the available data. If the number of assets in the portfolio is N, there are indeed N(N−1)/2 different covariance terms to be estimated. The problem is particularly acute in the context of alternative investment strategies, even when a limited set of funds or indices is considered, because hedge fund returns, available only on a monthly basis, provide insufficiently frequent data. One possible cure for the curse of dimensionality in covariance matrix estimation is to impose some structure on the covariance matrix to reduce the number of parameters to be estimated. In the case of asset returns, a low-dimensional linear factor structure seems natural and consistent with standard asset pricing theory, as linear multi-factor models can be economically justified through equilibrium arguments (cf. Merton’s Intertemporal Capital Asset Pricing Model (1973)) or arbitrage arguments (cf. Ross’s Arbitrage Pricing Theory (1976)). Therefore, in what follows, we shall focus on K-factor models with uncorrelated residuals. Of course, this leaves two very important questions: how much structure should be imposed? (The fewer the factors, the stronger the structure.) And what factors should be used? There is a standard trade-off between model risk and estimation risk. The following options are available:
  • Impose no structure. This choice involves low specification error and high sampling error, and leads to the use of the sample covariance matrix.
  • Impose some structure. This choice involves high specification error and low sampling error. Several models fall within this category, including the constant correlation approach (Elton and Gruber, 1973), the single factor forecast (Sharpe, 1963) and the multi-factor forecast (eg, Chan, Karceski and Lakonishok, 1999).
  • Impose optimal structure. This choice involves medium specification error and medium sampling error. The optimal trade-off between specification error and sampling error has led to an optimal shrinkage towards the grand mean (Jorion, 1985, 1986), to an optimal shrinkage towards the single-factor model (Ledoit 1999), or to the introduction of portfolio constraints (Jagannathan and Ma 2003).
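To make the trade-off concrete, the following Python sketch (ours, for illustration only; asset returns are simulated, and the market factor is proxied by an equal-weighted average of the assets, an assumption not made in the text) contrasts the unstructured sample covariance matrix with a single-factor estimator in the spirit of Sharpe (1963):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 60, 10                                   # 60 monthly observations, 10 assets
returns = rng.normal(0.01, 0.04, size=(T, N))   # simulated monthly returns

# Option 1: no structure -- the sample covariance matrix
# (N*(N+1)/2 = 55 free parameters for N = 10)
sample_cov = np.cov(returns, rowvar=False)

# Option 2: single-factor structure (Sharpe, 1963).
# We proxy the factor by the equal-weighted cross-sectional mean return.
market = returns.mean(axis=1)
var_m = market.var(ddof=1)
betas = np.array([np.cov(returns[:, i], market)[0, 1] / var_m
                  for i in range(N)])
resid = returns - np.outer(market, betas)
resid_var = resid.var(axis=0, ddof=1)

# Structured estimate: systematic part beta_i * beta_j * var_m, plus
# idiosyncratic variance on the diagonal (only 2N + 1 = 21 parameters)
factor_cov = var_m * np.outer(betas, betas) + np.diag(resid_var)
```

The sample estimate fits 55 free parameters to the data, while the single-factor estimate fits only 21, trading higher specification error for lower sampling error.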
One other alternative is to consider an implicit factor model in an attempt to mitigate model risk and impose endogenous structure. The advantage of that option is that it involves low specification error (because of its “let-the-data-talk” approach) and low sampling error (because some structure is imposed). Implicit multi-factor forecasts of the asset return covariance matrix can be further improved by noise-dressing techniques and optimal selection of the relevant number of factors (see below). More specifically, we use Principal Component Analysis (PCA) to extract a set of implicit factors. The PCA of a time-series involves studying the correlation matrix of successive shocks. Its purpose is to explain the behaviour of observed variables using a smaller set of unobserved implied variables. Since principal components are chosen solely for their ability to explain risk, a given number of implicit factors always captures a larger part of asset return variance-covariance than the same number of explicit factors. One drawback is that implicit factors do not have a direct economic interpretation (except for the first factor, which is typically highly correlated with the market index). Principal component analysis has been used in the empirical asset pricing literature (see, among others, Litterman and Scheinkman, 1991; Connor and Korajczyk, 1993; or Fedrigo, Marsh and Pfleiderer, 1996). From a mathematical standpoint, the process involves transforming a set of N correlated variables into a set of orthogonal variables, or implicit factors, which reproduces the original information present in the correlation structure. Each implicit factor is defined as a linear combination of original variables. Define H as the following matrix:

H = (htk), t = 1,...,T; k = 1,...,N
We have N variables hi, i=1,...,N, ie, monthly returns for N different hedge fund indices, and T observations of these variables. PCA enables us to decompose htk as follows:

htk = Σi=1,...,N √λi Uki Vti

where:
  • U is the matrix of the N eigenvectors of H’H
  • V is the matrix of the N eigenvectors of HH’.
Note that these N eigenvectors are orthonormal. λi is the eigenvalue (ordered by degree of magnitude) corresponding to the eigenvector Ui. Note that the N factors Vi are a set of orthogonal variables. The main challenge is to describe each variable as a linear function of a reduced number of factors. To that end, one needs to select a number of factors K such that the first K factors capture a large fraction of asset return variance, while the remaining fraction can be regarded as statistical noise:
htk = Σi=1,...,K √λi Uki Vti + εtk

where some structure is imposed by assuming that the residuals εtk are uncorrelated with one another. The percentage of variance explained by the first K factors is given by:

(λ1 + ... + λK) / (λ1 + ... + λN)
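As a rough Python illustration of this step (our sketch, on simulated index returns with one built-in common factor, not data from the text), one can extract the eigenvalues of the correlation matrix and compute the share of variance captured by the first K implicit factors:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 120, 8
H = rng.normal(size=(T, N))
H += 2.0 * rng.normal(size=(T, 1))   # one strong common factor across all series

# Normalise each series to zero mean and unit variance, as in the text
Hn = (H - H.mean(axis=0)) / H.std(axis=0, ddof=1)

# Eigenvalues of the correlation matrix H'H / (T - 1), sorted in
# decreasing order of magnitude
eigvals = np.linalg.eigvalsh(Hn.T @ Hn / (T - 1))[::-1]

# Percentage of variance explained by the first K factors:
# sum of the K largest eigenvalues over the sum of all eigenvalues
K = 1
explained = eigvals[:K].sum() / eigvals.sum()
```

Because the series are normalised, the eigenvalues of the correlation matrix sum to N, so the ratio above is directly the fraction of total variance attributed to the first K implicit factors.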
A sophisticated test by Connor and Korajczyk (1993) finds between four and seven factors for the NYSE and AMEX over 1967-1991, a finding roughly consistent with that of Roll and Ross (1980). Ledoit (1999) uses a five-factor model. In this paper, we select the relevant number of factors by applying some explicit results from the theory of random matrices (see Marchenko and Pastur, 1967). The idea is to compare the properties of an empirical covariance matrix (or, equivalently, a correlation matrix, since asset returns have been normalised to have zero mean and unit variance) to those of a null hypothesis purely random matrix, as one could obtain from a finite time-series of strictly independent assets. It has been shown (for a recent reference, see Johnstone, 2001; for an application to finance, see Laloux et al, 1999) that the asymptotic density of eigenvalues λ of the correlation matrix of strictly independent assets reads:

ρ(λ) = (Q/2π) √((λmax − λ)(λ − λmin)) / λ,  for λmin ≤ λ ≤ λmax

where Q = T/N ≥ 1 and λmax,min = (1 ± √(1/Q))² are the upper and lower bounds of the spectrum (for returns normalised to unit variance).
Theoretically speaking, this result can be exploited to provide formal testing of the assumption that a given factor represents information and not noise. However, the result is an asymptotic one that cannot be taken at face value for a finite sample size. One of the most important features here is the fact that the lower bound λmin of the spectrum is strictly positive (except for T=N), and, therefore, there are no eigenvalues between 0 and λmin. We use a conservative interpretation of this result to design a systematic decision rule: we regard as statistical noise all factors associated with an eigenvalue lower than λmax. In other words, we take K such that λK > λmax and λK+1 < λmax, where λmax is the largest eigenvalue that can be expected from a purely random matrix.

A problem of a different nature comes from the non-stationarity of the data. Numerous empirical studies have highlighted, for example, the fact that the volatilities of asset classes are not constant over time, and this non-stability reduces the robustness of an optimisation in which the risk parameters are set equal to their past values. The dynamic character of the parameters renders the task of estimation more arduous, a challenge that can be addressed through suitably designed statistical models such as GARCH models. Good modelling restores robustness to portfolio optimisation over a long period by relying no longer on the stability of the risk parameters themselves, but on the stability of the models that define the variation in the risk parameters (variance-covariance).

While there are many techniques for better estimation of the variance-covariance matrix of asset returns, a major challenge remains: that of estimating mean returns. It is for this reason that, as mentioned in the introduction, the recommended focus has often been on minimum-risk portfolios, whose derivation does not depend on any estimate of expected returns.
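The decision rule described above can be sketched in Python as follows (our illustration, on simulated data; for returns normalised to unit variance, the upper edge of the noise spectrum is λmax = (1 + √(N/T))²):

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 120, 8
# Simulated monthly index returns with one genuine common factor
H = rng.normal(size=(T, N)) + 2.0 * rng.normal(size=(T, 1))

# Normalise to zero mean, unit variance, and take correlation-matrix eigenvalues
Hn = (H - H.mean(axis=0)) / H.std(axis=0, ddof=1)
eigvals = np.sort(np.linalg.eigvalsh(Hn.T @ Hn / (T - 1)))[::-1]

# Marchenko-Pastur upper edge of the noise spectrum for Q = T/N
Q = T / N
lam_plus = (1.0 + np.sqrt(1.0 / Q)) ** 2

# Conservative rule: keep as factors only the eigenvalues that exceed
# the largest eigenvalue expected from a purely random matrix
K = int(np.sum(eigvals > lam_plus))
```

With one genuine common factor in the simulated data, the first eigenvalue sits well above the noise edge while the rest fall below it, so the rule recovers a small K rather than fitting noise.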
• Noël Amenc is director of the EDHEC Risk and Asset Management Research Centre, and Lionel Martellini is scientific director. This article is based on research included in the EDHEC publication 'The Impact of IFRS and Solvency II on Asset-Liability Management and Asset Management of Insurance Companies'.
