
Spectral Analysis of the Covariance

Motivation

We follow up on the Efficient Frontier Math page, where we showed the relationship between the optimal Markowitz weights and the inverse of the asset returns covariance matrix $\Omega$.

The historical estimation of the covariance $\Omega$ from $T$ return observations produces a matrix of rank at most $\min(T,n)$. For instance, 30 years of monthly returns yield a matrix of rank at most 360, so that in the case of 500 stocks $\Omega$ is not invertible. In this case, there are many zero-variance portfolios with non-zero returns.
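
As a small numerical illustration (a sketch, assuming numpy and synthetic i.i.d. returns; the sizes are the placeholders from the example above):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 360, 500                        # 30 years of monthly returns, 500 stocks
returns = rng.standard_normal((T, n))  # placeholder i.i.d. returns

omega = np.cov(returns, rowvar=False)  # n x n sample covariance
print(np.linalg.matrix_rank(omega))    # at most min(T, n); here 359 after demeaning, far below n

# any direction in the null space is a zero-variance portfolio
d, Q = np.linalg.eigh(omega)
w = Q[:, 0]                            # eigenvector of the smallest eigenvalue
print(w @ omega @ w)                   # ~0 variance (up to numerical noise), generally non-zero return
```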

Even in the case $T>n$, the covariance matrix $\Omega$ has $\frac{1}{2} n(n+1)=O(n^2)$ distinct terms. How likely are we to correctly estimate $O(n^2)$ terms from $T$ observations unless we severely restrict $n$?

Random matrix theory indicates that even in the case where $T>n$, the distribution of eigenvalues near 0 can be predicted from the asymptotics of $n$ and $T$. In turn, we will see that the eigenvectors that are the most subject to noise are the most amplified by the Markowitz solution.
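
A minimal sketch of that prediction, assuming pure i.i.d. unit-variance noise so that the Marchenko-Pastur law applies (sizes are illustrative): even with $T>n$, the sample eigenvalues spread over a whole band around the true value 1, and everything inside the band is indistinguishable from noise.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1000, 400                       # T > n, but q = n/T is not small
q = n / T

returns = rng.standard_normal((T, n))  # pure noise: the true covariance is the identity
eigvals = np.linalg.eigvalsh(np.cov(returns, rowvar=False))

# Marchenko-Pastur support for unit-variance noise: [(1 - sqrt(q))^2, (1 + sqrt(q))^2]
lower, upper = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
print(f"predicted band: [{lower:.2f}, {upper:.2f}]")
print(f"observed range: [{eigvals.min():.2f}, {eigvals.max():.2f}]")
```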

In this section, we make explicit the relationship between eigenvectors and optimal portfolio weights in order to understand at a deeper level why they are unstable when a large number of assets is chosen.

Eigenvalue Decomposition of the Covariance

As the covariance matrix $\Omega$ is positive semi-definite, it can be decomposed in the form: $$\Omega = Q D Q^t$$ where $D$ is diagonal and $Q$ is orthogonal ($Q^t = Q^{-1}$).
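
A minimal numpy sketch of this decomposition (the function name is illustrative): `np.linalg.eigh` returns the eigenvalues, i.e. the diagonal of $D$, and the orthogonal matrix $Q$.

```python
import numpy as np

def spectral_decomposition(omega):
    """Return (Q, D) such that omega = Q @ D @ Q.T with Q orthogonal and D diagonal."""
    d, Q = np.linalg.eigh(omega)                 # eigh assumes a symmetric matrix
    D = np.diag(d)
    assert np.allclose(omega, Q @ D @ Q.T)       # Omega = Q D Q^t
    assert np.allclose(Q.T @ Q, np.eye(len(d)))  # Q^t = Q^{-1}
    return Q, D
```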

Minimum Variance Weights and Eigenvalues

We define the eigenvectors $e_i = [q_{ij}]^t$, where $\Omega e_i = d_i e_i$ and $d_i = D_{ii}$. A portfolio with weights $e_i$ has variance $\sigma(e_i)^2 = d_i$.

A portfolio with weights $\{\alpha_i\}$ in the basis $\{e_i\}$ has variance: $$\sigma\Big(\sum_i \alpha_i e_i\Big)^2 = \sum_i \alpha_i^2 d_i$$
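
Checking this identity numerically (a sketch under the assumptions above; the eigenvectors $e_i$ are the columns of the $Q$ returned by `np.linalg.eigh`):

```python
import numpy as np

rng = np.random.default_rng(2)
omega = np.cov(rng.standard_normal((500, 20)), rowvar=False)  # small invertible example

d, Q = np.linalg.eigh(omega)
alpha = rng.standard_normal(len(d))    # arbitrary coefficients in the eigenvector basis

w = Q @ alpha                          # portfolio sum_i alpha_i e_i back in asset space
assert np.isclose(w @ omega @ w, np.sum(alpha ** 2 * d))  # sigma^2 = sum_i alpha_i^2 d_i
```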

Just as the initial problem is that of minimising $w^t \Omega w$ subject to $\sum_i w_i = 1$, the problem in the eigenvector basis is that of minimising $\sigma(\sum_i \alpha_i e_i)^2$ subject to $$\sum_i \alpha_i \sum_j q_{ij} = 1$$ This can be solved by brute force: with $u$ the vector of ones, the solution reads $\alpha_G = \frac{\Delta^{-1} u}{u^t \Delta^{-1} u}$ in the rescaled coordinates introduced below, where the diagonal operator $D$ is rescaled into the diagonal operator $\Delta$ with entries: $$\Delta_{ii} = \frac{d_i}{(\sum_j q_{ij})^2}$$
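
A sketch of this closed form (reading $\alpha_G$ as coefficients on the rescaled eigenvectors $\epsilon_i$ of the next section, with `u` the vector of ones), checking that the asset-space weights agree with the direct minimum variance solution $\frac{\Omega^{-1}u}{u^t\Omega^{-1}u}$:

```python
import numpy as np

rng = np.random.default_rng(3)
omega = np.cov(rng.standard_normal((500, 20)), rowvar=False)  # invertible example

d, Q = np.linalg.eigh(omega)
s = Q.sum(axis=0)                      # s_i = sum_j q_{ij}, component sum of eigenvector e_i
delta = d / s ** 2                     # rescaled eigenvalues Delta_ii = d_i / s_i^2

u = np.ones(len(d))
alpha_G = (1 / delta) / (u @ (1 / delta))   # Delta^{-1} u / (u^t Delta^{-1} u)

w = Q @ (alpha_G / s)                  # back to asset space through eps_i = e_i / s_i
w_direct = np.linalg.solve(omega, u)
w_direct /= w_direct.sum()             # Omega^{-1} u / (u^t Omega^{-1} u)
assert np.allclose(w, w_direct)
```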

Rescaled Eigenvalues

To solve this more intuitively and see what weight is assigned to each eigenvalue, we can introduce the rescaled eigenvectors $\epsilon_i$ defined by: $$\epsilon_i = \frac{e_i}{\sum_j q_{ij}}$$

Note that $\sum_j q_{ij}^2=1$ as the eigenvectors are $L^2$-normed, but since the components can have either sign, the sum of components $\sum_j q_{ij}$ can be 0 or very near 0 (in which case this eigenvector is probably the optimal weight).
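
A quick illustration of how small these component sums can get in practice (pure-noise returns, illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(4)
omega = np.cov(rng.standard_normal((500, 100)), rowvar=False)

d, Q = np.linalg.eigh(omega)
s = Q.sum(axis=0)                      # component sums sum_j q_{ij} of each eigenvector
print(np.sort(np.abs(s))[:5])          # several are close to 0: the rescaling e_i / s_i blows up there
```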

Since $\sigma(\epsilon_i)^2 = \frac{d_i}{(\sum_j q_{ij})^2} = \Delta_{ii}$, the problem we minimise can be expressed more simply in this basis: $$\min \sum_i \alpha_i^2 \Delta_{ii}$$ subject to $\sum_i \alpha_i = 1$

The solution to this is known to be $$ \alpha_i = \frac{\Delta_{ii}^{-1}}{\sum_k \Delta_{kk}^{-1}} $$ and the optimal portfolio, expressed in the rescaled principal components $\epsilon_i$, will have the largest weight for the components with $\Delta_{ii}$ nearest to 0, i.e. those whose eigenvalue $d_i$ is smallest relative to $(\sum_j q_{ij})^2$.
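
To make the instability concrete, a sketch with pure-noise returns (illustrative sizes): the largest weights land on the components with the smallest rescaled eigenvalues, which are precisely the directions random matrix theory flags as noise.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 300, 250                        # T > n but barely: a noisy, ill-conditioned estimate
omega = np.cov(rng.standard_normal((T, n)), rowvar=False)

d, Q = np.linalg.eigh(omega)
s = Q.sum(axis=0)
delta = d / s ** 2                     # rescaled eigenvalues Delta_ii
alpha = (1 / delta) / np.sum(1 / delta)   # optimal weights in the rescaled basis

order = np.argsort(delta)              # smallest rescaled eigenvalues first
print(alpha[order][:5])                # the noisiest components carry the largest weights
print(alpha[order][-5:])               # the large (signal) components get almost nothing
```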

Next: techniques for robustifying the optimal portfolio