Aymeric Kalife, CEO, iDigital Partners & Associate Professor, Paris Dauphine University
The asset management industry has been facing four main challenges since the 2008 financial crisis, regarding not only alpha generation but also risk mitigation:
These challenges have significantly penalised active management over the past decade, whether based on:
Asset selection, such as sector rotation strategies, now that diversification is dying: cross-correlation of stocks within the S&P 500 has reached 75%, as the index is dominated by a few stocks (five tech names account for 20% of the performance, and for 50% when combined with consumer stocks); or value strategies, which have been systematically undermined by debt-funded corporate buybacks and by central banks.
Derivatives hedging, such as drawdown-hedging VIX futures strategies, which have borne a significant cost of carry since 2014 except in times of heightened volatility and dispersion; such tail-risk derivatives strategies suffer from this persistent carry drag.
Such limitations initially favoured low-fee passive management (e.g. managed volatility funds, CTAs, or alternative risk premia (ARP) strategies) that aims to deliver a portfolio with a stable level of volatility in all market environments, by systematically shifting the allocation away from securities with lower expected risk-adjusted returns towards securities with higher expected risk-adjusted returns. This relies on the ability to forecast near-term risk by taking advantage of the usually observed persistence (clustering) of volatility.
As a result, these passive managed volatility strategies consistently contain volatility within a much tighter range while reducing outsized drawdowns in extreme market conditions, improving risk-adjusted returns by a tangible 20-30% for holding periods up to a quarter, enhancing portfolio skewness and achieving robust tail-risk reduction.
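The targeting mechanism described above can be sketched in a few lines. The decay factor (λ = 0.94), the 10% volatility target and the return paths below are illustrative assumptions, not figures from the text:

```python
# Sketch of a passive managed-volatility allocation rule (assumed parameters:
# 10% annualised volatility target, EWMA decay factor lambda = 0.94).
import math

LAMBDA = 0.94        # EWMA decay factor (assumption)
TARGET_VOL = 0.10    # annualised volatility target (assumption)
TRADING_DAYS = 252

def ewma_variance(returns, lam=LAMBDA):
    """Exponentially weighted variance of daily returns (zero-mean assumption)."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return var

def equity_weight(returns, target=TARGET_VOL):
    """Scale equity exposure so forecast volatility matches the target, capped at 100%."""
    ann_vol = math.sqrt(TRADING_DAYS * ewma_variance(returns))
    return min(1.0, target / ann_vol)

# Calm market: small daily moves -> full (capped) equity allocation.
calm = [0.002] * 60
# Stressed market: 3% daily moves -> sharply reduced allocation.
stressed = [0.03 if i % 2 else -0.03 for i in range(60)]

print(round(equity_weight(calm), 2))      # 1.0 (capped at full allocation)
print(round(equity_weight(stressed), 2))  # ~0.21
```

The cap at 100% reflects an unlevered fund; a levered variant would simply remove the `min`.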
Equity volatility spikes combined with increasingly correlated markets have made most passive management inefficient, especially since 2013 when a new volatility regime pattern emerged.
Yet many passive management strategies have performed poorly, as illustrated by the -40% drawdown of equity alternative risk premia during the Covid-19 crash. They suffer from key empirical weaknesses:
1 They do not meet all investors’ risk appetites and may incur significant transaction costs, due to their dynamic adjustment of equity exposures and their at times high tracking error to a market benchmark.
2 They experience under-risking and over-risking patterns. Even though long-term realised volatility tracks the target volatility (thanks to the autocorrelation of daily returns), short-term volatility may oscillate widely and drift significantly away from the target (implying potential under-risking during uptrends and over-risking during downturns).
This stems from the lack of accuracy of the widely used exponentially weighted moving average (EWMA) volatility estimators (based on i.i.d. returns with a zero-mean assumption), which notably lack a robust inverse relationship with equity drawdowns beyond daily time horizons, and are too sensitive to drawdown outliers and to the choice of volatility model (constant, stochastic, or with jumps).
This can even translate into overly aggressive equity de-allocation ($50bn of equities sold in August 2015, $25bn in June 2016 and $150-200bn in February 2018, with similar episodes in August 2011, June 2012 and March 2020) and slow re-equitisation that can take many weeks, if not months (e.g. single-digit performance in 2019 vs. +29% for the S&P 500).
3 The correlation between bonds and equities may turn positive (e.g. 1995-2002, 2006-2007, 2013, 2015, 2016, 2018, March 2020; since 1885, bonds and equities have moved in opposite directions only 11% of the time, and in tandem 30% of the time).
4 They contribute to feedback loops that exacerbate both selloffs (January and June 2016, February and October-December 2018, March 2020) and rallies: they sell equities when volatility rises and buy equities when volatility falls. It is very common for them to sell within a day of the emergence of elevated volatility, which can amplify market volatility further because so much money is moving out of equities at once.
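The slow re-equitisation noted in point 2 follows directly from the geometric decay of the EWMA estimator. In the minimal sketch below (λ = 0.94, a calm 1% daily volatility and a single -5% outlier return are all illustrative assumptions), one outlier keeps the volatility forecast, and hence the equity allocation, depressed for weeks even though markets calm down immediately:

```python
import math

lam = 0.94           # illustrative EWMA decay factor
calm_r = 0.01        # steady 1% daily return magnitude (assumption)
var = calm_r ** 2    # calm-market variance estimate
var = lam * var + (1 - lam) * (-0.05) ** 2  # a single -5% outlier return

days = 0
while math.sqrt(var) > 1.1 * calm_r:            # until within 10% of the calm level
    var = lam * var + (1 - lam) * calm_r ** 2   # markets revert to calm at once
    days += 1
print(days)  # 32 trading days, i.e. roughly six weeks of depressed allocation
```

The decay rate is governed entirely by λ: the excess variance shrinks by a factor of 0.94 per day regardless of what the market does next.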
As a result, neither active nor passive styles seem to have significantly and sustainably captured velocity changes over the past decade. Going beyond the passive vs. active asset management paradigm through a mix between fundamentals and systematic quantitative investment strategies may be appropriate to help capture velocity changes.
Mixing the two approaches is likely to improve the P&L: fundamental approaches select assets based on KPIs and macro drivers, while systematic ones detect technical patterns and industrialise mitigating actions.
Given their structural weaknesses, capturing velocity changes requires embedding some fundamentals within systematic, algorithmic passive asset management, so that strategies do not overreact to an initial market fall but only act consistently with sound economic concerns. Fundamentals can be introduced in the following ways:
For example, economic growth (persistent low real rates and inflation are indicative of low growth, hence a low cost of capital and low volatility), M&A activity, share buybacks (which bear downward pressure on volatility), or central bank policies (e.g. quantitative easing and expanding balance sheets depress volatility).
This provides anticipative signals about near-term volatility. For instance, rising short-dated implied volatility can indicate a shift from a low- to a high-volatility regime, while a high volatility risk premium can encourage investors to seek volatility short-selling opportunities, thereby lowering volatility.
For example, dealers’ hedging of gamma positions and their increasingly massive use of weekly options (30-50% of all options traded vs. less than 5% in 2011): positive net long equity gamma positions dampen volatility, while net short equity gamma positions tend to exacerbate it.
These mostly drive the asset rebalancing and are volatile and path-dependent. Technically, the P&L is the product of the spread between expected and realised volatility and the net asset vs. liability gamma. A sustainable “break-even” volatility target is therefore one at which the average asset-liability gamma-theta P&L is zero over the time horizon.
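Under that zero-average-P&L condition, the break-even target reduces to the gamma-weighted average of realised variance. A sketch, using the conventional ½ × dollar-gamma × variance-spread form of the gamma-theta P&L, with hypothetical dollar-gamma and realised-variance paths:

```python
import math

# Hypothetical per-period net asset-liability dollar gamma (gamma * S^2)
# and realised variance; all figures are made up for illustration.
dollar_gamma = [1.2e6, 0.8e6, 1.5e6, 1.0e6]
realised_var = [0.02**2, 0.05**2, 0.03**2, 0.01**2]

def breakeven_vol(gammas, variances):
    """Volatility target sigma* such that the cumulative gamma-theta P&L,
    sum_t 0.5 * gamma_t * (sigma_t^2 - sigma*^2), nets to zero: i.e. the
    gamma-weighted average realised variance, square-rooted."""
    return math.sqrt(sum(g * v for g, v in zip(gammas, variances)) / sum(gammas))

sigma_star = breakeven_vol(dollar_gamma, realised_var)
pnl = sum(0.5 * g * (v - sigma_star**2) for g, v in zip(dollar_gamma, realised_var))
print(round(sigma_star, 4))  # ~0.0296: the cumulative P&L nets to zero here
```

Periods where the book carries more gamma weigh more heavily in the target, which is why the break-even level is path-dependent.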
Contrary to conventional wisdom, less volatile stocks empirically tend to outperform over the long term by losing significantly less during drawdowns, whereas volatile stocks must work much harder, first to restore the value lost during periods of decline and then to grow. This historical outperformance of low-risk stocks defies the central paradigm of traditional finance theory, which states that lower risk goes with lower returns, and stems from:
As a result, a sound mix of fundamentals and systematic technicals within the asset selection and allocation process, combining “volatility control” technicals with the selection of high-quality fundamental stocks (healthy and stable profitability, strong free cash flows, low debt and shareholder-friendly practices, above-average dividend payout, low net equity issuance), can produce stronger performance. Since the 2008 crisis, this has proved a more efficient way not only to mitigate significant declines but also to generate significantly higher returns at similar levels of risk.
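A screen along these lines might be sketched as follows; the tickers, figures and thresholds are all made-up illustrations of the quality criteria listed above:

```python
# Illustrative universe: (name, realised vol, debt/equity, FCF yield,
# payout ratio, net equity issuance). All values are fabricated.
stocks = [
    ("AAA", 0.15, 0.4, 0.06, 0.45, -0.01),
    ("BBB", 0.35, 1.8, 0.01, 0.10, 0.04),
    ("CCC", 0.18, 0.6, 0.05, 0.40, 0.00),
    ("DDD", 0.22, 0.3, 0.08, 0.55, -0.02),
]

def quality_low_vol(universe, max_vol=0.25, max_debt=1.0, min_fcf=0.03,
                    min_payout=0.30, max_issuance=0.01):
    """Keep low-volatility names with strong free cash flow, low debt,
    above-average payout and low net equity issuance (thresholds assumed)."""
    return [s[0] for s in universe
            if s[1] <= max_vol and s[2] <= max_debt and s[3] >= min_fcf
            and s[4] >= min_payout and s[5] <= max_issuance]

print(quality_low_vol(stocks))  # ['AAA', 'CCC', 'DDD']: 'BBB' fails every filter
```

In practice each criterion would be ranked cross-sectionally rather than hard-thresholded, but the composition of filters is the point.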
The portfolio may drift from its target asset allocation when velocity changes, producing risk/return characteristics that may be inconsistent with an investor’s goals and preferences (e.g. a 60/40 equity/bond target). Keeping a tight tracking error while minimising transaction costs may then become the primary objective.
Since rebalancing costs are linear in the drift while rebalancing benefits are quadratic, beyond some trigger point the benefits of rebalancing begin to outweigh the costs and the net benefit turns positive. Optimised rebalancing (to the halfway point, or to the target boundary) significantly outperforms periodic rebalancing in fund value, by a factor of more than 2, and by up to a factor of 10 when scaled by the volatility of the fund.
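The linear-cost vs. quadratic-benefit argument can be made concrete in two lines of algebra: if the cost of trading back a drift d is c·d and the penalty of carrying it is k·d², the net benefit k·d² − c·d turns positive only beyond the trigger drift d* = c/k. The parameters below are arbitrary assumptions:

```python
c = 0.002   # proportional transaction cost per unit of drift traded (assumption)
k = 0.10    # curvature of the risk penalty of drifting from target (assumption)

def net_benefit(drift):
    """Quadratic benefit of correcting the drift minus the linear cost."""
    return k * drift**2 - c * drift

trigger = c / k   # drift at which rebalancing starts to pay for itself
print(round(trigger, 4))  # 0.02: rebalance once the allocation drifts 2% past target
print(net_benefit(0.01) < 0, net_benefit(0.05) > 0)  # True True
```

Below the trigger it is cheaper to let the portfolio drift, which is exactly why threshold-based rules beat calendar-based ones.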
Similarly, rule-based hedging strategies help capture velocity changes through an optimised trade-off between low tracking errors vs. low costs.
Rebalancing only at discrete time intervals reduces the total transaction costs, while leading to a hedging error.
Regarding mean-variance P&L, the best trade-off lies in either time-based or move-based rules (i.e. rebalance whenever the change in the asset or in the delta exceeds a bandwidth), depending on whether unit transaction costs are small (systematic weekly rebalancing, combined with daily emergency thresholds based on a variable-bandwidth delta tolerance) or large (gamma-bandwidth delta tolerance, asset-tolerance rebalancing, or fixed-bandwidth delta tolerance).
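A move-based, fixed-bandwidth delta-tolerance rule of the kind described might look as follows; the 0.05 band and the delta path are illustrative assumptions:

```python
BAND = 0.05  # fixed delta tolerance bandwidth (assumption)

def hedge_path(model_deltas, band=BAND):
    """Hold the last traded delta and rebalance only when the model delta
    drifts out of the tolerance band; returns (held deltas, trade count)."""
    held = model_deltas[0]
    trades = 0
    path = [held]
    for d in model_deltas[1:]:
        if abs(d - held) > band:   # move-based trigger: delta change > bandwidth
            held = d               # rebalance back to the model delta
            trades += 1
        path.append(held)
    return path, trades

deltas = [0.50, 0.52, 0.48, 0.58, 0.60, 0.49, 0.47]
path, trades = hedge_path(deltas)
print(trades)  # 2: only the two large moves trigger a trade
```

Widening the band trades fewer times at the price of a larger hedging error, which is precisely the trade-off the text describes.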
Rule-based long/short dynamic allocation strategies help cheapen the cost of carry, by monetising some yield while participating in upward-trending markets, such as:
This reduces the cost of carry due to the implied-to-realised volatility risk premium and the associated market-timing issues, by systematically rolling VIX futures contracts to maintain a constant one-month forward-starting variance. It also partially mitigates the cost of carry due to theta time decay, through opportunistic VIX futures overlays whenever the VIX term structure is upward sloping, consistent with non-distressed equities.
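The constant-maturity roll can be sketched as a pair of weights on the first two futures contracts, shifting daily as the front contract decays; the 30-day spacing between expiries is a simplifying assumption:

```python
def roll_weights(days_to_front_expiry, days_between_expiries=30):
    """Front/second-month contract weights keeping the position's weighted
    time-to-expiry constant at one month (30-day expiry spacing assumed)."""
    w_front = days_to_front_expiry / days_between_expiries
    return w_front, 1.0 - w_front

print(roll_weights(30))  # (1.0, 0.0): all in the front contract just after a roll
print(roll_weights(15))  # (0.5, 0.5): halfway to expiry, split evenly
```

Selling a sliver of the front contract and buying the second each day is what keeps the forward-starting variance exposure constant, at the cost of paying the term-structure slope.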
With respect to the three value propositions above, digital technologies can be used for: