FIN206 Topic 2: Portfolio management theory revisited

Identification of factors

Academic studies using other empirical methods have suggested that the following factors are key influences on asset prices:
• unanticipated changes in inflation
• unanticipated changes in GDP/industrial production
• unanticipated changes in the spread between low-yield and high-yield bonds
• unanticipated changes in the slope of the yield curve (i.e. the difference in yield between 3-year and 10-year bond rates).
Unfortunately, empirical studies using APT have done very little to shed light on which factors are most important in affecting security prices. This is mainly attributable to the mathematical techniques of factor analysis rather than to any problem with the APT model itself. Factor analysis is an extremely complicated, almost esoteric, mathematical algorithm that statistically extracts factors from a set of data. Interpreting these factors is nearly impossible because they are contrived statistical combinations of the returns in the data set. Additionally, after the first factor is determined, the data set is mathematically manipulated so that any remaining factors can only ever have complicated, counter-intuitive interpretations. For these reasons, APT has proven, in practice, to be of very little use to the portfolio management industry.

Non-market risk exposure

Concepts developed from CAPM enable the computation of how much non-market (diversifiable) risk a fund is exposed to. Although this risk could be diversified away by a different selection of stocks and/or different weightings of existing stocks, managers may nevertheless take a view and choose to be exposed to some diversifiable/non-systematic risk. This may occur if active managers believe they are good at identifying undervalued stocks and select their portfolio on this basis. The fund is only rewarded for bearing this non-market risk if the selected stocks are genuinely undervalued and subsequently outperform. Managers are therefore trading off their ability to identify undervalued stocks against their exposure to non-market risk; that is, they take on active risk.

Price-to-earnings ratio

One of the early studies that contradicted the predictions of CAPM was conducted by Basu (1977) and analysed the performance of stocks over the period April 1957 to March 1971. His finding was that stocks with low price-to-earnings (PE) ratios earned significantly higher returns than stocks with high PE ratios. This finding could not be explained by the framework provided by CAPM. Later studies on low PE stocks found that this characteristic was not just confined to small cap stocks. Nor, as had been suggested, was it related to the 'January effect'.

Alpha

The intercept of the regression line is usually called 'alpha'. It is the portion of the return of a security or portfolio that cannot be explained by movements in the market. It can be positive or negative, and if portfolio managers can predict its future value and buy when it is estimated to be positive, they can achieve superior performance. Over the observed two years, News Corp had an alpha of -0.5%. This means its excess return was negative over the period.
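Alpha and beta fall out of a simple linear regression of a security's excess returns on the market's excess returns. A minimal sketch using hypothetical monthly figures (the numbers below are illustrative, not the News Corp data referred to above):

```python
import numpy as np

# Hypothetical monthly excess returns (security and market), for illustration only.
market = np.array([0.012, -0.008, 0.021, 0.005, -0.015, 0.018, 0.002, -0.004])
stock  = np.array([0.010, -0.012, 0.025, 0.004, -0.020, 0.022, 0.001, -0.006])

# Regress stock excess returns on market excess returns:
#   stock = alpha + beta * market + error
# np.polyfit returns coefficients highest degree first: [slope, intercept].
beta, alpha = np.polyfit(market, stock, 1)
print(f"beta  = {beta:.3f}")
print(f"alpha = {alpha:.5f} per month")  # negative alpha here: underperformance vs CAPM
```

The intercept (alpha) is the average return left over once the market's influence (beta times the market return) has been stripped out.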

Ex ante and ex post tracking error

A realized (also known as "ex post") tracking error is calculated using historical returns. A tracking error whose calculations are based on some forecasting model is called an "ex ante" tracking error. Low errors indicate that the performance of the portfolio is close to the performance of the benchmark.
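As a sketch, an ex post tracking error can be computed as the standard deviation of the active return (portfolio return minus benchmark return). The figures below are hypothetical:

```python
import numpy as np

# Hypothetical monthly returns for a portfolio and its benchmark (illustrative only).
portfolio = np.array([0.015, -0.002, 0.011, 0.007, -0.009, 0.013])
benchmark = np.array([0.014, -0.001, 0.010, 0.008, -0.010, 0.012])

# Ex post tracking error: standard deviation of the active (excess) returns.
active = portfolio - benchmark
te_monthly = active.std(ddof=1)         # sample standard deviation
te_annual = te_monthly * np.sqrt(12)    # annualise the monthly figure
print(f"monthly TE: {te_monthly:.4%}, annualised TE: {te_annual:.4%}")
```

A common convention, used here, is to annualise a monthly tracking error by multiplying by the square root of 12.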

Efficient market hypothesis (EMH)

According to the efficient market hypothesis (EMH), market prices perfectly and continuously reflect all the publicly available information concerning a security. Thus the prices of traded securities are efficient, in the sense that there exist no arbitrage opportunities that would allow an investor to earn abnormal returns. The theory assumes that there are a large number of market participants whose sole objective is to maximise profits and that each is independently analysing and valuing securities. It also assumes that any new information is rapidly reflected and incorporated into share price movements, and that this new information arrives randomly (news announcements are independent). There are three major versions of the EMH: weak, semi-strong and strong:
• the weak-form EMH claims that prices on traded assets already reflect all past publicly available information
• the semi-strong-form EMH claims both that prices reflect all publicly available information and that prices instantly change to reflect new public information
• the strong-form EMH additionally claims that prices instantly reflect even hidden or 'insider' information.

Alpha vs beta investors

Both alpha investors (active managers) and beta investors (passive investors) can be very passionate about defending their chosen investment style. Alpha investors contend that many people can and do beat the indices quite regularly. However, many of the studies quoted are based on returns before fees; after fees, the evidence is mixed. That said, such managers are not interested in simply matching the index, and they do not like the idea of losing money. They believe that through active analysis they can identify, and therefore avoid, weaker underperforming stocks and, conversely, allocate capital to those expected to be stronger. Beta investors believe that the broad stock market indices will deliver positive returns over time, so they are comfortable gradually adding to their positions and sitting tight for the long term. They regard negative market returns over short periods as part of the cost of investing and are confident that, by staying fully invested in the total market, they will be ahead after fees in the long term.

Interpreting the CML

By extending the efficient frontier to include the risk-free asset, a higher frontier is achieved (i.e. it is now possible to earn a higher rate of return for a given level of risk, as shown by the CML sitting above the efficient frontier). All points on the CML are superior to the underlying efficient frontier and are achieved by combining the risk-free asset with the market portfolio of risky assets. Interpreting the CML in the graph above:
• portfolios along the blue CML line combine the risk-free asset and the market portfolio in some ratio, with the weights adding to 100%
• portfolios on the CML below the market portfolio hold the risk-free asset, with those close to the vertical axis holding a higher proportion of it (e.g. 70% in the risk-free asset and 30% in M)
• portfolios on the CML above the market portfolio are short the risk-free asset (having borrowed to invest more in the market portfolio), with those further to the right borrowing more (e.g. -20% in the risk-free asset and 120% in M)
• portfolios on the CML at the market portfolio have no exposure to the risk-free asset (i.e. 0% in the risk-free asset and 100% in M)
• portfolios on the CML at the vertical axis have no exposure to the market portfolio (i.e. 100% in the risk-free asset and 0% in M).
It is assumed that most rational investors are risk-averse, with the majority holding somewhere between 0% and 100% of the risk-free asset (i.e. a portfolio on the CML to the left of the market portfolio). Those with more aggressive attitudes to investment risk might hold little cash or negative cash (borrow/gear to invest). In terms of estimating the risk-free rate, no investment is totally risk-free; however, for an Australian investor, Australian Government bonds come close and are typically used as the risk-free proxy.
Although T-bills are often cited as being closest to the ideal risk-free asset because of their short terms to expiry and low interest rate risk, they do carry reinvestment risk. For the risky asset, many investors choose a mutual fund or an exchange-traded fund based on a market index to proxy the market portfolio, which provides diversification in the risky asset without the need for individual security analysis. Note that the market, in this context, is the total of all securities that exist anywhere, and CAPM claims that this universal market set is an efficient set.
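The weightings mentioned above (70/30, -20/120 and so on) can be checked numerically. A minimal sketch, assuming an illustrative risk-free rate of 3%, expected market return of 8% and market volatility of 15% (none of these figures come from the text):

```python
# Illustrative assumptions (not from the text): rf = 3% p.a., E[rm] = 8% p.a.,
# market portfolio volatility sigma_m = 15% p.a.
rf, erm, sigma_m = 0.03, 0.08, 0.15

def cml_point(w_m):
    """Expected return and risk of a CML portfolio with weight w_m in the
    market portfolio M and (1 - w_m) in the risk-free asset."""
    exp_ret = (1 - w_m) * rf + w_m * erm
    risk = w_m * sigma_m        # the risk-free asset contributes no volatility
    return exp_ret, risk

# The weightings mentioned in the bullet points: 100/0, 70/30, 0/100, -20/120.
for w in (0.0, 0.3, 1.0, 1.2):
    r, s = cml_point(w)
    print(f"w_M = {w:>5.0%}: E[r] = {r:.2%}, sigma = {s:.2%}")
```

Both expected return and risk rise linearly with the market weight, which is exactly why the CML is a straight line.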

Correlation coefficient

The correlation coefficient measures the degree to which securities' prices move in tandem with each other and ranges from a minimum value of -1 to a maximum value of +1. A perfect negative correlation of -1 means that the securities move exactly opposite to each other, and a perfect positive correlation of +1 means that the securities move in unison. From the portfolio risk formula it can be seen that, from a portfolio manager's point of view, the most important variable is the correlation coefficient between security returns. Combining two securities with less than perfect correlation results in the risk of the portfolio being less than the weighted average of the two securities' individual risks. Thus, mathematically, the whole concept of risk reduction is delivered through diversification. The risk of a portfolio declines as the correlation between the securities declines. The risk of a portfolio of two securities with perfect correlation is simply a function of the individual stocks' risks and their weightings. As the correlation declines, so does the portfolio risk, with the minimum level of risk being achieved when the two securities have perfect negative correlation. Therefore, in constructing a portfolio, desirable securities are those with a low or negative correlation to other assets. Generally, there are no securities of value that have a perfect positive (+1.00) or negative (-1.00) correlation with other securities. Securities with a negative correlation are relatively rare compared with those with no or negligible correlation (approximately 0) and positive correlations, which are the most common. It is also important to note that, in reality, correlations vary over time. During the global financial crisis (and other periods of 'stressed market conditions') correlations increased significantly in many instances, which surprised portfolio managers.
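The effect of the correlation coefficient on two-security portfolio risk can be illustrated numerically. A minimal sketch using two hypothetical securities, each with 20% volatility, held in equal weights:

```python
import numpy as np

# Two-security portfolio risk for varying correlation (illustrative figures).
# Standard formula: sigma_p^2 = w1^2*s1^2 + w2^2*s2^2 + 2*w1*w2*rho*s1*s2
w1, w2 = 0.5, 0.5       # equal weights
s1, s2 = 0.20, 0.20     # each security has 20% volatility

portfolio_sigma = {}
for rho in (1.0, 0.5, 0.0, -1.0):
    var_p = w1**2 * s1**2 + w2**2 * s2**2 + 2 * w1 * w2 * rho * s1 * s2
    var_p = max(var_p, 0.0)  # guard against tiny negative rounding at rho = -1
    portfolio_sigma[rho] = np.sqrt(var_p)
    print(f"rho = {rho:+.1f}: portfolio sigma = {portfolio_sigma[rho]:.2%}")
```

At rho = +1 the portfolio risk equals the weighted average of the individual risks (20% here); as rho falls the risk drops, and at rho = -1 it can be eliminated entirely.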

Consumption-orientated capital asset pricing model (CCAPM)

Extending the work in asset pricing was a paper that appeared in 1979 by Breeden, 'An intertemporal asset pricing model with stochastic consumption and investment opportunities'. This model takes consumption into consideration. The consumption-orientated CAPM (CCAPM) says that by investing in the capital markets, investors are delaying consumption. By extension, the model says that people will invest when times are good, but when times are poor, people would rather consume. Therefore, based on this 'diminishing marginal utility of consumption', securities that deliver high returns when consumption is low will be bid up by investor demand. This diminishes the expected future return of those securities (because of their inflated prices). CCAPM is much simpler than APT and ICAPM because it has a single factor — the sensitivity of the return of the asset to aggregate consumption — and it has intuitive appeal. However, the evidence does not support the theory: empirical tests have been unable to support its predictions.

Differences between macroeconomic factors and fundamental factors models

First, in terms of sensitivities: the standardised sensitivities (or betas) in fundamental factor models are calculated directly from the security's actual attributes and actual data, and are not estimated. This contrasts with the macroeconomic factor model, in which the beta sensitivities are regression slope estimates. The second difference is in the interpretation of factors — the macroeconomic factors are surprises in the macroeconomic variables. In contrast, the fundamental factors are rates of return associated with each factor and are estimated using multiple regression. Finally, the number of factors differs — the macroeconomic factors are intended to represent systematic risk factors and are usually small in number (i.e. a parsimonious model). Fundamental factors are often large in number, providing a more cumbersome, yet more detailed, model of the risk-return relationship for the asset.

Limitations of MPT

From a theoretical point of view, there are three main shortcomings of Markowitz's 'modern portfolio theory':
• risk — using volatility as the measure of risk may not fully capture investment risk (e.g. most investors are more concerned with downside risk). Furthermore, the assumption that the risk of an asset is constant over time is not likely to be valid
• returns — the assumption that asset returns follow a normal distribution is also not likely to be valid (i.e. extreme and negative return events have a higher probability of occurring than assumed)
• risk-aversion — the assumption that investors are both rational and risk-averse is not likely to be valid in all situations.

Long-term trend reversals

In 1985, DeBondt and Thaler found that stocks which had been the relative losers of the previous period (three to five years) had much higher average returns over the next period (three to five years). It was later shown that these higher returns of the relative losers could not be explained by CAPM — that is, the observation could not be explained by theory. For CAPM to explain this observation, the volatility of the relative losers would have to be much higher than the volatility actually exhibited.

Book-to-market equity

In 1985, another as yet unrelated observation was made: that stocks that had a low market price compared to their book value have significantly higher returns than stocks with a high price relative to their book value. (Book value is simply defined as invested capital plus retained earnings, but can also include intangibles.) This ratio of price-to-book is referred to in the literature in the inverse — book-to-market (BtM). Thus high BtM stocks were found to outperform low BtM stocks. The paper that described this effect was by Rosenberg, Reid and Lanstein published in 1985, 'Persuasive evidence of market inefficiency'. The sample period for this study was quite short — from 1973 to 1984, and the study did not receive as much attention as those mentioned above. It was not until 1991, when this effect was shown to work in the Japanese market, that this variable generated interest.

Fama & French explanation

In 1994, a database of book values for large US industrial firms from 1940 to 1963, free of survivorship bias, was created. This extended the period for which data had previously been available. Using this data, the Fama and French results were generally confirmed. Additional studies of US firms around that time gave support to their findings. Fama and French provided additional evidence in 1998, when they published a paper that studied the BtM effect in several developed countries for the period 1975-95. They also found a reliable BtM effect in emerging markets. Further independent studies by other researchers confirmed the results.
The explanation — is it risk or mispricing? Based on the papers that support the three-factor model, this framework is widely, but not universally, accepted within the academic community. The question hotly debated within academia is why these factors go such a long way towards explaining stock returns. To date, the two primary explanations are 'risk' and 'market mispricing'.
Risk: Fama and French argue the risk explanation. The essence of the argument is that risk and return are related, that small cap stocks are riskier than large cap stocks, and that high BtM (also known as 'value') stocks are riskier than low BtM (also known as 'growth') stocks. They argue that size and value are independent sources of systematic risk. These risk factors behave in the same way as the factors predicted by APT and ICAPM.
Market mispricing: The market mispricing argument is well represented by Lakonishok, Shleifer and Vishny (1994), who contend that investors extrapolate past performance into the future. When stocks do not behave as expected, investors are surprised and prices adjust accordingly. Thus, the argument goes, investor expectations, and hence stock prices, are consistently wrong. So, to increase returns, investors should shun growth stocks, because they will disappoint, and buy value stocks, because they will surprise.
Interestingly, there are investment managers who utilise the results of the research but use different approaches which take into account their philosophical viewpoint.

Firm size

In a paper published by Banz (1981) an apparent discrepancy was found with CAPM. In this study, Banz found that the stock of small cap firms had higher average returns than large cap stocks. Later studies demonstrated that this small cap effect was distinct from the PE effect described above. This paper was a small revolution, because although some investors had been investing in small caps prior to its publication, the majority of US institutional investors held predominantly large cap stocks in their portfolio. The publication of this paper coincided with the formation of investment managers who focused on the small cap part of the market and a new field of investment was born.

Momentum

In a 1990 paper by Jegadeesh, 'Evidence of predictable behavior of security returns', it was found that stocks exhibit short-term momentum; that is, stocks that have done well continue to do well and stocks that have done poorly continue to perform poorly. Jegadeesh found the momentum effect lasted for a relatively short period after observation — one month. A later study was able to extend the momentum effect to three months after observation. The studies found that momentum is strongest for poorly performing companies and that positive momentum is relatively weak. Importantly, momentum could not be explained by CAPM. It should be noted that momentum contradicts earlier studies on stock reversals: in the earlier studies, long-term losers outperform long-term winners, whereas momentum says that short-term winners outperform short-term losers. Acceptance of momentum is not universal. While many portfolio managers use momentum as a stock selection tool, many others cite transaction costs as a reason that momentum cannot be efficiently captured (because it is a high-turnover strategy — potentially 100% per month).

Leverage

In his 1988 paper, Bhandari found that highly leveraged firms, as measured by debt to equity, have higher average returns than firms with low leverage. The study covered US stocks over the period 1948 to 1979. Bhandari found that leverage provided additional information on the performance of stocks even after accounting for size and the CAPM beta.

Semi-strong-form efficiency

In semi-strong-form efficiency, it is implied that share prices adjust to publicly available new information very rapidly and in a random fashion, such that no excess returns can be earned by trading on that information. Semi-strong-form efficiency implies that neither fundamental analysis nor technical analysis techniques will be able to reliably produce excess returns. To test for semi-strong-form efficiency, the adjustment of asset prices to previously unknown news must be of a reasonable size and must be instantaneous.

Strong-form efficiency

In strong-form efficiency, share prices reflect all information, public and private, and no one can earn excess returns. If there are legal barriers to private information becoming public, as with insider trading laws, strong-form efficiency is impossible, except in the case where the laws are universally ignored. To test for strong-form efficiency, a market needs to exist where investors cannot consistently earn excess returns over a long period of time. Importantly, even if some money managers are observed to consistently deliver above-market returns, this does not refute the strong-form efficiency hypothesis — with hundreds of thousands of fund managers worldwide, even a normal distribution of returns (as efficiency predicts) should be expected to produce a number of exceptional fund managers who consistently produce above-average portfolio returns.

Behavioural finance theory

In summary, behavioural finance theory suggests that even in the presence of rational arbitrageurs, irrational investors can distort market prices significantly and for extended periods. Finally, it is worth noting what this behaviourist view implies for the EMH debate. The behavioural finance view is that markets are not efficient, but that this does not necessarily imply that investors can systematically or easily profit from these inefficiencies. EMH argues that since markets are efficient and prices are always right, there is no such thing as a 'free lunch' for investors. Behavioural finance theorists argue that prices are not always right, but that this does not imply the existence of a 'free lunch'; they point out that an absence of 'free lunches' does not, by itself, imply market efficiency. These, then, are the two central claims of behavioural finance: first, that at least some investors act irrationally; and second, that this has significant and sometimes long-lasting effects on market prices.

Statistical factor models

In these models, statistical analysis is applied to a set of historical returns to determine portfolios that explain those returns. Unlike macroeconomic factor models and fundamental factor models, the drivers of returns in statistical factor models are not readily observable and are instead extracted from historical return data.
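One common statistical technique for extracting such factors is principal component analysis (PCA) of the historical return covariance matrix. The sketch below uses randomly generated returns driven by a single hidden common factor, purely for illustration:

```python
import numpy as np

# Sketch of statistical factor extraction via principal component analysis (PCA)
# on a matrix of historical returns. The data is randomly generated for
# illustration; real applications would use actual security returns.
rng = np.random.default_rng(0)
T, N = 120, 10                                   # 120 periods, 10 securities
common = rng.normal(0, 0.04, size=(T, 1))        # one hidden common driver
loadings = rng.uniform(0.5, 1.5, size=(1, N))    # each security's sensitivity
returns = common @ loadings + rng.normal(0, 0.01, size=(T, N))  # plus noise

# Eigendecomposition of the sample covariance matrix yields the statistical
# factors; eigenvalues (sorted descending) show each factor's share of variance.
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)           # eigh returns ascending order
explained = eigvals[::-1] / eigvals.sum()
print(f"variance explained by first statistical factor: {explained[0]:.0%}")
```

The first principal component recovers the hidden common driver, illustrating the text's point: the extracted factors are statistical constructs, with no label telling the analyst what economic force they represent.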

Fundamental factor models

In these models the factors are stated as returns (rather than as return surprises relative to predicted/expected values, as in the macroeconomic models). In fundamental factor models, the sensitivities or factor betas are attributes of the underlying security and are, in most cases, standardised. In addition, the factor sensitivities are usually specified first and the factor returns are then calculated/estimated through regression analysis — the opposite way around to the macroeconomic models. Factors for these models fall into three broad categories:
• company fundamental factors, which relate to the company's internal, idiosyncratic performance (e.g. financial leverage, earnings growth)
• company share market-related factors, which include share market valuation data (e.g. the PE ratio) and other factors related to the share market performance of the stock, such as share price momentum
• macroeconomic factors, which can include sector factors (e.g. yield curve sensitivity).

Weak-form efficiency

In weak-form efficiency, future prices cannot be predicted by analysing price history. This means that excess returns cannot be earned in the long-run by using strategies based on historical share prices, or other historical data. Technical analysis techniques will not be able to consistently produce excess returns, though some forms of fundamental analysis may still provide excess returns. This implies that future price movements are determined entirely by information not yet contained in the price series and therefore prices must follow a random walk. The weak-form EMH does not require that prices remain at or near equilibrium, but only that market participants not be able to systematically profit from market 'inefficiencies'.

Common factors

Investment Management | FIN206_T2_v6 © Kaplan Higher Education

Factor betas
The factor betas (Bi) determine how each stock or asset class reacts to a particular common factor (i.e. its sensitivity). For example, although all assets and stocks may be affected by swings in GDP, the impact will differ among stocks and assets. Share prices of cyclical firms (e.g. capital-intensive industrial firms) will have a larger beta with respect to unanticipated changes in GDP than non-cyclical firms such as consumer staples stocks. Similarly, there will be different interest rate betas (the interest rate factor). Stocks that are highly sensitive to changes in interest rates or the yield curve (e.g. banks and insurance companies) have a much higher interest rate factor beta than, say, an entertainment stock.

Capital allocation

Investors want to earn the highest return possible for the level of risk they are prepared to take. So how does an investor allocate capital to maximise their investment utility, that is, to achieve the risk-return profile that best suits their preferences? The easiest way to examine this is to consider a portfolio consisting of two assets: a risk-free asset that has a low rate of return but no risk, and at the other extreme a risky asset that has a higher expected return for a higher risk. Investment risk is measured by the standard deviation of investment returns. By varying the relative proportions of the two assets, an investor can earn the risk-free return by investing all their capital in the risk-free asset, or potentially earn the risky asset's return by investing entirely in the risky asset. Furthermore, assuming the investor can borrow to invest (gear), the investor can potentially earn even higher returns. This concept, the apportionment of funds between risk-free investments (such as cash) and risky assets (such as stocks), is known as capital allocation (as opposed to asset allocation). The simplest case of capital allocation is the allocation of funds between a risky asset and a risk-free asset. The risk-return profile of this two-asset portfolio is determined by the proportion of the risky asset to the risk-free asset. If the portfolio holds the risky asset in proportion wi, then the proportion of the risk-free asset must be (1 - wi) and the portfolio return (rp) is:
rp = wi × ri + (1 - wi) × rf
where ri is the return of the risky asset and rf is the risk-free return.
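The portfolio return formula above can be sketched directly. The 8% risky return and 3% risk-free return below are illustrative assumptions only, not figures from the text:

```python
# Expected return of a two-asset (risky + risk-free) portfolio:
#   rp = wi * ri + (1 - wi) * rf
def portfolio_return(w_risky, r_risky=0.08, r_free=0.03):
    """Return rp for a weight w_risky in the risky asset; w_risky > 1 implies
    borrowing at the risk-free rate to gear into the risky asset."""
    return w_risky * r_risky + (1 - w_risky) * r_free

for w in (0.0, 0.5, 1.0, 1.5):
    print(f"w = {w:.1f}: rp = {portfolio_return(w):.2%}")
```

Note how a weight above 1 (a geared position, funded by a negative holding of the risk-free asset) lifts the expected return beyond that of the risky asset alone.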

Calculating portfolio risk — variance

It was in calculating the risk of a portfolio of risky securities that Markowitz's techniques made their most obvious and helpful breakthrough. If the distribution of returns of each security could be described by the standard deviation and mean of each distribution, then the same could be said of any portfolio containing those securities. However, calculating the combined portfolio risk is not simply a matter of averaging the risks of the component securities. Some securities will achieve returns in excess of their expected returns while others will achieve returns below their expected return. For example, mining companies will benefit from high commodity prices, but the profits of consumers of commodities, for instance car companies, will be adversely affected. Thus Markowitz was forced to account for the relationship between the risks of different securities (the correlation) as well as the risks of each security in isolation.
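The standard two-security result Markowitz derived (the formula itself is not reproduced in the extract above) can be written as:

```latex
\sigma_p^2 = w_1^2 \sigma_1^2 + w_2^2 \sigma_2^2 + 2\, w_1 w_2\, \rho_{1,2}\, \sigma_1 \sigma_2
```

where w1 and w2 are the portfolio weights, σ1 and σ2 the securities' standard deviations and ρ1,2 their correlation coefficient. It is the final covariance term that makes portfolio risk less than the weighted average of the individual risks whenever ρ1,2 is less than 1.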

Factors influencing a stock's beta

Many factors are relevant in determining a security's relative volatility (beta). Examples of these factors are discussed below.
Riskiness of the company's activities: High-risk mining companies (with exploration risk) have higher betas than 'bread and butter' industrial companies.
Gearing: The more highly geared a company is, the greater its beta. When the economy is growing and share prices are rising, a highly geared company will earn a return on borrowed funds well in excess of interest costs — shareholders benefit and the shares outperform. In recessionary conditions, however, the company's reduced earnings will be largely taken up by interest costs that must still be paid regardless of weaker economic conditions, and the shares will underperform.
Sensitivity of the company's sales to the business cycle: The more sensitive a company is to movements in the business cycle, the greater its beta. A stock like Woolworths has a low beta because its sales are not very sensitive to business conditions: it sells mostly staples or necessities, which are consumed regardless of economic conditions (and consumers are unlikely to buy twice as much if their incomes double). A stock such as Boral, by contrast, is much more responsive to changes in business conditions and therefore has a higher beta: the building industry is much more cyclical than the economy as a whole, and stronger economic conditions tend to be associated with more construction activity.

Portfolio risk

Markowitz saw risk in investment in terms of uncertainty. If the outcome of an investment was certain, the portfolio manager would identify the stock with the highest expected return and place the entire portfolio in that stock. He argued that what made investment more interesting was that the actual return from any investment was uncertain ex ante (i.e. before the event, forward-looking, forecast). Therefore, while the expected return was the best estimate of that return, any analysis of the strategy must consider the likelihood that the estimate was wrong. Statisticians, continually confronted with such issues, have a highly-developed mathematical armoury to cope with stochastic variables. Markowitz borrowed a notion from that armoury — the standard deviation.

Factor models in practice

Much of the current practical application of factor models has focused on anomaly studies of equity market returns, that is, what causes excess returns of one stock over another. These studies have identified that there are several common and persistent factors which help explain differences in performance between groups of stocks. This analysis is used as a foundation for many of the modern fund managers' equities investment processes. While these procedures involve the use of factors and seek to estimate factor models, they do not really constitute factor analysis, but instead are simply a form of multiple regression analysis. Many empirical studies of this type were published before APT was even developed. In practice, the fund managers analyse the underlying investments and apply economic theory to derive a set of fundamental factors that should influence security/equity returns. These factors are then tested to check that their relationships with returns are as expected. Typically this process will suggest macro-financial or economic variables.

Topic learning outcomes

On completing this topic, students should be able to: • explain the portfolio manager's objectives in terms of mean and variance, using the paradigm of modern portfolio theory (MPT) • discuss the features of efficient portfolios and the concept of the efficient frontier as it applies in MPT • explain the fundamental concepts of the capital asset pricing model (CAPM) • demonstrate how security risk can be decomposed in a portfolio context • describe a range of applications for CAPM • explain the fundamental concept of betas • explain the rationale behind APT • define the concept of a factor as it relates to APT.

Portfolio size — number of stocks

One property of an efficient portfolio is that it may contain a relatively large number of stocks. This means that forming an efficient portfolio and maintaining the desired risk level through time can be an expensive process — expensive in the sense that trading incurs brokerage fees and other transaction costs, in addition to the cost in terms of portfolio management time. However, as is evident in Figure 10, substantial risk reduction can be achieved by choosing about 20 to 30 stocks. This finding has been replicated in many academic studies over the years. Thus the choice of portfolio size can be seen as a trade-off between the benefit of having an efficient portfolio and the cost of transacting. It is important to note that this relationship between the standard deviation of a portfolio and the number of stocks in a portfolio is not entirely stable (i.e. it changes through time). If a portfolio about half as risky as the market were desired, then one would choose a number of stocks, say 20 to 30, with betas of around 0.5. Provided there were sufficient stocks to diversify away any non-market risk, the low-risk objective would be achieved with a relatively manageable number of stocks.
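The diminishing benefit of adding stocks can be sketched under simple single-index assumptions (the parameter values below are illustrative, not from the text): market risk persists regardless of portfolio size, while stock-specific risk shrinks roughly in proportion to the number of equally weighted holdings.

```python
import math

def portfolio_std(n, beta=1.0, sigma_market=0.15, sigma_specific=0.30):
    """Approximate risk of an equally weighted n-stock portfolio under a
    single-index model: market risk persists, specific risk shrinks as 1/n.
    (Parameter values are illustrative only.)"""
    market_var = (beta * sigma_market) ** 2
    specific_var = (sigma_specific ** 2) / n
    return math.sqrt(market_var + specific_var)

# Risk falls quickly at first, then flattens out beyond about 20-30 stocks:
for n in (1, 5, 10, 20, 30, 100):
    print(n, round(portfolio_std(n), 4))
```

Under these assumptions most of the achievable risk reduction is captured by roughly 20 to 30 stocks; adding the next 70 stocks removes comparatively little further risk, which is the trade-off against transaction costs described above.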

Further developments

Portfolio management theory is constantly developing. For example, Elton et al. (2001) found a link between equity and fixed income markets. The idea is that if certain factors have explanatory power in the equity market, then the same factors may have explanatory power in the fixed income markets — the concept of a common set of explanatory factors for all markets. They find that size and book-to-market (BtM) have some explanatory power for that part of a bond's return that is due to changes in risk premiums. Perez-Quiros and Timmermann (2000) provide evidence that small cap firms have high average returns because they are more affected by tight credit market conditions. The argument may be summarised as follows: small firms do not have the same access to credit as large firms. When economic conditions are difficult, credit becomes tight, affecting smaller firms more than larger firms. For smaller firms to attract capital they must reward investors for this sensitivity to a credit-related factor. In his seminal work (provided as 'Further resource 3'), Chabbra (2005) looks to provide some practical applications of behavioural portfolio theory to wealth managers. In doing so, he argues for the adoption of segregated goals-based accounts (or buckets) in constructing client portfolios.

Covariance

Portfolio theory provides us with guidance about how to select stocks to diversify away the non-market risk. This selection should be based on how stock returns are expected to move in relation to each other (i.e. covariance). It should be noted that covariance is related to, but not identical to, correlation: correlation is simply covariance scaled by the product of the two standard deviations, which bounds it between −1 and +1. Other things being equal, stocks with a lower covariance are more desirable as they are more diversifying.
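The relationship between the two measures can be sketched as follows (the helper functions are ours, using population covariance; the return series are illustrative):

```python
def mean(xs):
    return sum(xs) / len(xs)

def covariance(xs, ys):
    """Population covariance of two equal-length return series."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def correlation(xs, ys):
    """Correlation is covariance scaled by the product of standard deviations."""
    sx = covariance(xs, xs) ** 0.5
    sy = covariance(ys, ys) ** 0.5
    return covariance(xs, ys) / (sx * sy)

# Two hypothetical monthly return series:
a = [0.02, -0.01, 0.03, 0.01]
b = [0.01, -0.02, 0.00, 0.02]
print(round(covariance(a, b), 6), round(correlation(a, b), 4))
```

A series is always perfectly correlated with itself (correlation of 1), while the covariance of a series with itself is its variance, an unbounded number. This is why correlation is the more interpretable measure, even though both capture the same co-movement.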

Arbitrage pricing theory (APT)

Researchers have long noted that securities which have common characteristics (or factors) perform similarly. For example, for equities it may be the industry they operate in, their exposure to the economic cycle, growth characteristics or the impact of commodity prices. Arbitrage pricing theory (APT) introduces factors other than simply the total market return (total market beta) to explain security returns. The theory involves the development of multi-factor models with the aim of identifying mispriced securities. APT assumes there are a limited number of non-diversifiable factors which drive security prices. This is different from the CAPM. The CAPM assumes there is only a single factor (the market portfolio) and therefore only one beta (to the overall market). APT can potentially take into account more complex relationships between risk and return. In practice, this should allow more accurate and nuanced estimates of expected returns for stocks. Importantly, these factors are not security or stock-specific but are expected to have an impact on the return of all assets or stocks. The stock-specific factors are assumed to be diversified away in a large portfolio.
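The structure of an APT-style expected return can be sketched as below. The factor names, betas and premiums are purely hypothetical: APT itself does not specify which factors to use.

```python
def apt_expected_return(risk_free, factor_betas, factor_premiums):
    """Expected return under a multi-factor (APT-style) model:
    r = rf + sum over factors of (beta_k * risk premium of factor k)."""
    return risk_free + sum(b * p for b, p in zip(factor_betas, factor_premiums))

# Hypothetical factors: inflation surprise, GDP surprise, credit spread.
betas = [0.8, 1.2, 0.5]          # this stock's sensitivity to each factor
premiums = [0.01, 0.03, 0.02]    # annual risk premium per unit of factor beta
print(apt_expected_return(0.04, betas, premiums))  # 0.04 + 0.008 + 0.036 + 0.01 = 0.094
```

Note that with a single factor (the market) and one beta, this collapses to the CAPM formula, which is the sense in which CAPM is a special case of the multi-factor approach.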

Applications of MPT

That said, MPT and its offshoots continue to be frequently seen in the following three areas: • asset allocation decisions • decisions relating to the active risk of a portfolio • option pricing.

CML and Sharpe ratio

The Sharpe ratio is a measure for calculating risk-adjusted return, and this ratio has become the industry standard for such calculations. The ratio was developed by William F. Sharpe, a Nobel Laureate. The Sharpe ratio is defined as the average return earned in excess of the risk-free rate per unit of volatility or total risk. By subtracting the risk-free rate from the average return, the performance gained by taking risks away from the risk-free asset can be isolated. Intuitively the calculation shows that a portfolio engaging in 'zero risk' investment, such as the purchase of Australian government bills (for which the expected return is the risk-free rate), has a Sharpe ratio of exactly zero. Generally, the greater the value of the Sharpe ratio, the more attractive the risk-adjusted return. MPT states that adding assets to a diversified portfolio that have correlations of less than one with each other can decrease portfolio risk without sacrificing return and the diversification achieved will serve to increase the Sharpe ratio of a portfolio.
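The calculation can be sketched with illustrative figures (the function name is ours):

```python
def sharpe_ratio(mean_return, risk_free_rate, std_dev):
    """Sharpe ratio: excess return over the risk-free rate per unit of total risk."""
    return (mean_return - risk_free_rate) / std_dev

# A portfolio returning 9% p.a. with 12% volatility, against a 4% risk-free rate:
print(round(sharpe_ratio(0.09, 0.04, 0.12), 4))  # (0.09 - 0.04) / 0.12 = 0.4167
```

A portfolio earning exactly the risk-free rate has a numerator of zero, giving a Sharpe ratio of zero, as noted above; diversification that lowers the denominator without lowering the numerator raises the ratio.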

MPT assumptions

The assumptions of MPT include: • normal distribution of asset returns — asset returns are distributed around an average return in a 'bell-shape' manner • rationality of investors — investors will look to maximise returns • risk aversion of investors — investors prefer lower returns with known risks over higher returns with unknown risks • homogenous investing — investors have the same information on investments and build the same view on expected returns of those investments • cost-free investing — transaction costs and taxes are not considered as part of MPT.

Why is risk important?

The challenge when considering why risk is important is that there is no good, universally agreed, understanding of risk. What one person may consider a risk, another person may not. Even Markowitz acknowledged the shortcomings of using standard deviation as a proxy for risk. People may also look at risk differently when they are making money than when they are losing it. Some scholars and practitioners argue that investors should not be concerned by outperformance (the actual return exceeding the expected return), but rather by the variability of underperformance. Hence, there are measures that focus purely on downside risk, such as semi-standard deviation and the Sortino ratio. Beyond the relative comfort of investors, why is risk important? The answer is that while risk and return are related, not all risks lead to a return. Therefore, if the investment practitioner can reduce the risk of the portfolio without reducing the expected return of its component assets, the portfolio's risk-adjusted return will improve.

Factor betas

The factor betas (Bi) determine how each stock or asset class reacts to a particular common factor (i.e. its sensitivity). For example, although all assets and stocks may be affected by swings in GDP, the impact will differ among stocks and assets. Share prices of cyclical firms (e.g. capital intensive industrial firms) will have a larger beta in terms of unanticipated changes in GDP than non-cyclical firms such as consumer staple stocks. Similarly, there will be different interest rate betas (interest rate factor). Stocks that are highly sensitive to changes in interest rates or the yield curve (e.g. banks and insurance companies) will have a much higher interest rate factor beta than, say, an entertainment stock.

Issues affecting the efficient frontier

The instability of the minimum variance frontier is a problem because: • the inputs are not exact, and this uncertainty limits the reliability and usefulness of the resulting frontier • the changes can lead to frequent, time-consuming and costly (transaction costs) rebalancing by the portfolio manager • because the analysis is based on expectations of the future, each portfolio manager will in reality have their own estimates of risks and returns, different tax rates (the analysis assumes zero taxes) and their own time horizon. Therefore each individual's efficient frontier will be different; in essence there is not one universal 'market' efficient frontier, but heterogeneous expectations and forecast horizons • the portfolio manager would almost certainly face transaction costs when moving from the current portfolio holdings, but the analysis assumes zero transaction costs. The portfolio manager therefore has to balance the certain costs of trading against the uncertain benefits of moving the portfolio to a more efficient position; in this sense the efficient frontier may be more of a 'range' than a thin line • the portfolio manager may face constraints on how the portfolio can be invested and may not be able to invest in all available securities.

Capital allocation lines

The investment opportunity set that is created by proportioning a portfolio between the risk-free asset and a risky asset graphs as a straight line when plotted on a risk-return graph (see Figure 8 below). The line begins at the y-intercept point of rf and cuts through the point representing the risky asset (i.e. the point that represents the expected return and expected risk of the risky asset). This line is known as a capital allocation line (CAL) and is the graph of all possible combinations of the risk-free asset and the risky asset with the formula for the line as shown earlier. Each risky asset, with its own expected risk and return, will form a separate CAL. This means there are as many CALs as there are assets.
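A CAL can be sketched by varying the weight held in the risky asset (the figures are illustrative; a weight above 1 corresponds to borrowing at the risk-free rate to lever up the risky asset):

```python
def cal_point(weight_risky, risk_free, exp_return_risky, std_risky):
    """Expected return and risk of a portfolio with the given weight in the
    risky asset and the remainder in the risk-free asset (which has zero risk)."""
    exp_return = risk_free + weight_risky * (exp_return_risky - risk_free)
    std_dev = weight_risky * std_risky
    return exp_return, std_dev

# Risk-free rate 4%; risky asset: 10% expected return, 18% volatility.
for w in (0.0, 0.5, 1.0, 1.5):
    print(w, cal_point(w, 0.04, 0.10, 0.18))
```

Both the excess return and the risk scale linearly with the weight, so the points trace a straight line from the risk-free rate through the risky asset; the slope of that line is the risky asset's Sharpe ratio.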

Diversification

The key benefit of diversification is the removal of diversifiable risk from a portfolio. The relevant risk of each security therefore will be its contribution to the risk of the portfolio. When portfolio risk cannot be further lowered by diversifying, the remaining risk is called market risk. Market risk is often referred to as non-diversifiable risk, or systematic risk. Diversifiable risk is likewise sometimes referred to as non-systematic risk. Systematic risk may apply to an entire country, industry or economy. It is impossible to reduce systematic risk for the entire global economy. For example, oil companies (oil industry as a whole) face the systematic risk of disruptions to the global supply of oil caused by a war in the Middle East. An investor may mitigate this risk by investing in both oil companies and companies that have nothing to do with oil. A non-systematic risk is unique to a certain asset or company. An oil company may have poor performance (e.g. due to a senior management mistake) and the investor can mitigate it by buying into other oil companies. The important point here is that if an investor diversifies the portfolio they reduce the impact on the portfolio from the diversifiable risk component of each security's total risk in the portfolio and are left with the non-diversifiable contribution to portfolio risk. Therefore it is postulated that investors should expect no reward by accepting risk that may be diversified away. In other words, investors should not expect to be rewarded for accepting idiosyncratic (or stock-specific) risk in a total portfolio context.

Modern portfolio theory (MPT)

The origins of Harry Markowitz's MPT can be traced to a single doctoral thesis written in 1952. At the time it was described as 'not Economics ... not Business Administration ... [and] not Mathematics', a problem for a young postgraduate seeking recognition for an idea that would eventually lead to him being joint winner of the 1990 Nobel Prize in economics. Essentially, Markowitz's contribution to investment theory was to reformulate the question being asked by portfolio managers in a way that allowed them to view risk in a portfolio context. Until Markowitz, analysts would penalise the attractiveness of an investment by applying a heavier discount to risky securities. That said, they could take no explicit account of the effects of a security's potential diversification benefit on overall portfolio risk. Additionally, Markowitz also gave portfolio managers a tool to help them compute how much to invest in different securities. This section of the topic discusses these contributions and then presents some thoughts on applying MPT in practice. As with many 'scientific discoveries', there was a second almost simultaneous but less heralded discovery by Roy, whose article 'Safety first and the holding of assets' appeared in Econometrica, volume 20 in July 1952.

Standard deviation

The standard deviation of a set of observations describes the spread of those observations around the mean or expected return. Coupled with this is the assumption that the population being observed is normally distributed. Features of a normal distribution that make it useful in statistical analysis include: • it is symmetrical (i.e. as much above the median as below or alternatively, the mean and the median are equal) • the entire distribution can be described by two statistics: the mean and the standard deviation • it is stable under addition (i.e. if the distribution of stock returns individually is normal, then the portfolio comprising those stocks will also have returns that are normally distributed). The standard deviation of returns represents a very restrictive interpretation of the notion of risk in investment. Some argue that it is entirely counter-intuitive, since it also regards unexpectedly high returns (i.e. returns in excess of the investor's expected return) as risk.

CAPM assumptions

There are a number of important assumptions which underlie CAPM. Some of these are similar to those found in MPT and include: • rationality of investors — investors will look to maximise economic utility • risk aversion of investors — investors prefer lower returns with known risks over higher returns with unknown risks • diversified portfolios — investors are broadly diversified across a range of investments • borrowing — investors can lend and borrow unlimited amounts at the risk-free rate • homogenous investing — investors have the same information on investments and build the same view on expected returns of those investments • cost-free investing — transaction costs and taxes are not considered as part of CAPM • liquid and divisible investments — assets are highly divisible into small parcels.

Asset allocation

There are two main reasons why mean/variance analysis continues to be used in asset allocation decision-making: • by definition, it is a decision where the number of inputs is lower than is the case when each security is individually considered. This reduces the complexity and size of the calculation • asset allocation decisions are often (though not always) longer term decisions than those pertaining to individual securities. As such, it is sometimes argued that the actual outcome is more likely to approximate the stochastic estimates of expected return (i.e. the risk).

Example: CAPM as a pricing model

Two securities are expected to be worth $5 in one year's time. Their current prices would differ if they had different risk. Security A has a beta of 0.5, while that of Security B is 1.5. Further, the risk-free rate is 5% while the market risk premium (rm − rf) is 4%. With this information we would expect Security A to be priced to provide a return of 7%, that is: r = rf + β(rm − rf) = 5 + (0.5 × 4) = 7%. This means the current price of A should be $4.67 (i.e. 5/1.07). If it were priced at, for example, $4.00, then it would be providing an expected return of 25% (5/4.00 − 1), more than one would expect for a security with a beta of 0.5. It would be under-priced, so demand pressure would force its price up to $4.67. By a similar process we can establish that Security B should be priced at $4.50 (i.e. 5/1.11), providing an expected return, normal for its beta of 1.5, of 11%: r = 5 + (1.5 × 4) = 11%.
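The example's arithmetic can be verified with a short script (the helper names are ours; rates are in percentages, as in the text):

```python
def capm_return(risk_free, beta, market_premium):
    """CAPM expected return, with all rates expressed as percentages."""
    return risk_free + beta * market_premium

def fair_price(payoff, expected_return_pct):
    """Present value of a payoff one year away, discounted at the CAPM rate."""
    return payoff / (1 + expected_return_pct / 100)

r_a = capm_return(5, 0.5, 4)    # 7.0% for Security A
r_b = capm_return(5, 1.5, 4)    # 11.0% for Security B
price_a = fair_price(5, r_a)    # about $4.67
price_b = fair_price(5, r_b)    # about $4.50
print(round(price_a, 2), round(price_b, 2))
# At $4.00, Security A would offer 5/4 - 1 = 25%, well above its fair 7%.
```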

Additional stocks — impact on overall portfolio characteristics

When recommendations are made to include more stocks in a portfolio or when a rights issue or a new placement of shares is imminent, the portfolio manager can use the concepts outlined above to assess the impact of such a change on the risk profile of the portfolio. The addition of stocks with betas higher than the beta of the portfolio will increase the portfolio's beta. The converse holds for stocks with betas lower than that of the portfolio. Increasing the quantity of an existing stock in a portfolio will move the beta of the portfolio in the direction of that stock's beta. This would occur if rights issues, for example, are taken up.
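Since a portfolio's beta is the value-weighted average of its constituents' betas, the effect of adding a stock can be sketched as follows (the weights and betas are illustrative):

```python
def portfolio_beta(weights, betas):
    """Portfolio beta as the value-weighted average of constituent betas."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * b for w, b in zip(weights, betas))

# Two stocks, equally weighted:
base = portfolio_beta([0.5, 0.5], [0.9, 1.1])              # beta of 1.0
# Add a 20% position in a high-beta stock (beta 1.6), scaling the others down:
tilted = portfolio_beta([0.4, 0.4, 0.2], [0.9, 1.1, 1.6])  # beta rises to 1.12
print(base, tilted)
```

Adding a stock with a beta above the portfolio's raises the portfolio beta; adding one below it lowers the portfolio beta, exactly as described above.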

Practical issues when using MPT

When using the mean-variance framework of MPT two major problems arise: • the difficulty of estimating inputs for the mean-variance optimisation • the instability of the minimum variance frontier due to the optimisation process's sensitivity to its inputs (covered earlier). In calculating the inputs of the MPT framework two issues arise: • the number of estimates needed — for example, if there are 100 assets then 5,150 parameters must be estimated (100 mean returns, 100 variances and 4,950 pairwise correlations) • the extent of estimation errors — the greater the potential for error in the inputs, the less useful and reliable any forecasts become. Given the number of parameters that need to be estimated, analysts have sought ways to reduce the number of inputs required. An easier way to compute the variances and covariances of security returns uses the insight that security returns are likely to be related to each other through their correlation with the overall market (capital asset pricing model) or a limited number of variables or factors (arbitrage pricing theory). These security pricing models are covered below.
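The estimate count can be checked directly (the function name is ours):

```python
def mpt_parameter_count(n_assets):
    """Number of estimates needed for full mean-variance optimisation:
    n expected returns, n variances and n*(n-1)/2 pairwise correlations."""
    return n_assets + n_assets + n_assets * (n_assets - 1) // 2

print(mpt_parameter_count(100))  # 100 + 100 + 4950 = 5150
print(mpt_parameter_count(500))  # grows roughly with the square of n
```

The quadratic growth of the correlation term is what makes full mean-variance estimation impractical for large universes, motivating the factor-model shortcuts discussed below.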

Applying beta to a portfolio

When applying beta values, portfolio managers must first form a view on the likely direction of the overall share market. If they believe that prices/markets are about to rise, they should bias their portfolio weighting towards high beta stocks (i.e. stocks forecast to have high beta). This portfolio should outperform in a rising market (although note that the stock-specific risk may also be high). Alternatively, if portfolio managers believe that share prices are likely to fall but they wish to retain an exposure to shares, they should buy low beta stocks (i.e. stocks forecast to have low beta). While share prices of such stocks will fall in a bear market, they will fall less sharply than the overall market and therefore outperform in relative terms.

Tracking error (active returns)

Investment mandates often impose constraints on the level of active risk that a portfolio manager can take in managing the portfolio of securities. In many cases, active risk is characterised as tracking error. Tracking error is defined as the standard deviation of active returns, where active (or excess) return is defined as the return of the portfolio less the return on the benchmark over the same period. As with calculations of risks in total return space, risks in relative return space have to consider not just individual risk (tracking error in this case) but also the correlation of the active returns. As with the notions of risk described above, it is important to differentiate between ex ante risk and ex post volatility. An estimate of ex ante tracking error should apply at a particular point in time based on the holdings of the portfolio at that time (though there are a variety of ways it could be calculated). As such it can be thought of as a measure of diversification: a portfolio with a high ex ante tracking error will necessarily look very different from a portfolio composed similarly to the (by definition) diversified benchmark. There is, however, no necessary link between ex ante tracking error and ex post tracking error, particularly if the portfolio manager is trading the securities in the underlying portfolio (and thereby causing the ex ante tracking error to change continuously). The ex post tracking error and returns for investment managers can be shown in performance analysis reports (based on actual return and risk outcomes), but it is impossible from this data to infer the ex ante tracking errors for the managers, as actual market conditions or outcomes will more than likely differ from those used to forecast ex ante.
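Ex post tracking error can be sketched from a short series of periodic returns (the figures are illustrative; population standard deviation is used here):

```python
import math

def tracking_error(portfolio_returns, benchmark_returns):
    """Ex post tracking error: the standard deviation of active returns,
    where each active return is the portfolio return minus the benchmark
    return for that period."""
    active = [p - b for p, b in zip(portfolio_returns, benchmark_returns)]
    mean_active = sum(active) / len(active)
    variance = sum((a - mean_active) ** 2 for a in active) / len(active)
    return math.sqrt(variance)

# Four periods of hypothetical portfolio and benchmark returns:
port = [0.021, -0.010, 0.034, 0.008]
bench = [0.018, -0.012, 0.030, 0.011]
print(round(tracking_error(port, bench), 5))
```

A portfolio that exactly replicates its benchmark every period has a tracking error of zero; larger and more variable active positions produce a larger tracking error.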

Challenges to the Fama and French model

• Data mining: Data mining is simply explained by example. If there are enough people looking at the data for a sufficient length of time, someone will find a pattern just by chance. Data mining, then, is finding a pattern that does not really exist. Indeed, with the advent of ever more powerful desktop computers and the availability of masses of digitally stored pricing data, the ability to data mine has risen exponentially in recent years. The argument used against Fama and French was that, as their analysis was based on the previous findings of researchers, the explanatory power of these variables was due to data mining. • Survivorship bias: This statistical phenomenon occurs where only the survivors are added to or remain in the database and the losers are excluded. The trouble with survivorship bias is that, looking forward, we are unable to predict which companies (or investment managers) will be the winners and which will be the losers, so the whole opportunity set is available looking forward. With a survivorship-biased database only the previous winners are shown, which biases the result upwards. The previously mentioned Compustat database is survivorship biased in its selection criteria, which exclude those companies that fail during the review period. As Compustat was one of the databases used in the Fama and French research, this accusation was levelled at the research. • Beta estimation: This argument states that the estimation of beta is affected depending upon whether monthly or annual returns are used. The response to these challenges was to us

