Luo 2017 version b
Dye (2001)
Central premise of the disclosure literature
In this essay, Dye presents a critique of Verrecchia's "Essays on Disclosure". Dye laments the lack of a unifying theory of disclosure in Verrecchia's essay. In his view, voluntary disclosure is a special case of game theory in which managers disclose positive news and withhold negative news. He goes on to critique each of the sections of Verrecchia's essay.
Discretionary-based disclosure
In this section, Dye addresses the "full disclosure" result of Grossman (1981) and Grossman and Hart (1980) that a seller has an incentive to fully disclose his information given the following conditions: (1) buyers know that the seller has information; (2) all buyers interpret the disclosures or nondisclosures the same way; (3) the seller can credibly disclose the information he has received; (4) the seller incurs no cost in making a disclosure. He tries to rationalize why this "unraveling" result fails to hold in practice, as firms continue to withhold a substantial amount of information. In contrast to Verrecchia, who believes it is unlikely that nondisclosure stems from a violation of premise (1), i.e., from managers lacking information, Dye expresses skepticism that buyers always know the manager possesses information. Regarding the second premise that all buyers interpret the disclosure the same way, Dye points out the distinction between sophisticated and unsophisticated investors: sophisticated investors are better able to distinguish whether sellers did not disclose because they received no new information or because they are deliberately holding information back. Thus, when a firm has a high proportion of sophisticated investors, the seller should be more likely to disclose. Regarding (3), Dye puts forth what he views as the most important determinant of the credibility of management disclosures: the Revelation Principle, which states that contracts among parties can be designed to incentivize all parties to disclose their private information truthfully while ensuring all parties receive at least as high an expected utility as under the original contracts. When the revelation principle applies, the demand for earnings management should disappear.
Association-based disclosures
Appropriateness of PELRE models for representing accounting information
In this section, Dye criticizes the section of Verrecchia's essay on "disclosure association studies" (DAS). He believes Verrecchia neglected to mention many defects in the DAS literature and focuses too much on the derivations rather than the economics driving the results. Dye argues that the pure exchange linear rational expectations (PELRE) model predicts that no accounting information will exist, and he questions the appropriateness of the PELRE pricing framework for representing accounting information. He draws a parallel to an insurance market covering people who get broken legs. In this hypothetical economy, some people are destined to get broken legs while others are not. The insurance market for covering people with broken legs will collapse if there exist soothsayers who can predict who will get broken legs. In this case, the presence of soothsayers is a strict social bad, and they would be regulated or driven out of existence.
Replacing the broken-leg economy with a "pure exchange economy with risky assets", the insurance market with a "market for securities", and the soothsayer with a "supplier of accounting information", Dye reasons that the prediction that soothsayers will not exist translates into the prediction that financial reports and other public accounting information will not exist. In that sense, PELRE models are failures and are not appropriate for studying accounting disclosures. Dye believes this defect of the PELRE model cannot be remedied by "tweaking" its assumptions. One such tweak involves expanding the model to allow investors the opportunity to acquire private information prior to the opening of the security market. This helps reduce the amount of duplicative information acquisition activity and diminishes the adverse risk-sharing effects arising from the acquisition of information in advance of the opening of the security market. However, it also creates a new problem: new securities markets might open during the interval between the acquisition of private information and the release of the public information, making it less likely that public disclosures will have much benefit.
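A minimal sketch of the unraveling logic behind the full-disclosure result, under premises (1)-(4) above (notation mine, not Dye's): let the seller's information be a type \tilde{t} drawn from [t_{min}, t_{max}], and let buyers price nondisclosure at

P_{ND} = E[\tilde{t} \mid \tilde{t} \in ND],

where ND is the set of types that withhold. Any type t > P_{ND} gains by disclosing, so ND can contain only types with t \le P_{ND}; iterating the argument shrinks ND to the single worst type t_{min}, so in equilibrium every type discloses. Relaxing any one premise (for example, buyers being unsure the seller is informed at all) breaks this induction and sustains nondisclosure.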
Beyer, Cohen, Lys, and Walther (2010)
Main Summary
In this paper, the authors review the literature on firms' financial reporting environment, focusing on three areas: managers' voluntary disclosure decisions, disclosure regulations, and information intermediaries/analysts. The authors view the information environment as endogenously shaped by information asymmetry between capital providers and entrepreneurs and by agency problems resulting from the separation of ownership and control. When managers do not voluntarily disclose all their private information, disclosure regulations are necessary.
Sources of corporate financial information
The authors establish the relative contributions of different sources of information to the information reflected in stock prices by estimating, for each sample firm, a time-series regression of quarterly abnormal returns on the abnormal returns measured around each type of disclosure event (a reconstructed form appears at the end of this summary), where CAR is the log abnormal return for the calendar quarter, CAR(EA) is the three-day log abnormal return centered on the earnings announcement, CAR(PRE-EA) is the three-day log abnormal return centered on earnings pre-announcements, CAR(MF) is the three-day log abnormal return centered on management forecasts, CAR(AF) is the three-day abnormal return centered on analyst forecasts, and CAR(SEC) is the three-day abnormal return centered on any SEC-filing date. Focusing on the partial R2 of each of these sources of information, they find that for the average firm, 28.37% of the quarterly stock return variance occurs on days when accounting disclosures are made. Of the total disclosure information, management forecasts provide 55%, earnings pre-announcements provide 11%, analyst forecasts provide 22%, earnings announcements provide 8%, and SEC filings (mandatory disclosures) provide only 4%. The interpretation of these results should take into account the interdependence and complementarities of these different sources of information.
Voluntary disclosures
Firms will voluntarily disclose all their private information under the following conditions: (1) disclosures are costless; (2) investors know that firms have private information; (3) all investors interpret the firms' disclosure in the same way, and firms know how investors will interpret that disclosure; (4) managers want to maximize their firms' share prices; (5) firms can credibly disclose their private information; (6) firms cannot commit ex ante to a specific disclosure policy (Grossman and Hart 1980, Grossman 1981, Milgrom 1981, Milgrom and Roberts 1986). Managers have three main motives for providing voluntary disclosures. First, they are motivated by the need to raise external funds; various studies have shown that firms tend to increase their disclosures when raising external funds (debt and equity offerings). Second, managers are influenced by their compensation contracts and corporate control contests; they tend to opportunistically time disclosures to maximize their stock option awards. Third, voluntary disclosures affect stock prices through disclosure costs, liquidity, and the cost of capital. Firms are less likely to provide disclosures when they are in competitive industries with high proprietary costs. Higher-quality disclosure is related to a reduction in information asymmetry, an increase in liquidity, and (potentially) a reduction in the cost of capital.
The institutional setting is also an important determinant of managers' disclosure strategies and can influence disclosures through both litigation costs and governance mechanisms. The threat of litigation could conceivably encourage managers to disclose more in order to preempt litigation, though the relation between litigation risk and disclosure suffers from endogeneity. Firms with better corporate governance mechanisms are associated with more disclosures. There are numerous potential proxies for measuring the extent of voluntary disclosure, ranging from survey rankings and researcher-constructed indices to measures from natural language processing and properties of the firm's reported earnings. An example of a survey ranking is the AIMR ranking, which is subject to endogeneity concerns. AIMR rankings and self-constructed indices also capture both voluntary and mandatory disclosures, which makes results difficult to interpret.
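The regression itself is not reproduced in these notes; based on the variable definitions above, it should take approximately the form

CAR_q = \alpha + \beta_1 CAR(EA)_q + \beta_2 CAR(PRE\text{-}EA)_q + \beta_3 CAR(MF)_q + \beta_4 CAR(AF)_q + \beta_5 CAR(SEC)_q + \epsilon_q,

estimated firm by firm, with the partial R2 of each regressor measuring that disclosure source's contribution to the information reflected in quarterly returns.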
Bradshaw (2011)
Overview of the Literature
In this review paper, Bradshaw presents a survey of the analyst forecast literature and some of the common conceptions/misconceptions regarding analysts' role in the capital markets. Analysts add value through their analysis of firms, as reflected in their valuations of firms and their buy/sell recommendations. While we can observe and assess the end results of their analysis (earnings forecast accuracy, recommendation profitability), we cannot observe the "black box" of how they conduct their analysis. We also cannot directly observe the qualitative factors that affect their information gathering, analysis, and communication processes (e.g., their ability, incentives, integrity, and responsiveness to clients). The best that academics could do in the past was to examine the correlations between inputs (prices, financial statements), outputs, and conditioning variables to understand the analysis process. More recently, studies have begun to penetrate the black box of how analyst ability and incentives affect the decision/analysis process. Interest in analysts has grown from their earnings forecast activity (e.g., their usefulness compared to time-series forecasts) toward other activities analysts perform. The earliest research in this field developed as a byproduct of capital market research on the relation between accounting earnings and stock prices. Analyst earnings forecasts found use as a substitute for the time-series models of "expected" earnings used in tests of market efficiency and the value relevance of earnings. A series of studies during the 1970s and 1980s compared the efficacy of analyst forecasts versus time-series models; in particular, Fried and Givoly (1982) definitively concluded that analyst forecasts are a better proxy for expected earnings than time-series models, and Brown et al. (1987) further clarified the sources of analyst superiority. Building on the research comparing analysts to time-series models, the literature moved on to refinements of the research design to identify factors correlated with incremental earnings forecast accuracy (O'Brien 1988, Stickel 1990, Brown 1991, Clement 1999). Some studies looked at the association of analysts' forecasting activities with stock prices (O'Brien 1988, Philbrick and Ricks 1991). This further evolved into testing whether analysts are efficient with respect to information cues; analysts are found to either underreact or overreact to various information such as past earnings changes, prior stock price changes, and serial correlation in quarterly earnings (De Bondt and Thaler 1990, Lys and Sohn 1990). While the analyst literature traditionally focused on earnings forecasts, in recent years it has moved on to examine other aspects of analysts' behavior, incentives, and conflicts of interest, including analyst coverage decisions, forecast dispersion and its association with prices and accuracy, changes in the regulatory environment, analyst recommendations, growth projections, and target prices. In the second half of the paper, the author presents ten conceptions/misconceptions about the analyst literature. I will briefly summarize each of them below.
Hirshleifer (2001)
Research Question
Hirshleifer starts out by surveying various psychological biases using the framework of heuristic simplification, self-deception, and emotional loss of control. Under heuristic simplification, he covers biases such as narrow framing, anchoring, representativeness, and conservatism. Narrow framing refers to analyzing problems in overly isolated compartments; for example, mental accounting causes individuals to track gains and losses related to decisions in separate mental accounts and to reexamine each account in isolation. Anchoring occurs when individuals are unduly influenced by the statement of the problem. Representativeness occurs when individuals over-rely on similarity to past evidence. Conservatism reflects individuals' delay in updating their beliefs in response to new information. Under the framework of self-deception, the author reviews the well-known phenomena of overconfidence, biased self-attribution, cognitive dissonance, confirmatory bias, and hindsight bias. Similarly, under the time-preference and self-control framework, he discusses biases such as the conformity effect, the fundamental attribution error, the false consensus effect, and the curse of knowledge. In the next section, the author presents evidence of mispricing in five categories: return predictability, the equity premium puzzle, actions taken in response to mispricing, actions taken to create mispricing, and evidence of investment errors. The literature has identified return predictability arising from factor risk measures, price and benchmark value measures (size, book-to-market, etc.), past returns (momentum and reversal), public and private news events, and mood proxies such as changes in daylight saving time. These return predictabilities have dual explanations based on either risk or mispricing and are widely debated. In the last section, the author presents asset pricing theories based on imperfect rationality.
Fama (1998)
Research Question
In this paper, Fama presents a defense of market efficiency and points out flaws in the anomalies literature. Fama argues that while recent studies on long-term returns find long-term underreaction or overreaction to information, underreaction is about as frequent as overreaction. Since the anomalies are split roughly evenly between underreactions and overreactions, Fama suggests that they are consistent with market efficiency: the literature does not identify overreaction or underreaction as the dominant phenomenon, so the random split predicted by market efficiency holds up well. Fama urges researchers to develop an overall perspective on long-term return studies. He points out that the behavioral camp rarely tests an alternative to market efficiency; instead, the alternative is vague market inefficiency, which is not a valid alternative.
Main Argument
The premise of the long-term return literature is that stock prices adjust slowly to information, so one must examine returns over long horizons to get a full view of market inefficiency. However, the evidence against market efficiency from long-term returns is flawed. A test of market efficiency is a joint test with a model of expected returns, so it is subject to "bad model" errors, and bad-model errors in expected returns grow faster with the return horizon, which makes it problematic to draw inferences about long-term returns. Fama suggests that long-term returns should be based on averages or sums of short-run abnormal returns rather than buy-and-hold abnormal returns. Fama acknowledges that market efficiency has its flaws, like all models, but it can only be replaced by a better model of price formation that can be empirically tested. The alternative model must specify the biases in information processing that cause the anomalies and must explain observed prices better than the current model. The only two papers that present behavioral models of overreaction and underreaction do not work well for other anomalies; both models predict post-event return reversal, which is as frequent as post-event return continuation. Fama argues that the literature does not cleanly lean toward one of the two behavioral alternatives to market efficiency, as it is not clear why the market overreacts in some circumstances and underreacts in others; the market efficiency hypothesis can best address this problem by attributing these occurrences to chance. He claims that long-term return anomalies are sensitive to methodology, as they tend to become marginal or disappear when exposed to different models for expected returns or when different statistical approaches are used to measure them. Thus, most long-term return anomalies can be attributed to chance. He also argues that some anomalies, such as the long-term return reversals of De Bondt and Thaler (1985) and the contrarian returns of Lakonishok et al. (1994), can be captured by a multifactor asset pricing model. He suggests that researchers looking for anomalies are opportunistic and show little sensitivity to the alternative-hypothesis problem.
Studies in the behavioral camp
Long-term post-event returns in the overreaction camp: IPOs (Ritter 1991, Loughran and Ritter 1995), seasoned equity offerings (Spiess and Affleck-Graves 1995). Long-term post-event returns in the underreaction camp: post-earnings announcement drift (Ball and Brown 1968, Bernard and Thomas 1990), momentum effects (Jegadeesh and Titman 1993), divesting firms (Cusatis et al. 1993), stock splits (Desai and Jain 1997), open-market share repurchases (Ikenberry et al. 1995), dividend omissions (Michaely et al. 1995).
Verrecchia (1983)
Research Question
In this paper, Verrecchia presents a model to show how the manager of a risky asset decides whether to withhold or disclose information in the presence of traders who have rational expectations about his motivation. In this model, the manager may either disclose or withhold information about the true liquidating value of a risky asset. If he discloses, the value of the asset is reduced by a proprietary cost c. If he does not disclose, traders may make inferences on the basis of the nondisclosure. The model shows that a disclosure equilibrium exists as long as the proprietary cost is positive. The proprietary cost is essential for a disclosure equilibrium to exist because, as long as a proprietary cost exists and information is withheld, traders are unsure whether it was withheld because the information represents bad news or because the disclosure cost is too high. As the proprietary cost increases, so does the range of possible interpretations of the withheld information, which gives managers more discretion and increases the threshold level of disclosure. The greater the disclosure cost associated with the information, the lower the cost of withholding information, and the market discounts the withholding of information less heavily as the threshold level of disclosure rises.
Model
Assume a model in which the manager exercises discretion over the disclosure of information to traders. The manager is in possession of some information signal regarding a risky asset, the existence of which, but not the content, is known to traders. The signal is represented by a random variable Ῡ, which equals the true value of the fixed risky asset plus some noise ϵ. The manager's decision to disclose or withhold the information depends on the effect of the decision on the price of the risky asset. Letting Ω be the set of information available to traders, the market price for the risky asset is given by P(Ω) (a stylized version of the pricing and threshold conditions appears at the end of this entry). The manager's objective is to maximize the price of the risky asset. If the manager discloses his information signal, the liquidation value of the asset is reduced by the proprietary cost c, and the price adjusts accordingly. If the manager decides to withhold the information, traders may make inferences on the basis of this absence: they infer that the realization Ῡ=y is below some point x on the real line, and price the asset accordingly, where x is the threshold level of disclosure such that the manager withholds when y<=x and discloses when y>x. The disclosure equilibrium is the threshold that satisfies the following two conditions:
1. The manager's choice of x maximizes the price of the risky asset for each observation Ῡ=y.
2. When the manager withholds information, traders conjecture that the observation Ῡ=y made by the manager has the property that y<=x.
In other words, the manager maximizes the value of the risky asset based on how traders will interpret his decision to either withhold or disclose the information, and traders infer that the manager withholds whenever y<=x. In the next section, Verrecchia proves that this discretionary disclosure equilibrium exists whenever the proprietary cost is positive. When a proprietary cost exists, traders are not sure whether the withholding of information is due to the realization Ῡ=y being low or whether the realization is high but the proprietary cost is too high to justify disclosure.
This suggests that the threshold level of disclosure is an increasing function of the proprietary cost.
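The pricing equations are omitted in the notes above. A stylized reconstruction of the threshold condition, assuming risk-neutral pricing for simplicity (Verrecchia's model also carries a risk premium): on disclosure of y the price is y - c, while on nondisclosure it is E[\tilde{y} \mid \tilde{y} \le x]. The threshold type is indifferent between the two, so x solves

x - c = E[\tilde{y} \mid \tilde{y} \le x].

Raising c lowers the left-hand side for any given x, so the solution x must rise, which is the comparative static stated above: the disclosure threshold increases in the proprietary cost.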
Ball and Bartov (1996)
Research Question
In this paper, the authors investigate the extent to which the market utilizes information from past earnings announcements by examining the earnings expectations model implied by the market reaction to seasonally differenced quarterly earnings. In contrast to prior studies that have assumed investors use a naïve seasonal random walk model, this paper provides evidence that the market instead acts as if it is aware of the sign of the serial correlation at each of the four lags. In a regression of SUE on its four lagged values, the coefficients follow the (+,+,+,-) pattern consistent with the serial correlation in seasonally differenced quarterly earnings. The authors first replicate the Bernard and Thomas regression of CAR on the four lagged values of SUE, which does not control for current SUE. Then, controlling for current SUE, they find the coefficients on the four lagged SUE variables have the predicted (-,-,-,+) signs, consistent with investors being aware of the serial correlation in quarterly earnings. The authors are interested in the degree to which the price reaction to current earnings can be explained by the serial correlation in seasonally differenced quarterly earnings. While the market does incorporate into prices the serial correlation in seasonally differenced earnings, it underestimates the magnitude of the serial correlation by approximately 50% (45%, 50%, 119%, and 22% at lags 1-4, respectively). This is calculated by comparing the actual coefficients on the lagged SUEs with the coefficients (-b_i*β) implied if investors fully incorporated the partial correlation between SUE and the lagged SUEs.
Contributions
Compared to prior studies that portray investors as using a naïve seasonal random walk model, this paper shows that the market does not act in accordance with the simple seasonal random walk model. The evidence suggests that investors are aware of both the existence and the signs of the serial correlation in seasonally differenced earnings. It is not clear from the regressions in Bernard and Thomas (1990) whether the market is totally unaware of the serial correlation in a seasonal random walk model's prediction errors; this paper fills that gap in the literature by showing exactly how much the market underestimates the serial correlation.
Method
In Table 1, the authors regress SUE on its four lagged values, where SUE is defined as the forecast error from a seasonal random walk model with trend, scaled by its estimation-period standard deviation; the definitions are identical to those in Bernard and Thomas (1990). In Table 2, the authors replicate Bernard and Thomas's regression of CAR on the lagged SUEs, where CAR is defined as the abnormal return (adjusted by the return of NYSE-AMEX firms in the same size decile) in the (-2, 0) window relative to the earnings announcement date. In Table 3, the authors regress CAR on the lagged SUEs, now controlling for current SUE. (The regression forms are sketched below.)
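The regressions are not reproduced in the notes; from the descriptions, Tables 1 and 3 take approximately the forms

SUE_t = a + b_1 SUE_{t-1} + b_2 SUE_{t-2} + b_3 SUE_{t-3} + b_4 SUE_{t-4} + e_t,

with estimated signs (+,+,+,-), and

CAR_t = \gamma_0 + \beta SUE_t + \gamma_1 SUE_{t-1} + \gamma_2 SUE_{t-2} + \gamma_3 SUE_{t-3} + \gamma_4 SUE_{t-4} + u_t,

with signs (-,-,-,+) on the lags. If investors fully incorporated the serial correlation, each \gamma_i would equal -b_i\beta; the gap between the estimated \gamma_i and -b_i\beta yields the roughly 50% underestimation reported above.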
Burgstahler (2014)
Research Question
In this piece, Burgstahler recaps the evidence in Jorgensen, Lee, and Rock (2014) and suggests potential limitations and extensions of the paper. The argument of Jorgensen, Lee, and Rock (JLR) is that if discontinuities in earnings distributions are due to scaling and selection, then there should be discontinuities in the distributions of both reported earnings and restated earnings. The fact that the discontinuity is prominent only in reported earnings suggests that earnings management is the key to understanding the discontinuity. Burgstahler reemphasizes this argument and concludes that the overall evidence is consistent with an earnings management story. Burgstahler points out three potential limitations of JLR and suggests ways to extend the analysis. First, he believes JLR should have considered the possible effects of changes over time: since their discontinuity results use a different time period than prior results, they rely on a database with a different composition. The proportion of low price-per-share observations has greatly increased, in correspondence with the proportion of small-magnitude EPS observations, and the proportion of negative EPS also grew rapidly over the last few decades. Furthermore, in their comparison of restated versus reported EPS, JLR considered all observations, even the 65% of observations where restated and reported EPS are the same. Burgstahler suggests that JLR should focus only on the subset of observations where the two differ, since the remaining observations are irrelevant and provide no information to distinguish between the two explanations.
Kothari, Ramanna, and Skinner (2010)
Research Question
In this review paper, the authors provide a survey and economic analysis of GAAP from a positive research perspective. They posit that the objective of GAAP is to facilitate efficient capital allocation in the economy, and that GAAP accordingly acts more as a system of stewardship and performance evaluation than as a vehicle for providing valuation information to investors. The stewardship and performance evaluation role of GAAP derives from the need to address agency conflicts arising from the separation of ownership and management. Financial statements prepared under GAAP are intended to possess attributes, such as conservatism, verifiable balance-sheet assets, and reliable measures of management performance, that help keep agency conflicts to a minimum. Valuation-relevant information in financial statements is simply a byproduct rather than the primary motivation of GAAP. The authors then discuss the implications of this economic theory of GAAP. They warn against expanding fair value accounting to areas such as intangibles, arguing that this creates too much leeway for management discretion and opportunistic behavior. Nevertheless, they acquiesce to the use of fair value accounting in circumstances where assets can be measured by observable prices in liquid secondary markets. They argue against convergence between the FASB and the IASB, believing that competition between these two bodies would facilitate the most efficient capital allocation. Additionally, regulations should be flexible enough to allow accountants and auditors to determine best practices.
Contributions
This study contributes to the literature on the properties and implications of GAAP. Complementing prior work such as Holthausen and Watts (2001), Ball (2001), and Watts (2003), this study argues that the primary focus of GAAP should be on performance control and stewardship.
Assumptions
The authors start out with the assumption that the objective of GAAP is the efficient allocation of capital, and from there they derive the properties of GAAP that maximize the assumed objective. The authors assume that complete contracting outside of GAAP is too costly to be feasible, but there is no evidence that contracting within the GAAP framework is the best or most cost-effective approach.
Skinner (1994)
Research Question
In this study, Skinner examines whether managers use voluntary disclosure to preempt large negative earnings surprises. Managers' voluntary disclosure decisions are influenced by the asymmetric response of the market to bad news versus good news. Since the cost of negative earnings news is higher than that of other earnings news, due to both larger stock price reactions and higher litigation risk, managers may have an incentive to preempt the announcement of negative earnings surprises. This preemption lowers their litigation risk and minimizes potential reputational costs to managers.
Results
Skinner finds results consistent with the use of disclosures to preempt negative earnings news. In his first test, he examines whether there is a relation between the forecast horizon and the sign of the news: bad news voluntary disclosures are more likely for quarterly earnings than for annual earnings. He also observes that good news is more often disclosed as point or range estimates, while bad news disclosures are more qualitative in nature. Next, Skinner tests whether managers are more likely to preempt negative earnings surprises than other types of earnings news. He uses a seasonal random walk model to proxy for the market's expectation of quarterly earnings and assumes that managers have superior information about earnings. Skinner finds that bad news is more often preempted (25% of bad news announcements) than good news (5.7% of good news announcements) or the overall sample (11%). Finally, he verifies that bad news disclosures generate a conditionally larger stock price reaction than good news disclosures, even though bad news is not preempted often.
Methods
Skinner chose a random selection of 93 firms from the NASDAQ National Market System. By searching the Dow Jones News for these firms, he obtained 374 earnings-related disclosures, of which 109 relate to annual earnings, 171 relate to quarterly earnings, and the rest relate to both. Slightly more of the disclosures convey bad news (251) than good news (191).
Contribution
This study contributes to the literature on voluntary disclosure by showing that it can be used to preempt negative earnings surprises. While Skinner suggests that firms could preempt litigation through earlier and more frequent disclosure, other studies have reached the opposite conclusion. In theory, better disclosure, in either timing or frequency, could reduce the likelihood of litigation by decreasing the possibility of a large drop in stock prices; on the other hand, more disclosure carries the risk that it could be used by plaintiffs as evidence of firms intentionally manipulating the market.
Richardson, Sloan, Soliman, and Tuna (2005)
Research Question
In this study, the authors develop a model that links accrual reliability to earnings persistence. Building on Sloan (1996), the authors show how the reliability of accruals affects earnings persistence. Categorizing accruals into subcomponents with varying degrees of reliability, they show that the less reliable components result in lower earnings persistence. Investors do not appear to fully anticipate the lower earnings persistence, which leads to significant mispricing.
Contributions
This study complements Sloan (1996) by providing more detailed evidence that differences in the persistence of accruals relative to cash flows have different implications for valuation. The incremental contribution is showing that mispricing is directly related to the reliability of the underlying accruals. The study provides a comprehensive definition and categorization of accruals, which broadens our understanding of accruals from the narrow definition of working capital accruals to non-current accruals such as capitalized expenditures on plant and equipment. It also highlights the trade-off between relevance and reliability in accrual accounting by showing that less reliable accruals introduce costs in the form of lower earnings persistence and the associated mispricing.
Method
This study uses a more comprehensive definition of accruals than Sloan (1996), one that takes into account non-current operating assets. Accruals are categorized broadly into the change in non-cash working capital, the change in net non-current operating assets, and the change in net financial assets, and these are then further divided into subcategories (a compact statement appears below). For example, the change in working capital can be decomposed into asset and liability components: the asset component consists of accounts receivable and inventory, which are measured with low reliability, while the liability component consists of accounts payable, which can be measured with a high degree of reliability. The change in net non-current operating assets is calculated as the change in non-current assets, net of long-term non-equity investments, less the change in non-current liabilities; its underlying subcomponents are PP&E and intangibles (low reliability), long-term payables (high reliability), and postretirement benefits (low reliability). The change in net financial assets can be decomposed into short-term investments, long-term investments, and financial liabilities; short-term investments and financial liabilities have high reliability, while long-term investments vary widely in their reliability.
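A compact statement of the decomposition described above, in notation that mirrors the paper's structure:

TACC = \Delta WC + \Delta NCO + \Delta FIN,

where \Delta WC is the change in non-cash working capital, \Delta NCO is the change in net non-current operating assets, and \Delta FIN is the change in net financial assets; each term is further split into asset and liability components whose measurement reliability drives the predicted differences in earnings persistence.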
Lara, Osma, and Penalva (2016)
Research Question
In this study, the authors examine whether conservatism can improve investment efficiency. They predict and find a conditional positive (negative) association between conservatism and investment in settings prone to underinvestment (overinvestment): conservatism helps firms prone to underinvestment invest more and curbs investment at firms prone to overinvestment. These effects are stronger when information asymmetries are higher.
Contributions
This study contributes to the literature documenting positive economic outcomes of conservatism. The evidence suggests that conservatism can produce positive outcomes for investors by helping firms invest more efficiently, and it suggests that changing accounting regulations to mitigate conservatism may have undesirable economic consequences.
Method
The methods are similar to those in Biddle et al. (2009), with conservatism replacing financial reporting quality. The authors test for the relation between conservatism and the level of capital investment conditional on whether the firm is more likely to overinvest or underinvest (the specification is sketched below). Conservatism is proxied by the conditional conservatism measure developed by Khan and Watts (2009), which estimates at the firm level the timeliness of earnings with respect to good news and the incremental timeliness of earnings with respect to bad news. The conservatism proxy is then the annual decile rank of the three-year average of total timeliness of loss recognition.
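The specification is not reproduced in these notes; following Biddle et al. (2009) with conservatism in place of reporting quality, it plausibly takes the form

Investment_{i,t+1} = \beta_0 + \beta_1 CONS_{i,t} + \beta_2 OverFirm_{i,t} + \beta_3 CONS_{i,t} \times OverFirm_{i,t} + Controls + \epsilon_{i,t+1},

where OverFirm proxies for the ex-ante likelihood of overinvestment. The prediction is \beta_1 > 0 (conservatism raises investment for underinvestment-prone firms) and \beta_1 + \beta_3 < 0 (conservatism curbs investment for overinvestment-prone firms).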
Ham, Lang, Seybert, and Wang (2017)
Research Question
In this study, the authors use both experimental and archival approaches to study the effect of CFO narcissism on financial reporting quality. The financial reporting outcomes include earnings management, timely loss recognition, internal control quality, and financial restatements. The authors first validate, in an experimental setting, the link between signature size, narcissism, and misreporting: they find a positive relation between signature size quartiles, NPI-40 narcissism scores, and the magnitude of misreporting. Then, in an archival setting, they study the relation between CFO signature size and financial reporting quality. They find evidence consistent with narcissistic CFOs engaging in both accrual-based and real earnings management. These CFOs are less conservative and less likely to recognize losses in a timely manner, and they are associated with firms with weaker internal controls and more instances of financial restatement.
Method
The authors conduct five sets of tests. First, they test the relation between CFO narcissism and accruals quality, absolute discretionary accruals, and real activities manipulation, regressing each earnings management proxy on signature size and controls (sketched below). Five different measures proxy for earnings management. The first is AbsAccruals, calculated as the absolute discretionary accruals from the modified Jones model. The second is AccrualQuality, the abnormal change in working capital accruals from Dechow and Dichev (2002). The last three are measures of real earnings management developed by Roychowdhury (2006): AbDisExp (abnormal discretionary expenses), AbCFO (abnormal cash flows from operations), and AbProdCost (abnormal production costs). Next, the authors test whether CFO narcissism predicts timely loss recognition as measured by conditional accounting conservatism, employing the Basu (1997) model. Then, the authors test whether CFO narcissism is associated with weaker internal control quality; narcissistic CFOs may prefer weaker internal controls so that their ability to control the firm is unhindered. Finally, they test whether narcissistic CFOs are more likely to misreport, resulting in an increased likelihood of accounting restatements.
Contribution
This study contributes to the financial reporting literature by linking reporting outcomes to a specific CFO personality characteristic. The prior literature has long documented a link between narcissism and signature size (Zweigenhaft and Marlowe 1973, Jorgenson 1977). The authors incorporate this insight from psychology into financial reporting and validate the link between signature size and financial misreporting. The study also enables a better understanding of the role of CFOs versus CEOs: by incorporating CEO characteristics into the analysis, the authors show that the CFO has an influence on financial misreporting incremental to that of the CEO.
Strength
One unique feature of this paper is that it combines experimental and archival approaches to study the link between signature size, narcissism, and misreporting.
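The models are not written out in these notes. The earnings management tests presumably take the form

EMProxy_{i,t} = \beta_0 + \beta_1 SignatureSize_{i,t} + Controls + \epsilon_{i,t},

and the timely-loss-recognition test presumably interacts signature size with the terms of the standard Basu (1997) regression,

E_{i,t}/P_{i,t-1} = \beta_0 + \beta_1 D_{i,t} + \beta_2 R_{i,t} + \beta_3 D_{i,t} \times R_{i,t} + \epsilon_{i,t},

where R is the annual return, D = 1 if R < 0, and \beta_3 captures the incremental timeliness of loss recognition; a negative loading on the signature-size interaction with D \times R would indicate that narcissistic CFOs are less conditionally conservative.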
Gerakos and Gramacy (2013)
Research Question
The goal of this paper is to evaluate and compare the myriad choices that can be made in generating regression-based earnings forecasts. The authors examine both scaled and unscaled net income along the dimensions of variable selection, estimation model, estimation window, and winsorization. Consistent with prior literature, they find that ordinary least squares regressions generally provide relatively accurate forecasts of both scaled and unscaled earnings. Larger predictor sets do not necessarily increase forecast accuracy; a small set of explanatory variables can actually produce more accurate forecasts. At shorter forecasting horizons, simple methods such as the random walk and AR(1) perform just as well as more sophisticated methods using larger predictor sets; at longer horizons, the predictors become more useful. The best methods can diverge depending on the forecasted variable and horizon: for unscaled net income, longer estimation windows and no winsorization lead to more accurate forecasts, whereas for scaled net income, winsorization and shorter estimation windows lead to more accurate forecasts.
Contributions
This study is the most comprehensive examination to date of the various choices made in generating regression-based forecasts. The regression-based approach has many advantages; most importantly, it enables us to generate earnings forecasts for firms that have neither analyst coverage nor a long time-series of earnings realizations. The authors also introduce a new aggregate forecast accuracy metric (separate for scaled and unscaled income) based on mean-squared predictive error. This contrasts with the commonly used absolute error and bias metrics. The authors argue that their metric is a superior choice because absolute error "does not respect the loss criteria used for estimation" and can lead to incoherent rankings of ordinary least-squares based regressions, while bias is "inappropriate when evaluating predictors like the lasso that deliberately leverage a bias variance tradeoff".
Method
The regression-based approach estimates net income Y (scaled or unscaled) using a set of predictors X for a particular year and forecast horizon, with G(X) denoting the estimator. For the predictors, the authors tried prior net income (which captures autoregressive trends), separate terms for positive and negative net income (mean reversion is faster when income is negative), an indicator for whether the firm paid dividends, an indicator for negative net income, accruals, and additional variables from the balance sheet and income statement (current assets, accounts payable, cash and cash equivalents, cost of goods sold, short-term debt, long-term debt, inventory, current liabilities, total liabilities, receivables, sales, shareholder equity, etc.). For the regression function, the authors tried OLS, PCR, PLSR, the lasso, ridge regression, random forests, and CART. In total there are 144 combinations of regression function, predictor set, and winsorization at each forecast horizon. (A minimal sketch of the exercise follows.)
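A minimal sketch of the regression-based forecasting exercise on simulated data, comparing only two of the estimators the authors consider (OLS and the lasso) by mean-squared predictive error; all variable names, parameter values, and the data-generating process are illustrative assumptions, not the paper's:

```python
# Sketch: compare OLS and lasso earnings forecasts by mean-squared
# predictive error (MSPE), in the spirit of Gerakos and Gramacy (2013).
# The simulated panel and coefficient values are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 2000

# Predictors: prior (scaled) net income, a negative-income indicator,
# their interaction (faster mean reversion in losses), accruals, and
# some extra balance-sheet items.
ni_lag = rng.normal(0.05, 0.10, n)
loss = (ni_lag < 0).astype(float)
accruals = rng.normal(0.0, 0.05, n)
other = rng.normal(0.0, 1.0, (n, 10))
X = np.column_stack([ni_lag, loss, ni_lag * loss, accruals, other])

# One-year-ahead net income: persistent, mean-reverting faster after
# losses, with a less persistent accrual component.
y = 0.7 * ni_lag - 0.3 * ni_lag * loss - 0.2 * accruals \
    + rng.normal(0.0, 0.08, n)

# Estimation window = first half; forecasts evaluated on second half.
train, test = slice(0, n // 2), slice(n // 2, n)

for name, model in [("OLS", LinearRegression()),
                    ("lasso", Lasso(alpha=0.01))]:
    model.fit(X[train], y[train])
    mspe = mean_squared_error(y[test], model.predict(X[test]))
    print(f"{name}: MSPE = {mspe:.6f}")
```

Evaluating by MSPE on a hold-out window, rather than by absolute error or bias, mirrors the paper's argument that the accuracy metric should respect the squared-error loss used in estimation.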
Healy (1985)
Research Question
The goal of this paper is to examine the effect of managers' bonus schemes on their accrual and accounting policy decisions. This study provides the first complete characterization of managers' accounting incentive effects. The author finds a strong association between managers' bonus incentives and their management of discretionary accruals. While prior studies predict that managers always choose income-increasing accruals, in reality the choice of discretionary accruals is a function of the firm's earnings before discretionary accruals, the upper and lower bounds of the bonus plan, the limits on discretionary accruals, and the manager's risk preferences. If earnings before discretionary accruals are below the lower bound of the bonus plan, managers are more likely to choose income-decreasing discretionary accruals. It is only when earnings before discretionary accruals exceed the lower bound but not the upper bound of the bonus plan that managers are likely to choose income-increasing discretionary accruals.
Contributions
This study contributes to the literature on the effect of managers' bonus plans on their incentives in setting accounting policies (Watts 1977, Watts and Zimmerman 1978). While the prior literature finds that bonus schemes create incentives for managers to manage earnings, it provides an incomplete picture of managers' bonus incentives; in contrast, this study provides a complete characterization of managers' accounting incentives.
Method
The sample comes from the 250 largest firms listed in the 1980 Fortune Directory; the usable sample consists of 94 firms, after deleting those that do not have bonus contracts or lack the necessary information. The lower bound of the bonus contracts is usually defined as a function of net worth, or of net worth plus long-term liabilities; the upper bound is usually defined as a function of cash dividends. To examine managers' earnings management behavior, two proxies are used: total accruals (discretionary plus non-discretionary) and the effect of voluntary changes in accounting procedures on earnings. Voluntary accounting procedure changes are captured from two sources: the sample of depreciation switches used by Holthausen (1981) and Accounting Trends and Techniques. Firm-years are classified into three portfolios: UPP, LOW, and MID (a stylized bonus formula appears at the end of this summary). The UPP portfolio consists of observations for which the bonus contract's upper limit is binding, that is, cash flow from operations exceeds the upper bound defined in the bonus plan. The LOW portfolio consists of observations for which the bonus plan's lower bound is binding: earnings are less than the lower bound specified in the bonus plan. The MID portfolio consists of observations for which neither the upper nor the lower bound is binding.
Results
The author examines the proportion of observations with negative and positive accruals in each portfolio. There is a higher (and statistically significant) proportion of negative accruals in the LOW portfolio than in the MID portfolio, consistent with managers choosing income-decreasing discretionary accruals when the lower bound of their bonus plan is binding. In addition, negative accruals are more prevalent in the LOW and UPP portfolios than in the MID portfolio.
This implies that in situations where either the lower or the upper limit is binding, managers are likely to select income-decreasing accruals. Dissecting accruals into subcomponents (changes in inventory, changes in receivables, depreciation, changes in payables, and changes in income taxes payable), the author finds that inventory and receivable accruals are most strongly associated with management compensation incentives: there is a higher incidence of negative inventory and receivable accruals in the LOW and UPP portfolios than in the MID portfolio. Lastly, the author examines whether a greater number of voluntary accounting procedure changes occur for firms that modify or adopt bonus plans versus firms with no bonus plan change for each of the years 1968 to 1980; the average number of voluntary accounting procedure changes is greater for firms with bonus plan changes than for firms without them in nine of the twelve years.
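A stylized version of the bonus scheme underlying the portfolio classification (notation mine): with reported earnings E, lower bound L, upper bound U, and payout rate p,

Bonus = p \cdot \max(0, \min(E, U) - L).

Income-increasing accruals pay off only in the MID region (L < E < U); when E is below L or above U, additional current income adds nothing to the bonus, so managers do better "banking" earnings for future periods via income-decreasing accruals, which is the behavior documented above.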
Kothari, Leone, and Wasley (2005)
Research Question
The main objective of this paper is to test whether a performance-matched discretionary accrual approach provides a better measure of discretionary accruals. The approach matches each treatment firm to a control firm on ROA, industry, and year, and adjusts the treatment firm's discretionary accruals by those of the control. The performance-matched discretionary accrual removes the part of accruals arising from performance and other incentives, leaving the part relating to the event of interest. The authors find that the performance-matched discretionary accrual measure is better specified across a wide variety of simulated event conditions; while in some settings it is still subject to misspecification, it is the most reliable measure in terms of Type I error. Matching on current-year ROA outperforms matching on the previous year's ROA.
Contributions
The use of discretionary accruals in tests of market efficiency and earnings management involves jointly testing both the model of discretionary accruals and the earnings management hypothesis. Existing approaches to estimating discretionary accruals are flawed and subject to various misspecifications. This paper presents an approach that measures discretionary accruals more reliably and thereby enhances the reliability of inferences in studies using discretionary accruals. The study partly corroborates the intuition of prior literature such as Barber and Lyon (1996, 1997), which finds that matching on ROA results in better-specified tests of long-run abnormal stock return performance and abnormal operating performance than other matching variables.
Method
The authors conduct simulations for 250 samples of 100 firms each. The samples are drawn without replacement from the full sample or from stratified subsets formed from the highest and lowest quartiles of firms ranked on book-to-market, past sales growth, earnings-to-price, size, and operating cash flow; the rankings are done annually for each variable and pooled across all years to form high-quartile and low-quartile samples. The performance-matched approach is compared with a regression-based approach that includes return on assets as a control, where ROA is based on either the current or the prior year. The matching is done on the basis of two-digit SIC code, year, and ROA in year t (or year t-1), and the performance-matched discretionary accrual is the difference between the discretionary accruals of the treatment firm and its matched firm. Discretionary accruals are estimated using both the Jones model and the modified Jones model. The Jones model is estimated using all firm-year observations in the same two-digit SIC code (the specification is sketched below), with the regression residuals serving as discretionary accruals. The modified Jones model is similar but subtracts the change in accounts receivable from ∆Sales. Unlike Dechow et al. (1995), which estimates the modified Jones model in a time-series setting, the models here are estimated cross-sectionally. ROA is added as a control in the regression-based approach to facilitate comparison with the performance-matched approach.
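The Jones model specification referenced above is standard; estimated cross-sectionally within each two-digit SIC industry and year, it is

TA_{i,t}/A_{i,t-1} = \alpha_1 (1/A_{i,t-1}) + \alpha_2 (\Delta Sales_{i,t}/A_{i,t-1}) + \alpha_3 (PPE_{i,t}/A_{i,t-1}) + \epsilon_{i,t},

with the residual \epsilon_{i,t} serving as discretionary accruals DA_{i,t}; the modified Jones model replaces \Delta Sales with \Delta Sales - \Delta AR. The performance-matched measure is then DA_{i,t} - DA_{match(i),t}, where match(i) is the firm in the same two-digit SIC code and year with the closest ROA.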
Stein (1989)
Research Question
The main research question this study attempts to address is whether managers myopically manage current earnings at the expense of long-term firm value. Stein puts forth the argument that excessive capital market pressure may prompt managers to myopically inflate earnings, which adversely affects the firm's long-term performance. This stands in direct contrast to prior work (Jensen 1986) presenting the positive disciplinary role of capital market pressure. Managers' myopic behavior can be understood as the Nash equilibrium outcome of a non-cooperative game between managers and the market: managers inflate earnings to fool the market even though, in equilibrium, the market is not fooled.
Contributions
Jensen (1986) espouses the view that, since the market is efficient and cannot be systematically fooled by inflated earnings, managers who are concerned about stock prices will not manage earnings. This paper argues otherwise, suggesting that managers who are concerned about stock prices will act myopically to inflate earnings. The model suggests that managers' myopic behavior arises from the invisibility of managerial actions. This is consistent with Laffont and Tirole (1987), who posit that if investment is invisible, high investment and the costs attributable to it will be mistaken for low managerial effort.
Method/Result
In the absence of any myopic behavior by managers, a firm's earnings consist of a permanent component and a transitory component. Managers can inflate current earnings by borrowing an amount b_t from the future, so observed earnings equal true earnings plus b_t, less c(b_{t-1}), the repayment of earnings borrowed in the previous period (a sketch of the equations appears below). A key assumption is that the amount of borrowing is not directly observable and investment is invisible. In a steady-state signal-jamming equilibrium, managers inflate earnings by borrowing a constant amount from the next period's earnings, and the market correctly anticipates the equilibrium borrowing value b̄. The market's expectation of future earnings places weights α_j on lagged earnings, where the lag coefficients α_j sum to one and α_0 measures the effect of current earnings on expectations. The manager chooses the level of b to maximize his utility; since the market assumes the level of borrowing to be b̄ regardless of what the manager actually chooses, in equilibrium the manager's chosen borrowing equals the market's conjecture b̄.
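The equations referenced above are not reproduced in the notes; a stylized reconstruction of the signal-jamming setup (notation approximate): true earnings e_t have a permanent and a transitory component, and reported earnings are

r_t = e_t + b_t - c(b_{t-1}),

where b_t is the amount borrowed from the next period and c(\cdot) is convex with c' > 1, so borrowing is inefficient. The market forms expectations of future earnings as a distributed lag of reported earnings (adjusted for its conjecture b̄) with weights \alpha_j summing to one. The manager's first-order condition trades the price benefit of an extra unit of reported earnings, proportional to \alpha_0, against the discounted marginal cost of repayment, roughly \alpha_0 = \delta \, c'(\bar{b}); since the market's conjecture satisfies the same condition, positive borrowing occurs in equilibrium even though nobody is fooled.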
Kothari and Warner (2006)
Research Question
The paper gives an overview of event study methodology and the econometric issues in designing an event study. The authors argue that while event study methods have improved over the years, severe limitations remain, especially at longer horizons, and inferences using these methods require extreme caution on the part of users.
Event study methods
Event study methodology for the most part still resembles the seminal work of Fama et al. (1969), albeit with improvements in the measurement of abnormal returns (daily and intraday rather than monthly) and in the methods used to calibrate their statistical significance. The typical design examines stock returns for a set of firms experiencing some type of event. The return attributable to the event is the abnormal return: the difference between the observed return and the predicted return. There are a variety of models for expected returns, such as the market model, the constant expected returns model, the capital asset pricing model, the three-factor model, and the three-factor model augmented with momentum. For event studies examining the cross-sectional distribution of returns, the mean abnormal return (AR) is calculated; for time-series aggregation, cumulative abnormal return (CAR) or buy-and-hold methods are used. A test statistic is typically computed for the CAR and compared with its assumed distribution under the null hypothesis that the mean abnormal return is zero (standard notation is sketched below). The test statistic is highly sensitive to the variance of the mean abnormal return, which can be calculated from the daily abnormal returns of a portfolio of event firms over a number of days around the event date (it should be estimated in the period before or after the event rather than during the event). The authors discuss the properties of different methods and shed light on their limitations. If the event period is associated with greater uncertainty and variability in returns, then using pre- or post-event data to calculate the variance may understate the true variance of the event-period abnormal performance, which would overstate the statistical significance of the event-window abnormal performance. The authors suggest using the ratio of the variances during the event and non-event periods to adjust for the degree of bias in the variability. Using simulation and analytical procedures, the authors show the differences in properties between short- and long-horizon tests. Long-horizon methods are relatively more problematic than short-horizon methods and are highly sensitive to specification choices: while short-horizon methods are relatively well specified and have high power when the abnormal performance is concentrated in the event window, long-horizon methods tend to be poorly specified and have low power to detect abnormal performance. Long-horizon methods are also highly sensitive to the expected return model; event study tests are only correct if the assumed expected return model is correct, which requires assumptions such as normally distributed cross-sectional mean abnormal returns or abnormal returns that are independent in the time series and cross-section. Factors that affect the power of event tests include sample size and firm characteristics. Larger samples tend to have higher power, and firm characteristics such as size are related to the variance of security returns: higher variance means noisier returns and lower test power.
As evidence, the authors find that firms in the lowest decile of average standard deviation require only 21 stocks to reject the null, while firms in the highest decile require 60 stocks.
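In standard event-study notation, the objects described above are

AR_{i,t} = R_{i,t} - E[R_{i,t} \mid X_t], \qquad CAR_i(t_1, t_2) = \sum_{t=t_1}^{t_2} AR_{i,t},

with test statistic

t = \overline{CAR}(t_1, t_2) / \hat{\sigma}(\overline{CAR}),

where E[R_{i,t} \mid X_t] comes from the chosen expected-return model and \hat{\sigma} is estimated from portfolio abnormal returns in the non-event period; the understatement of \hat{\sigma} when event-period variance is elevated is exactly the bias the authors flag above.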
Sloan (1996)
Research Question
The paper introduces the accrual anomaly: high- and low-accrual stocks are mispriced, and a strategy of buying firms with low accruals and shorting firms with high accruals generates statistically significant excess returns over time. Different components of earnings (cash flows versus accruals) have different implications for the market: earnings performance attributable to the accrual component exhibits lower persistence than earnings performance attributable to the cash flow component. This difference in persistence is not reflected in stock prices, implying that investors fixate on earnings and are unable to distinguish between its components. As a result, firms with relatively high accruals experience negative future abnormal returns concentrated around future earnings announcements, and vice versa for low-accrual firms. The author finds that high- and low-accrual stocks are mispriced and that the differences in returns between them are not explained by differences in risk as measured by the CAPM or by firm size. The mispricing arises because the market overestimates the persistence of accruals, which tend to reverse, and underestimates the persistence of cash flows; investors are thus positively surprised by the future earnings of low-accrual firms and negatively surprised by the future earnings of high-accrual firms.
Contributions
This study contributes to the literature on valuation and financial statement analysis by showing that the two components of earnings, cash flows and accruals, have different implications for future stock returns: stock prices reflect investors' fixation on earnings and do not reflect differences between the components of earnings. The fact that one can trade on the differing persistence of cash flows versus accruals casts doubt on perfect market efficiency. The findings contradict Bernard and Stober (1989), who conjecture that the information content of the two components of earnings may not be systematically different. The study corroborates the results of Ou and Penman (1989) and Bernard and Thomas (1990) in a different setting: unlike those earlier papers, which use a random walk model to represent investors' naïve earnings expectations, this study applies a less restrictive naïve expectations model that assumes investors fixate on earnings, relying on characteristics of the underlying accounting process rather than a statistically motivated model. (The persistence regression is sketched at the end of this entry.)
Method
The sample consists of 40,679 firm-year observations from 1962 to 1991 with available data in Compustat and CRSP. Earnings are defined as income from continuing operations scaled by average total assets; accruals are calculated following Dechow et al. (1995), and the cash component of earnings is the difference between earnings and accruals:
Accruals = (change in current assets - change in cash/cash equivalents) - (change in current liabilities - change in debt included in current liabilities - change in income taxes payable) - depreciation and amortization expense.
Future stock returns are computed starting four months after the end of the fiscal year and adjusted for expected returns using two alternative procedures. The first uses size-adjusted returns, measuring the buy-and-hold return in excess of the buy-and-hold return on a value-weighted portfolio of firms with similar market values (splitting NYSE and AMEX firms into size deciles). The second involves estimating Jensen's alpha for each accrual portfolio.
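The persistence regression at the heart of the paper, in its standard form (coefficient estimates omitted here):

Earnings_{t+1} = \alpha_0 + \alpha_1 Accruals_t + \alpha_2 CashFlows_t + \nu_{t+1}, \qquad \alpha_1 < \alpha_2.

The mispricing result amounts to the market setting prices as if \alpha_1 = \alpha_2, that is, as if it prices total earnings without distinguishing its components.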
Ball and Brown (1968)
Research Question The paper is a seminal work that paved the way for capital market research in accounting. The authors provide first-time evidence of the information content of earnings announcements using two earnings expectation models: a simple random walk model and a market model. They find evidence that information in earnings announcements is reflected in security returns. In particular, the sign of the abnormal stock return in the month of an earnings announcement is positively correlated with the direction of the earnings surprise. They also find that annual announcements are not a particularly timely source of information, as the information is often preempted by quarterly announcements. In addition, earnings are shown to be more informative than cash flows, consistent with the accrual process making earnings more informative. Contributions This is one of the first attempts at empirical work in accounting. The authors bemoan the existing approach of relying on analytical models to evaluate accounting practices, which ignores how well such theories apply in a practical setting. An empirical approach, on the other hand, relies on real-world outcomes, and its evidence is more relevant to real-world settings. The authors provide initial evidence of the usefulness of accounting income numbers. They also contribute to the efficient market hypothesis, noting that most of the information contained in the annual report is anticipated by the market during the 12 months before the report is released. Method The rate of return contains not only firm-specific information but also market-wide information, so the authors first have to abstract from market effects. Then, to establish the effect of information from the earnings announcement event, the authors segregate the expected and unexpected elements of the income change. The amount of new information conveyed by earnings is calculated as the difference between the actual change in income and its conditional expectation. The unexpected income change is derived as the residual from the regression of the change in a firm's income on the change in the average market income, where market income is based on Fisher's "Combination Investment Performance Index". The sample uses Compustat firms from 1957-1965 with available earnings and price data. Young firms and firms that do not report on December 31st are excluded.
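The expectations model lends itself to a short sketch: regress a firm's annual income changes on changes in market income and treat the residual as unexpected income. The variable names are hypothetical, and this is only an illustration of the described procedure:

```python
import numpy as np
import statsmodels.api as sm

def unexpected_income_change(d_income, d_market_income):
    """Residual from regressing a firm's income change on the market income
    change, per the expectations model described above; inputs are hypothetical
    arrays of annual changes for one firm."""
    X = sm.add_constant(np.asarray(d_market_income, dtype=float))
    fit = sm.OLS(np.asarray(d_income, dtype=float), X).fit()
    return fit.resid  # positive residual = good news, negative = bad news
```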
Bushman, Lerman, and Zhang (2015)
Research Question The paper tests the relationship between accruals and cash flows over the past 50 years and documents a drastic decline in the overall correlation between the two variables. Using both the Dechow (1994) and Dechow and Dichev (2002) models, the authors find a decline in the R2 from 70% in the 1960s to near zero in more recent years. The attenuation appears to be mostly due to non-timing-related accrual recognition, such as one-time and non-operating items, and the increased frequency of firms reporting losses. Other factors, such as temporal changes in the matching between revenue and expense and the growth of intangible-intensive industries, played a limited role in the attenuation. Overall, the results suggest that the conceptual timing role of accrual accounting has lost much of its significance. Results Using the Dechow (1994) model, the authors find that the R2 drops from about 70% in the 1960s to near zero in recent years. The negative coefficient on contemporaneous cash flows also attenuates sharply toward zero over the years: an increase of $1 in operating cash flows was associated with a decrease of approximately 70 cents in accruals in the 1960s, but the effect dropped to under 2 cents by 2014. Using changes, rather than levels, produces similar results (a decrease in R2 from 90% to 10%, and an increase in the coefficient from -0.9 to -0.4). Using the Dechow and Dichev (2002) model, the authors find a similar decline in the adjusted R2 from 70% in the 1960s to less than 10% in recent years. The coefficient on contemporaneous cash flow increases from -0.8 to -0.4, while the coefficients on past and future cash flows remain relatively unchanged (an increase from 0.16 to 0.21 for past cash flows and from 0.04 to 0.20 for future cash flows). The weakening of the accruals-cash flow relationship could be attributed to a number of accounting developments: economic-based cash flow shocks, accrual estimation errors, fair value adjustments, one-time and non-operating items, timely loss recognition, net losses, and earnings management. Of these, increases in one-time and non-operating items and in firms reporting losses are shown to explain 63% of the decline in the accruals-cash flow correlation. The remaining factors, such as economic-based cash flow shocks and timely loss recognition, play only a minor role. Contributions This study contributes to the literature studying temporal changes in accruals and earnings over time (Dichev and Tang 2008). The negative association between contemporaneous accruals and cash flows has been well documented in the literature (Rayburn 1986, McNichols and Wilson 1988, Dechow 1994). This study documents the weakening of that relationship and is consistent with prior evidence of a decline in matching between revenues and expenses (Dichev and Tang 2008). The paper also ties in with studies such as Srivastava (2014) and Ball and Shivakumar (2006) by testing whether high intangibles or asymmetric timely recognition led to the decrease in the relevance of earnings. The results suggest that residual accruals estimated using Dechow (1994) or Dechow and Dichev (2002) may be systematically biased in recent years (residuals systematically underestimated). Method The authors use two methods to test the correlation between accruals and cash flows. The first method uses the Dechow (1994) model, which regresses total accruals on contemporaneous operating cash flows.
The second method uses the Dechow and Dichev (2002) model, which regresses total accruals on past, current, and future operating cash flows. Total accruals are estimated using the balance sheet approach through 1987 and the statement of cash flows approach from 1988 onward. Under the balance sheet approach, accruals are defined as changes in noncash current assets less changes in non-debt current liabilities minus depreciation expense, scaled by average total assets; under the statement of cash flows approach, accruals are defined as earnings scaled by average total assets minus cash flows. Cash flow is calculated as earnings (before extraordinary items, scaled by average total assets) minus accruals. The sample consists of 217,164 firm-year observations from 1964 to 2014 and excludes financial firms and firm-years with significant acquisition activity.
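As a rough illustration of the first method, the Dechow (1994) levels regression can be estimated decade by decade to trace the declining slope and R2; column names here are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

def dechow_1994_by_decade(df: pd.DataFrame) -> pd.DataFrame:
    """Regress total accruals on contemporaneous operating cash flow, decade by
    decade, and collect the slope and adjusted R2 (columns: year, accruals, cfo)."""
    out = []
    for decade, g in df.groupby(df["year"] // 10 * 10):
        fit = smf.ols("accruals ~ cfo", data=g).fit()
        out.append({"decade": decade,
                    "slope": fit.params["cfo"],
                    "adj_r2": fit.rsquared_adj})
    return pd.DataFrame(out)
```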
Ball and Shivakumar (2008)
Research Question The primary objective of this paper is to provide a simple quantitative measure of the amount of new information in earnings announcements. Using the R2 from the regression of total calendar-year returns on each of the quarterly earnings-announcement window returns during the year, the authors determine that quarterly earnings announcements collectively account for approximately 6% to 9% of the total information incorporated in share prices over the year (each quarterly earnings announcement on average accounts for about 1% to 2% of the total annual information). Results For the full sample during the 1972-2006 period, the abnormal R2 averages 5.9% for arithmetic returns and 9.3% for logarithmic returns (Tables 2, 3). Each individual quarterly announcement, on average, is associated with approximately 1.5% to 2.3% of total annual information. Deleting extreme observations lowers the full-period average from 5.9% to 4.8% for arithmetic returns and from 9.3% to 6.0% for logarithmic returns. Firms with December fiscal year-ends are also shown to have lower abnormal R2 than firms with non-December fiscal year-ends. In more recent years, there has been a sharp increase in the proportion of annual information released in the earnings event window. The authors attribute this to a combination of potential factors: increased financial reporting quality subsequent to Sarbanes-Oxley, reduced analyst forecast activity, Regulation FD, product or factor market conditions, increased concurrent management forecasts, and changes in sample composition. Increased concurrent management forecasts are shown to only partially explain the increased information in the earnings announcement window in recent years (Table 6), and using a constant sample from 1996 to 2006, the authors find that the increase is not caused by a change in sample composition (Table 4). For the full sample, the authors also find that each of the four slope coefficients from the regression of annual returns on the quarterly event-window returns exceeds one. This is consistent with the post-earnings-announcement drift phenomenon as well as price momentum. The authors also consider information flow in the periods before and after the earnings announcement. They find that for a period of six weeks before and after the event window, the amount of information released is less than normal; naturally, analysts are reluctant to produce new information just before and after a forthcoming announcement. Finally, the authors also examine the effects of firm size and market-to-book. In contrast to the evidence in Atiase (1985), the authors find that the relative informativeness of earnings announcements is a concave function of size and the market-to-book ratio (Table 7). Contributions This study contributes to the literature on the information content of earnings by quantifying the importance of earnings announcements relative to the total information available in a year. The simple research design could be used to quantify the relative informativeness of other variables such as dividends and forecasts. The authors show that earnings announcements are not a major source of timely new information and suggest that earnings are more useful for debt contracting. While prior studies compare earnings-announcement event-window price behavior with price behavior during non-announcement days or weeks (Beaver 1968), this study places these short windows in a longer-term perspective by looking at total price volatility over the fiscal period.
The authors also find evidence consistent with the post-earnings-announcement drift (Bernard and Thomas 1989): the slopes of the regressions of annual returns on quarterly announcement returns exceed one. Method The relative informativeness of earnings announcements is estimated by regressing firms' calendar-year returns on their four earnings-announcement window returns. The annual return variability associated with the four earnings event windows is simply the R2, which measures the proportion of the total information incorporated in share prices annually that is associated with earnings announcements. This design is desirable because it requires neither an earnings surprise variable nor an earnings expectations model.

R_i(annual) = a_0 + a_1*R_i(window1) + a_2*R_i(window2) + a_3*R_i(window3) + a_4*R_i(window4) + e_i

The sample consists of all firms with exactly four earnings announcements in the calendar year and covers the period from January 1972 to December 2006. The event window is defined as (-1,+1) days around the earnings announcement. Returns are computed as buy-and-hold returns. The adjusted R2 is compared against a benchmark that reflects normal price volatility: the expected R2 under the null hypothesis that daily returns are i.i.d. across time, for which the authors use 4.8% (12/252, i.e., four three-day windows out of 252 trading days). To address serial correlation in daily returns, the authors also use other benchmark R2s, such as those obtained using four randomly chosen three-day windows.
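A minimal sketch of this design, assuming a DataFrame with one row per firm-year and hypothetical columns for the annual return and the four three-day window returns:

```python
import statsmodels.formula.api as smf

def announcement_abnormal_r2(df, benchmark=12 / 252):
    """Adjusted R2 of annual returns on the four three-day event-window returns,
    minus the i.i.d. benchmark discussed above (column names hypothetical)."""
    fit = smf.ols("ret_annual ~ ret_w1 + ret_w2 + ret_w3 + ret_w4", data=df).fit()
    # Positive values indicate announcement windows carry more than their
    # pro-rata share of annual information.
    return fit.rsquared_adj - benchmark
```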
Barber and Lyon (1996)
Research Question This paper evaluates the choices that researchers make in designing event studies to detect abnormal operating performance: the choice of the accounting-based performance measure, the statistical tests employed, and the model of expected operating performance. Results First, the researcher must choose an operating performance measure. There are five commonly used measures: return on the book value of assets, return on the book value of assets adjusted for cash balances, return on sales, return on the market value of assets, and a cash-flow-based measure of return on assets. The authors find that the choice of performance measure is for the most part inconsequential, though they did find cash flow measures to be less powerful than the other performance measures. The appropriate measure depends on the context of the study; for example, cash flow may be a more accurate measure for firms motivated to inflate their reported earnings. The paper also evaluates the specification of the test statistics used to detect abnormal operating performance. The authors document that nonparametric Wilcoxon tests are superior to parametric t-statistics for all of the operating performance measures. Finally, the authors evaluate different models of expected performance and find that it is important to match sample firms with control firms of similar pre-event performance, industry, and size. If the treatment and control firms differ in their pre-event performance, the matching will yield misspecified test statistics. This is due to the tendency for performance measures to mean-revert over time (due to one-time effects of accounting changes or temporary shifts in product demand). A firm performing well before an event is more likely to mean-revert, causing the researcher to conclude that subsequent poor performance is due to the event when in fact the accounting measure of performance is simply reverting to its mean. They also find that test statistics based on the change in a firm's operating performance relative to an industry benchmark are more powerful than those based on the level of a firm's operating performance. Method The authors test nine models of expected performance. The first four match on industry, size, and pre-event performance: 1) same two-digit SIC code, 2) same four-digit SIC code, 3) same two-digit SIC code and similar size, 4) same four-digit SIC code and similar pre-event performance. One problem with matching firms in the same industry is that it ignores the history of the firm relative to the benchmark. This can be addressed by comparing the firm's performance relative to the industry benchmark pre-event and looking at changes in the sample firm's performance relative to changes in the industry benchmark (expected performance equals pre-event-year performance plus the change in the industry's performance). This formulation yields four additional models of expected performance. A ninth model assumes expected performance equals the firm's own past performance. To match on firm size, the authors experimented with several alternative size filters, including firms within 70%-130% of the treatment firm. To match on pre-event performance, the authors require the control firms to be within 90%-110% of the treatment firm. If no match is found, the firm with the closest size or performance measure is selected.
To estimate the explanatory power of the nine models, the authors estimate cross-sectional regressions of ROA on the level of the industry benchmark (for models 1-4) or on the firm's lagged performance and the change in the industry benchmark (for models 5-9).
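The control-firm selection might be sketched as follows, using hypothetical column names and the four-digit-SIC-plus-performance filter (model 4); it falls back to the closest pre-event performance when no candidate lands in the 90%-110% band:

```python
import pandas as pd

def match_control(sample_row: pd.Series, candidates: pd.DataFrame) -> pd.Series:
    """Pick a control firm with the same four-digit SIC code and pre-event ROA
    within 90%-110% of the sample firm (columns sic4, roa_pre are hypothetical)."""
    pool = candidates[candidates["sic4"] == sample_row["sic4"]]
    # sorted() handles negative pre-event ROA, where 0.9x exceeds 1.1x
    lo, hi = sorted([0.9 * sample_row["roa_pre"], 1.1 * sample_row["roa_pre"]])
    in_band = pool[pool["roa_pre"].between(lo, hi)]
    pool = in_band if not in_band.empty else pool  # fallback: closest performer
    return pool.loc[(pool["roa_pre"] - sample_row["roa_pre"]).abs().idxmin()]
```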
Bergstresser and Philippon (2006)
Research Question This paper examines how CEOs' equity-based incentives affect their earnings management behavior. More incentivized CEOs, as measured by the sensitivity of their compensation to share prices, are more likely to lead companies with higher levels of earnings management. Periods of high accruals are also shown to coincide with unusually large option exercises and insider selling by CEOs and other top executives. Contributions This study contributes to the literature on accrual-based earnings management. Building on work such as Beneish and Vargus (2002), which shows the link between accruals, earnings management, and insider trading, this study shows that periods of high accruals are associated with high levels of CEO option exercises and insider share sales. The study builds on the accrual anomaly literature, first documented by Sloan (1996), by tying it to CEOs' incentives to manipulate earnings: CEOs and other executives are likely to exercise unusually large amounts of options and sell shares during years of high accruals. Whereas earlier studies of CEO compensation have suggested aligning CEO incentives through equity and option packages, this study suggests a darker side to the use of equity-based incentives: more incentivized CEOs, those whose overall compensation is more sensitive to share prices, are more likely to manage earnings. Method To compute the accrual measures, the authors follow Dechow et al. (1995) and calculate total accruals as the difference between earnings and cash flows from operations. The nondiscretionary components of accruals are then removed from total accruals using either the Jones model or the modified Jones model. The authors also try an alternative approach of calculating accruals from the statement of cash flows. CEOs' equity-based incentives are measured by the dollar change in the value of their stock and option holdings from a one-percentage-point increase in the company's stock price. An incentive ratio is then calculated as the proportion of the CEO's total compensation that would come from a one-percentage-point increase in the value of the company's equity. The authors also test whether CEO option exercises and selling activities are particularly pronounced during periods of high accruals. They create four measures of selling activity: the gross and net sales of shares by the CEO normalized by the number of shares outstanding, and the gross and net sales of shares by the top five executives normalized by shares outstanding.
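A sketch of the incentive measure as described above; the exact denominator is an assumption here (the one-percent equity payoff plus salary and bonus), and all inputs are hypothetical:

```python
def incentive_ratio(price: float, shares_held: float,
                    option_delta_shares: float,
                    salary: float, bonus: float) -> float:
    """Share of total pay coming from a one-percentage-point stock price rise.
    option_delta_shares is the option holdings converted to share equivalents;
    the salary-plus-bonus denominator is an assumption, not from these notes."""
    one_pct = 0.01 * price * (shares_held + option_delta_shares)
    return one_pct / (one_pct + salary + bonus)
```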
Easley and O'Hara (2004)
Research Question This paper examines the role of information in determining firms' cost of capital, focusing on the distinct roles played by private versus public information. Existing asset pricing models do not include a role for information, even though information is fundamental to market efficiency. The paper argues that private and public information have different implications for informed versus uninformed investors. While informed investors can mitigate the risk of private information by shifting their portfolio weights to incorporate the information, uninformed investors are unable to do so and face greater risk from private information (uninformed investors end up holding too many overvalued and too few undervalued stocks, a risk that cannot be diversified away). Thus, uninformed investors demand a higher return to hold stocks with greater private (less public) information. The authors develop a multi-asset rational expectations equilibrium model that includes private and public information and both informed and uninformed investors. They find that there exists a rational expectations equilibrium in which prices are partially revealing and informed and uninformed investors have differing expectations. The composition of information between public and private matters for the cost of capital, with investors demanding a risk premium to hold stocks with greater private information.
Bertrand and Schoar (2003)
Research Question This paper examines whether managers have an incremental effect on firm policies. In other words, do individual managers play any role in firm behavior and performance? Prior research in this area has generally relied on firm- or industry-level characteristics to explain performance while discounting the role of individual managers. This paper parses out manager fixed effects in firm behavior. The results suggest that managers play an important role in the decisions that affect firm outcomes: adding manager fixed effects to existing models significantly improves their adjusted R2. For example, adding manager fixed effects to models of capital expenditures (controlling for other fixed effects) increases the adjusted R2 by 3% (including just CEO fixed effects) to 5% (including all manager fixed effects), and adding manager fixed effects to models of corporate performance increases the adjusted R2 by more than 5%. Different managers matter for different outcome variables: CFOs matter more for financial decisions (such as interest coverage), while CEOs are more important for dividend policy and organizational strategy. Contribution This study contributes to the literature on the effect of managers on firm policies. Prior studies using standard models that rely on firm- and industry-level characteristics are unable to fully explain cross-sectional differences in firms' capital structure (Titman and Wessels 1988, Smith and Watts 1992). This study shows that the explanatory power of these models can be improved by incorporating manager fixed effects. Method The data come from the Forbes 800 files (1969-1999) and Execucomp (1992-1999). The authors construct a manager-firm matched panel data set that tracks the tenure of managers at various firms over time. They keep only the subset of firms whose managers appear in at least one other firm, and these managers are required to be at each firm for at least three years. These requirements reduce the final sample to about 600 firms and 500 managers. The authors then estimate the explanatory power of manager fixed effects for various firm practices, controlling for firm and year fixed effects, and examine whether the adjusted R2 of the various models improves after including manager fixed effects. Four sets of firm-level outcome variables are examined. The first set relates to the literature on investment-to-cash-flow and investment-to-Q sensitivities (Fazzari, Hubbard, and Petersen 1988; Kaplan and Zingales 1997) and includes capital expenditures, investment-to-Q sensitivity, investment-to-cash-flow sensitivity, and acquisition policy. The second set relates to financial policy and includes financial leverage, interest coverage, cash holdings, and dividend payout. The third set covers the firm's organizational strategies as measured by R&D spending, advertising expense, diversification policy, and cost-cutting policy. The fourth set relates to firm performance.
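The fixed-effects comparison can be sketched with statsmodels formulas, treating firm, year, and manager identifiers as categorical dummies; column names are hypothetical, and with roughly 600 firms and 500 managers the dummy-variable approach remains computationally feasible:

```python
import statsmodels.formula.api as smf

def manager_fe_gain(df, outcome: str = "capex") -> float:
    """Adjusted-R2 gain from adding manager fixed effects, holding firm and
    year fixed effects in both models (columns firm, year, manager hypothetical)."""
    base = smf.ols(f"{outcome} ~ C(firm) + C(year)", data=df).fit()
    full = smf.ols(f"{outcome} ~ C(firm) + C(year) + C(manager)", data=df).fit()
    return full.rsquared_adj - base.rsquared_adj
```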
Kothari (2001)
Research Question This paper gives an overview of capital market research in accounting from the early 1960s through the 1990s, with an emphasis on the research questions, theories, and methodologies. Complementing previous review papers such as Lev and Ohlson (1982) and Bernard (1989), this paper focuses on several notable topics: earnings response coefficients and the properties of analysts' forecasts, evaluation of alternative accounting performance measures, fundamental analysis and valuation, tests of capital market efficiency, and the value relevance of disclosures. Kothari first discusses the demand and supply for capital market research. Next, he gives an overview of the development of accounting research in the early days of Ball and Brown (1968) and Beaver (1968) and how it evolved over time. In the final section, Kothari covers work from the more recent decades. According to Kothari, there are at least four sources of demand for capital market research in accounting: fundamental analysis and valuation, tests of market efficiency, the role of accounting in contracts and in the political process, and disclosure regulation. Research in fundamental analysis includes valuation models, empirical applications of these models, and the use of fundamental analysis to forecast earnings and future stock returns. Tests of market efficiency typically include short- and long-horizon event studies, as well as cross-sectional tests of return predictability (the anomalies literature). Positive accounting theory facilitates research on the role of accounting in contracts and in the political process; most of these studies are stock-price-based tests of positive accounting theory. Research on disclosure regulation is concerned with the effects of regulation on the capital market. Kothari discusses the seminal works in capital market research in accounting, Ball and Brown (1968) and Beaver (1968). By showing that accounting numbers convey information that is reflected in stock prices, these works paved the way for capital market research in accounting. Concurrent developments include the use of positive theory in accounting research, the efficient market hypothesis, the capital asset pricing model, and the event study and association approaches of Fama et al. (1969). In the rest of the paper, Kothari focuses on capital market research in the 1980s and 1990s. The topics covered include 1) methodological capital market research; 2) evaluation of alternative accounting performance measures; 3) valuation and fundamental analysis; 4) tests of market efficiency; and 5) the value relevance of disclosures. Methodological capital market research is concerned with issues such as the earnings response coefficient; the properties of time-series, management, and analysts' forecasts of earnings and earnings growth rates; statistical inference; and models of discretionary/nondiscretionary accruals. Kothari reviews each of these subtopics and offers suggestions for future research.
Basu (1997)
Research Question This paper introduces the Basu model of conservatism, which captures the differential sensitivity of contemporaneous earnings to negative news versus positive news. Basu defines conservatism as accountants' tendency to require a higher degree of verification for recognizing good news than bad news, which implies differences between bad news and good news in the timeliness and persistence of earnings. Contribution The literature on conservatism has long predicted that negative earnings changes are more likely to reverse than positive earnings changes (Watts 1993). This study confirms that prediction and sheds light on the timeliness and persistence properties of conservatism. The Basu model introduced here is widely used in the conservatism literature and is seen as a benchmark model. Hypothesis Hypothesis 1: The slope coefficient and R2 from a regression of annual earnings on annual unexpected returns are higher for negative unexpected returns than for positive unexpected returns. Hypothesis 2: The increase in the timeliness of earnings over cash flow is greater for negative unexpected returns than for positive unexpected returns. Hypothesis 3: Negative earnings changes have a greater tendency to reverse in the following period than positive earnings changes. Hypothesis 4: In a regression of announcement-period abnormal returns on earnings changes, the slope on positive earnings changes is higher than on negative earnings changes. Method/Results To test hypothesis 1, Basu regresses annual earnings on current annual returns, using negative and positive unexpected annual returns to proxy for bad news and good news. The slope coefficient and the R2 of the regression for negative annual returns are higher than those for positive annual returns, consistent with earnings being more timely in reflecting negative news than positive news. Similarly, for hypothesis 2, Basu regresses various measures of cash flow and earnings on annual returns, including dummy variables for negative returns to compare differences in timeliness with respect to bad news. He uses three different variables: XE is earnings before extraordinary items, CFO is cash flow from operations, and CFOI is cash flow from operating and investing activities. For negative returns, the slope and the R2 increase monotonically across the three variables, consistent with accruals making earnings more timely in reporting bad news but not good news. To test hypothesis 3, Basu regresses earnings changes, deflated by beginning-of-period price, on lagged deflated earnings changes for samples of positive and negative earnings changes. The slope coefficient on lagged earnings changes is insignificantly different from zero for positive earnings changes but significantly negative for negative earnings changes, consistent with positive earnings changes being permanent and non-reversing and negative earnings changes being transitory. Finally, for hypothesis 4, Basu regresses the abnormal return during the earnings announcement months on changes in earnings for positive versus negative earnings changes. He finds a greater slope coefficient for positive earnings changes than for negative earnings changes, consistent with the market recognizing good earnings news as more persistent than bad earnings news.
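The hypothesis 1 test reduces to a piecewise-linear specification that is easy to sketch (column names hypothetical):

```python
import statsmodels.formula.api as smf

def basu_regression(df):
    """Piecewise-linear Basu regression: earnings on returns, a negative-return
    dummy, and their interaction (columns earnings, ret hypothetical)."""
    df = df.assign(neg=(df["ret"] < 0).astype(int))
    return smf.ols("earnings ~ neg + ret + neg:ret", data=df).fit()
```

The coefficient on neg:ret measures the incremental sensitivity of earnings to bad news; a significantly positive estimate indicates conservatism.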
Beaver (1968)
Research Question This paper is one of the first to study the information content of earnings by looking at investors' reactions to earnings announcements. Rather than specifying an expectations model of how investors relate reported earnings to market prices, the study examines the variability of stock returns and trading volume around earnings announcements. Results The author finds a dramatic increase in volume during the announcement week (mean volume is 33% larger than during non-announcement weeks). In addition, he finds below-normal volume in the eight weeks prior to the announcement but above-normal volume in the four weeks after the announcement, consistent with investors postponing their trades until the earnings announcement. The author checks that the abnormally high volume during the earnings announcement was not induced by other market-wide events. A price analysis likewise finds that earnings announcements lead to greater changes in the price of the security than during the non-announcement period. Using residuals from the Sharpe market model, the author first separates out the portion of the stock's price change that cannot be accounted for by market-wide events. The residual is then squared, to remove the sign of the price change, and divided by the variance of the residuals during the non-announcement period. Comparing this ratio across announcement and non-announcement periods shows that the price change was 67% higher during the week of the announcement. Contributions This is one of the pioneering papers in capital market research and paved the way for a series of papers in the field. It is one of the first to show that financial information is relevant and that earnings announcements convey important information to investors. By looking at the contemporaneous association between earnings and returns, it is also a seminal work among association-type event studies. The paper supports the positive rather than the normative approach to research: the concern is whether investors react to earnings, not whether they should. Unlike Ball and Brown (1968), which uses a return-based index (the abnormal performance index) to measure the usefulness of earnings, this paper uses trading volume and price changes. While Ball and Brown (1968) infer that prices already reflect much of the earnings announcement information, this paper places more emphasis on the economic significance of earnings announcements. Method The sample uses the annual earnings announcements of 143 firms during the years 1961 to 1965. The author requires sample firms to meet the following criteria: 1) Compustat firm; 2) member of the NYSE; 3) fiscal year does not end on December 31st; 4) no dividend announcements around earnings announcements; 5) no stock splits around earnings announcements; 6) fewer than 20 news announcements per year in the WSJ. For each earnings announcement, the author calculates the average weekly trading volume (the weekly average of the daily percentage of shares traded) for each week of the 17-week window surrounding the announcement week. Normal trading volume is calculated using the average trading volume outside this 17-week window.
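Beaver's volume test can be sketched as a simple ratio of announcement-week volume to volume outside the 17-week event window; the array layout is a hypothetical simplification for a single announcement:

```python
import numpy as np

def announcement_volume_ratio(weekly_volume, announce_week, half_window=8):
    """Ratio of announcement-week volume to mean volume outside the 17-week
    event window; weekly_volume is a hypothetical 1-D array of the weekly
    average daily percentage of shares traded."""
    v = np.asarray(weekly_volume, dtype=float)
    lo = max(announce_week - half_window, 0)
    hi = min(announce_week + half_window + 1, len(v))
    outside = np.ones(len(v), dtype=bool)
    outside[lo:hi] = False  # mask out the 17-week event window
    return v[announce_week] / v[outside].mean()
```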
Watts (2003b)
Research Question This paper is part two of Watts' series on conservatism in accounting. Watts summarizes the empirical evidence on accounting conservatism and discusses whether the evidence is consistent with the alternative explanations for conservatism. He also examines potential avenues for research in this area. Measuring Conservatism Research on conservatism has employed a variety of measures: net asset measures, earnings and accrual measures, and earnings/stock returns relation measures.
1. Net asset measures. One can estimate the understatement of net assets under conservatism by looking at the ratio of the firm's book value of net assets to its equity value. Beaver and Ryan (2000) find lower net assets and book-to-market ratios for conservative firms.
2. Earnings and accrual measures. Since increases in asset values will not be recognized until they are realized, gains are going to be more persistent than losses: positive earnings and earnings increases are likely to be more persistent, while negative earnings are likely to be less persistent under conservatism. This is supported by Basu (1997) and Watts (1993), who find positive earnings changes to be persistent and negative earnings changes to be more transitory, though not fully transitory. In addition, since losses tend to be fully accrued while gains do not, accruals tend to be negative and cumulative accruals tend to be understated; Givoly and Hayn (2000) find that conservatism reduces cumulative accruals (for a firm in a steady state with no growth, accruals converge to zero).
3. Earnings/stock returns relation measures. Basu (1997) also looks at the relation between earnings and stock returns. Since conservatism recognizes losses on a more timely basis than gains, losses should be more contemporaneous with stock returns than gains. Basu finds that the coefficient and R2 from a regression of earnings on stock returns of the same year are higher for a sample of negative stock returns than for positive stock returns.
Evidence on conservatism explanations The time-series evidence suggests that all four explanations (contracting, litigation, taxation, and regulation) played a role in conservatism. Basu (1997) looks at conservatism in the US in four historical periods and finds evidence consistent with litigation generating conservatism, and Ahmed (2001) finds that conservatism increases for firms with more severe dividend payout conflicts. Earnings management and abandonment options cannot individually explain conservatism.
Burgstahler and Dichev (1997)
Research Question This paper presents evidence of managerial manipulation of earnings to avoid earnings decreases and losses. Using the cross-sectional distributions of earnings changes and earnings levels, the authors show that the frequency of small earnings decreases and small losses just below zero is abnormally low, while the frequency of small earnings increases and small positive earnings just above zero is abnormally high. The authors estimate that around 8-12% of firms with small earnings decreases and 30-44% of firms with small losses manipulate earnings. Results The extent of earnings management depends on the ex-ante cost of earnings management. The authors find that firms with high levels of current assets or current liabilities are more likely to manage earnings from a negative to a positive level (a downward shift in the conditional distributions of current liabilities and current assets to the left of zero, and an upward shift in the conditional distributions to the right of zero). Regarding the method of earnings management, the authors find that both components of earnings, cash flow from operations and changes in working capital, are used to manage earnings. The upper and median quartiles of the conditional distribution of cash flow from operations shift upward between the left and the right of zero. The same holds for accruals, except for the lower quartile of the distribution, which shifts downward going from negative to positive; this inconsistency can be explained by the strong negative correlation between cash flow from operations and changes in working capital. The authors present two theories to explain the earnings management behavior. One is a transaction-cost explanation, which posits that managers manage earnings upward to obtain better terms in transactions with stakeholders: since stakeholders are likely to use heuristic cutoffs at zero earnings changes or zero earnings to determine the terms of transactions with the firm, managers will want to avoid reporting earnings decreases and losses. The second explanation is based on prospect theory, which postulates that decision-makers derive value from gains and losses relative to a reference point, in this case zero, rather than from absolute levels; thus, for the same increase in earnings, the corresponding increase in value is greatest when moving from loss to gain around the reference point. Contribution This study builds on the earnings management literature, focusing in particular on the avoidance of earnings decreases. Prior work such as Hayn (1995) has shown a discontinuity around zero, with a greater concentration of earnings just above zero and fewer just below. This study furthers the literature by showing why and how firms avoid reporting earnings decreases and losses. Method The main method in this study is the use of cross-sectional empirical distributions of scaled earnings changes and levels of earnings. Statistical tests are performed to test whether earnings are managed to avoid decreases and losses, with the null hypothesis that the density of the distribution of earnings changes is smooth around zero.
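A simplified sketch of the distribution test: compare the count in the histogram bin just below (or above) zero with the average of the two adjacent bins, scaled by an estimate of the standard deviation of that difference under the smoothness null. The variance formula below follows the form used in this literature but should be treated as an assumption here:

```python
import numpy as np

def standardized_difference(counts, i):
    """Test statistic for histogram bin i: actual count minus the average of
    the two adjacent bins, scaled by an estimated standard deviation under the
    null of a smooth distribution (a simplified sketch, not the paper's code)."""
    counts = np.asarray(counts, dtype=float)
    expected = (counts[i - 1] + counts[i + 1]) / 2
    n = counts.sum()
    p_prev, p_i, p_next = counts[i - 1] / n, counts[i] / n, counts[i + 1] / n
    # assumed variance of (actual - expected) under the smoothness null
    var = n * p_i * (1 - p_i) + 0.25 * n * (p_prev + p_next) * (1 - p_prev - p_next)
    return (counts[i] - expected) / np.sqrt(var)
```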
Graham, Harvey, and Rajgopal (2005)
Research Question This paper surveys managers to better understand the factors that drive their corporate financial reporting decisions. The main takeaway is that managers place a great deal of importance on earnings (as opposed to cash flow) and are willing to sacrifice long-term firm value to smooth earnings over the short term. They are also more likely to use real earnings management than accounting-based earnings management. Key Points CFOs believe earnings, not cash flow, to be the most important measure, which is interesting because the finance literature has focused mostly on cash flows. Distressed firms, younger firms, and firms with less earnings guidance are found to place more emphasis on cash flows. Earnings are benchmarked against quarterly earnings for the same period last year and the analyst consensus estimate; the former is deemed the more important benchmark, and the latter becomes more important as analyst coverage increases. CFOs prefer to maintain stability and predictability in earnings, but to do so they face the choice of trading off long-term firm value for temporary smoothing. CFOs admitted to reducing spending on R&D and other discretionary expenses and to putting off positive-NPV projects for the purpose of smoothing earnings. Interestingly, they are more willing to employ real earnings management than accounting-based earnings management. CFOs rationalize that sacrificing economic value for stability in earnings is the lesser evil compared to the market reaction to missed earnings: the market expects firms to smooth earnings to a certain degree, and missing a target implies that the firm cannot find the money to hit the target, which is a sign of a bigger problem. CFOs claim to be motivated by concerns about adverse stock price reactions and their reputation and career concerns, while dismissing their compensation incentives. Interestingly, they are not overly concerned about bond covenant violations. CFOs prefer stable earnings because volatility in earnings commands a risk premium in the market; this partly explains why they issue disclosures and guidance, in the hope that reliable and precise information can reduce the firm's information risk. Comments It is not clear whether managers are motivated by their compensation incentives in exercising accounting discretion or whether they are altruistically working in the best interest of the firm. Obviously, they will claim that they are motivated by adverse stock price movements or the firm's reputation with stakeholders, but the fact remains that their compensation is tied to the firm's earnings. It would be interesting to know how the firm sets its "internal" earnings target for determining compensation and how that target compares with the external consensus target.
Kothari, So, and Verdi (2016)
Research Question This paper reviews the literature on sell-side analysts' forecasts and their implications for asset pricing. To understand how analysts influence market prices, the authors begin with an examination of the supply and demand forces shaping the properties of analysts' outputs. They shed light on how analysts produce their information and on their incentives to convey the information accurately and without bias. The authors then examine how analyst forecasts can influence asset prices through the two components of security returns: cash flow news and discount rate news. Overview of each topic 1. Properties of analysts' forecasts The influence of analysts' forecasts on asset prices depends on the properties of those forecasts, which are shaped by the nature of the information and by analysts' incentives. The two properties that have received the most attention in the literature are forecast accuracy and forecast bias, both of which are functions of the complexity of the task, the skill of the analyst, and the incentives the analyst faces. Generally, the more complicated the task and the less skilled and motivated the analyst, the lower the accuracy. Analyst incentives, such as the need to establish reputational credibility, obtain higher compensation, secure investment banking business, and generate trading revenues, can affect both accuracy and bias. a) Forecast accuracy Forecast accuracy refers to the absolute difference between analysts' forecasts and actual earnings. Forecast accuracy decreases with measures of uncertainty such as firm complexity and volatility in earnings (or returns), and increases with analyst experience, skill, and available resources such as employer/broker size (Kross, Ro, and Schroeder 1990; Clement 1999). It also declines with the forecast horizon and with the number of firms and industries followed by the analyst (Clement 1999; Brown and Mohd 2003). In addition, studies have examined whether more accurate forecasts are associated with higher compensation. The evidence suggests that compensation does not drive forecast accuracy and that brokers do not use accuracy as the primary benchmark for determining analysts' compensation (due to the free-rider problem and noise in the measure). Instead, higher accuracy affects analysts' career prospects and reputation (a higher likelihood of being named an all-star and a lower probability of dismissal). b) Forecast bias Bias refers to the signed difference between analysts' forecasts and actual earnings, and it varies in the cross-section and over the forecast horizon. The literature documents various sources of bias in analysts' forecasts, including analysts' incentives to ingratiate themselves with management, promote the investment bank's underwriting business, or generate trading revenue for the broker. Analysts may optimistically bias their forecasts to gain access to private information (Chen and Matsumoto 2006) or to maintain or expand the firm's underwriting business. Managers have parallel incentives to cultivate personal relationships with analysts to deter them from forecasting negative news (Clement 2008). While analysts could be compromised by closer ties to management, they could also gain access to private information that allows them to make more accurate forecasts.
For example, Cohen, Frazzini, and Malloy (2010) show that analysts with shared backgrounds with management (e.g., education) make less biased forecasts and more profitable investment recommendations. Analysts' incentives to bias their forecasts optimistically are offset by the need to maintain their reputation: many studies document that analysts are sufficiently concerned about their reputation to refrain from issuing biased and inaccurate forecasts. Management may not necessarily desire optimistic forecasts either; in fact, managers may prefer analysts to bias their forecasts downward so as to allow the firm to beat its earnings target. c) Role of regulation Regulation can also influence analysts' behavior. In particular, Reg FD and the Global Settlement have been shown to reduce analysts' bias by mitigating potential conflicts of interest. Studies show that following Reg FD, analysts' forecasts became less precise, with higher forecast dispersion and lower analyst coverage, consistent with analysts reducing their private communications with management. The Global Settlement is an enforcement agreement separating the investment banking and research branches within a broker, designed to mitigate the potential conflicts of interest that analysts face; Kadan et al. (2009) and Barber et al. (2006) show that analysts became more pessimistic after the settlement. 2. Analysts' forecasts and cash flow news Analysts' earnings forecasts can influence asset prices through their effect on the market's expectations of cash flows. Studies have examined whether analysts' forecasts can be used as a proxy for the market's expectation of future earnings (Fried and Givoly 1982, Brown et al. 1987). Since Fried and Givoly (1982), it has become standard practice in the literature to use analysts' forecasts as a proxy for market expectations. a) Information content of analysts' earnings forecast revisions Given that analysts' forecasts can proxy for the market's expectations about future cash flows, subsequent studies examine whether analyst forecast revisions provide incremental information to the market, as reflected in stock prices. They find that analyst forecast revisions are indeed associated with changes in stock prices, trading activity, and liquidity. b) Do investors fully react to analysts' forecast revisions? The market reaction to earnings forecast revisions is, however, incomplete around the time of the revision: the market underreacts, and the underreaction is followed by a predictable drift subsequent to the forecast. This suggests that the market is informationally inefficient, as the initial price reaction to analysts' forecast revisions can be used to predict subsequent returns. The underreaction could stem from two potential sources: market frictions and investors' information-processing biases. Market frictions can slow the diffusion of information and delay the price reaction to forecast revisions. Information-processing biases refer to investors' sensitivity to specific attributes of the forecast revision, such as analyst reputation. Gleason and Lee (2003) find that the post-revision drift decreases with analyst reputation and the number of analysts following the firm, but increases with revision quantity. Investors also underreact to revisions from less well-known analysts and from analysts at smaller brokers. c) Do investors unravel predictable biases in analysts' forecasts?
The evidence in the literature suggests that investors only partially unravel the biases in analysts' forecasts. The degree of unravelling depends on investor characteristics and sophistication: sophisticated institutional investors have a much better understanding of the factors that drive forecast accuracy than unsophisticated retail investors, while small investors, for the most part, naively fixate on analysts' forecasts, which may help explain anomalies such as the post-earnings-announcement drift.
Ball, Jayaraman, and Shivakumar (2012)
Research Question This paper tests the complementarity of audited financial reporting and voluntary disclosure. The confirmation hypothesis posits that audited financial reporting and voluntary disclosure are complements, since a commitment to higher-quality disclosure requires independent verification of the reported information, which is costly for investors to verify directly. Thus, firms that devote more resources to higher-quality management forecasts should also commit more resources (in the form of higher audit fees) to ensuring the credibility of the information. The authors report evidence consistent with the confirmation hypothesis. In particular, they find that forecasting and audit activities are positively correlated: a one-standard-deviation increase in forecast frequency, accuracy, specificity, and horizon is associated with a 4.6%, 2.5%, 8.4%, and 4.9% increase in excess audit fees, respectively. In addition, the market reaction to management forecast disclosures is an increasing function of the amount of excess audit fees. Contribution This paper contributes to the confirmation hypothesis by showing that resource allocation decisions for voluntary disclosure and independent audit are made jointly. It suggests that one mechanism by which firms can increase the credibility of their disclosures is a commitment to independent audit. Method To examine the properties of voluntary forecasts, the authors focus on the frequency, timeliness, specificity, and accuracy of management forecasts. Frequency is the number of annual and quarterly EPS forecasts made during the year. Specificity refers to the precision of the forecasts and is coded 4, 3, 2, and 1 for point, range, open-ended, and qualitative estimates, respectively. Timeliness (horizon) is measured as one plus the number of days between the fiscal period end and the forecast date. Accuracy is calculated as negative one times the absolute value of the difference between the management forecast (for point and range forecasts) and actual earnings, scaled by the absolute value of actual earnings. To proxy for the extent of financial statement verification, the authors use the amount of excess audit fees paid by the firm. Stock return volatility is measured as the CAR for the three-day window around the management forecast, standardized by the standard deviation of excess returns in the non-announcement period (-45 to -10 days relative to the forecast date). Abnormal volume is calculated as the average log turnover in the three-day window minus the average log turnover in the non-announcement period, standardized by the standard deviation of log turnover in the non-announcement period. To test the confirmation hypothesis, the authors estimate a regression of excess audit fees on the management forecast attributes. Various variables control for audit complexity, including ACCR (total accruals scaled by total assets), CURRENT (the ratio of current assets to total assets), FOREIGN (the ratio of foreign segment sales to total sales), and SEG (the number of business segments); a few other variables control for audit risk: ROA, LIAB, LOSS, and LAG. In a second regression, the voluntary disclosure variables (frequency, timeliness, specificity, and accuracy) are regressed individually on excess audit fees, controlling for the other variables. The two regressions are estimated as part of a simultaneous equation system.
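The forecast attributes could be assembled per firm-year roughly as follows, with hypothetical column names for the forecast file (mf_eps, actual_eps, form, mf_date, period_end):

```python
import pandas as pd

SPECIFICITY = {"point": 4, "range": 3, "open_ended": 2, "qualitative": 1}

def forecast_attributes(f: pd.DataFrame) -> pd.Series:
    """Firm-year forecast attributes as described above; f holds one firm-year's
    forecasts with hypothetical columns mf_eps, actual_eps, form, mf_date,
    period_end (the date columns as datetimes)."""
    accuracy = -(f["mf_eps"] - f["actual_eps"]).abs() / f["actual_eps"].abs()
    return pd.Series({
        "frequency": len(f),
        "specificity": f["form"].map(SPECIFICITY).mean(),
        "horizon": (1 + (f["period_end"] - f["mf_date"]).dt.days).mean(),
        # accuracy is defined only for point and range forecasts
        "accuracy": accuracy[f["form"].isin(["point", "range"])].mean(),
    })

# usage sketch: attrs = forecasts.groupby(["firm", "year"]).apply(forecast_attributes)
```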
Shleifer and Vishny (1997)
Research Question This paper tries to explain why the textbook assumptions about arbitrage do not describe realistic arbitrage trades. Unlike the textbook description of arbitrage, which requires no capital and entails no risk, real-life arbitrage requires a substantial amount of capital to execute trades and cover potential losses. Small naïve investors are unlikely to possess the knowledge or information needed for arbitrage, so it is conducted by relatively few highly specialized investors. The authors show that arbitrage tends to become ineffective precisely in circumstances in which prices diverge far from fundamental values, and arbitrageurs are likely to bail out just when they are most needed. Several factors contribute to this. First, those with knowledge and those with capital are two different groups of people: arbitrageurs invest on behalf of clients who provide the capital, which creates agency problems between arbitrageurs and their clients. When prices move far from fundamental values, the opportunity for arbitrage is greatest, but clients who do not understand the opportunity become distrustful of arbitrageurs and withdraw their money. Thus, arbitrage has limited effectiveness in eliminating market inefficiency when prices diverge far from fundamental values. Contributions This paper contributes to the literature on market efficiency by highlighting the limited effectiveness of arbitrage in bringing prices to fundamental value. It corrects the simple textbook understanding of arbitrage and shows that capital requirements, risk, and agency problems can impede its effectiveness. The paper provides a better context for understanding anomalies: why they exist and why arbitrage cannot eliminate them. For example, the persistence of the glamour/value anomaly can be explained by arbitrageurs liquidating their positions just when they are needed to correct the mispricing.
Ball, Kothari, and Robin (2000)
Research Question This study compares common-law countries with code-law countries to see whether countries' institutional underpinnings affect their accounting practices. The authors hypothesize that code-law countries have comparatively strong political influence on accounting at the national and firm level, with governments establishing and enforcing national accounting standards with input from major political groups. This forms more of a "stakeholder" governance model than the "shareholder" governance model typical of common-law countries. In comparison with the more politically determined accounting practices in code-law countries, accounting properties in common-law countries are determined primarily in the private sector with input from shareholders. The accounting properties arising from the shareholder governance model entail asymmetric conservatism. Indeed, they find that common-law countries are more timely in incorporating economic losses, while code-law countries rely more on institutional features to resolve information asymmetry. Method The authors proxy for political/institutional influence by classifying countries into code-law and common-law systems. The common-law countries include Australia, Canada, the UK, and the USA, and the code-law countries include France, Germany, and Japan. The main specification follows the Basu model, regressing net income on stock returns and allowing the slope to differ depending on whether the return is positive or negative (a reconstruction is sketched below). RD is a dummy variable that takes the value of one if the return is negative and zero otherwise. β2 captures the incorporation of positive economic income and β3 the incremental incorporation of negative economic income. The regressions are estimated at the individual-country level with observations pooled across time, and using annual Fama-MacBeth cross-sectional regressions for individual countries. A pooled-country regression is also used to test for differences among countries. A secondary sample of 18 other countries yields similar results. Contribution This study contributes to the literature on international corporate governance and the literature examining differences in accounting properties across countries. It focuses on the implications of a country's institutional features for its accounting properties.
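The specification itself is not reproduced in these notes; a plausible reconstruction consistent with the variable definitions above (the standard Basu 1997 form) is:

$$\frac{NI_{it}}{P_{it-1}} = \beta_0 + \beta_1 RD_{it} + \beta_2 R_{it} + \beta_3 R_{it}\times RD_{it} + \varepsilon_{it}$$

where $R_{it}$ is the fiscal-year stock return, $RD_{it}=1$ if $R_{it}<0$, and net income is scaled by the usual Basu deflator (beginning-of-period price). Gain timeliness is $\beta_2$ and loss timeliness is $\beta_2+\beta_3$, so asymmetric conservatism corresponds to $\beta_3>0$.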
Lambert, Leuz, and Verrecchia (2007)
Research Question This study develops a framework that links the disclosure of accounting information to the firm's cost of capital. Using a multi-security economy model that expresses the CAPM in terms of cash flows rather than returns, the authors show that improvements in information quality can affect the firm's nondiversifiable risk (a firm's beta factor is a function of information quality). Information quality can directly affect the cost of capital by affecting market participants' perceptions of the distribution of future cash flows, and indirectly affect the cost of capital by influencing real decisions. The direct effect is predicated on disclosure quality changing the assessed covariances between a firm's cash flows and other firms' cash flows. The indirect effect is predicated on higher-quality disclosures changing the firm's real decisions, which changes the ratio of the firm's expected future cash flows to the covariance of those cash flows with the sum of all firms' cash flows. A stylized rendering of the pricing logic follows below.
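As a rough sketch of the mechanism, using a generic one-period certainty-equivalent CAPM form (not the paper's exact notation or derivation):

$$P_j = \frac{1}{R_f}\left(E[\tilde V_j] - \lambda\,\mathrm{Cov}\!\Big(\tilde V_j,\ \sum_k \tilde V_k\Big)\right), \qquad \text{cost of capital}_j = \frac{E[\tilde V_j]}{P_j} - 1$$

where $\tilde V_j$ is firm $j$'s end-of-period cash flow, $R_f$ the gross risk-free rate, and $\lambda$ a market risk-aversion parameter. Disclosure quality enters through the assessed covariance term, so better information can lower the cost of capital even though the risk is nondiversifiable.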
Schoar and Zuo (2017)
Research Question This study documents the effect of economic conditions at the start of a CEO's career on the CEO's long-term career path and management style. It finds that CEOs who begin their careers during recessions tend to have more conservative styles: they are more likely to cut costs, invest less in capital expenditures and research and development, and maintain lower leverage and working capital. These differences in management style result from a combination of the "recession effect" and differences in firm characteristics, as firms endogenously select CEOs with the desired management style. Recession CEOs tend to hold fewer positions but reach the CEO position more quickly; they tend to work for smaller firms and receive lower total compensation as CEOs. There are two potential channels through which a recession could shape the CEO's management style: a "general recession channel" and a "firm-specific channel." The general recession channel refers to the recession environment teaching CEOs a different set of skills and attitudes (such as managerial techniques to preserve resources or cut costs). The firm-specific channel works through the recession's effect on the CEO's initial job placement: during a recession, CEOs may have trouble placing into traditional employers and thus opt for smaller or private firms, and the characteristics of the firms where they begin their careers could influence their management style. The evidence suggests that the firm-specific channel is more important in explaining CEO management styles. Contribution This study contributes to the literature on managerial behavior by looking at the effect of economic conditions on the CEO's management style. It is along the same line of research as Malmendier, Tate, and Yan (2011) and Graham and Narasimhan (2004) but focuses on CEOs' entrance into the job market rather than their upbringing (or recessions that hit while they are already CEO). Method The sample comes from Execucomp, merged with other datasets of CEO career profiles and demographic characteristics. Following prior studies, CEOs are required to have worked for at least three years at a firm. The CEO's career starting date is proxied by his birth year plus 24, which avoids the issue of smart CEOs strategically delaying entry into the labor market during a recession. A CEO is coded as a "recession" CEO if he started his career during a recession (see the sketch below). Recession CEOs are then compared with non-recession CEOs along a number of dimensions (time to CEO, age at becoming CEO, average tenure, number of firms worked for, etc.), controlling for industry fixed effects (one-digit SIC) and decade fixed effects. To test whether the CEO's management style is shaped by the general recession channel or the firm-specific channel, the authors rerun the regressions controlling for the characteristics (firm size and private vs. public status) of the manager's first job. The last part of the paper examines the effect of CEOs' early career experience on their management style; the outcome variables include the firm's capital expenditures, R&D, SG&A, leverage, working capital, sales growth, effective tax rate, return volatility, and ROA. All regressions control for decade fixed effects, firm fixed effects, and industry-year fixed effects.
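A minimal sketch of the recession-CEO coding described above (the recession-year set here is an illustrative subset of NBER recession years, not the paper's exact list):

```python
import pandas as pd

# Illustrative NBER recession years (assumption; not the paper's exact list).
RECESSION_YEARS = {1970, 1974, 1975, 1980, 1981, 1982, 1990, 1991, 2001}

ceos = pd.DataFrame({"ceo_id": [1, 2], "birth_year": [1950, 1957]})

# Career start proxied as birth year + 24, sidestepping strategic delay of entry.
ceos["start_year"] = ceos["birth_year"] + 24
ceos["recession_ceo"] = ceos["start_year"].isin(RECESSION_YEARS).astype(int)
print(ceos)
```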
Cohen, Dey, and Lys (2008)
Research Question This study examines the effect of SOX on firms' earnings management behavior. It shows that firms in the post-SOX era have shifted from accrual-based earnings management to real earnings management. This shift is consistent with a greater need to avoid detection of accrual-based earnings management, inducing managers to instead employ real earnings management (which is harder to detect). In contrast, the pre-SOX era is characterized by increases in accrual-based earnings management, which is associated with a contemporaneous increase in option-based compensation. The different components of option-based compensation have different implications for discretionary accruals: while new option grants are negatively associated with income-increasing discretionary accruals, unexercised options are positively associated with them. Contribution Consistent with Graham et al. (2005), the authors corroborate the claim that managers switched from accrual-based to real earnings management following SOX. Managers are motivated by benchmarks such as last year's earnings, the consensus analyst forecast, and the zero earnings threshold, and also by their earnings-based compensation contracts. One motivation for this study is the series of highly publicized accounting scandals prior to the passage of SOX; investigating this period, the authors find not just a few highly publicized events but an environment of widespread manipulation of accrual-based earnings. Method The sample covers the pre-SOX period from 1987 to 2001 and the post-SOX period from 2002 to 2005; the period from 2000 through 2001 is classified as the SCA period, the period that led to the passage of SOX. Discretionary accruals are estimated using the modified Jones model, estimated using all firm-year observations in the same 2-digit SIC code (a reconstruction of the specification follows below). Unlike Dechow et al. (1995), which estimates the modified Jones model in a time-series setting, the model is estimated cross-sectionally here. The coefficient estimates from the regression are used to compute firm-specific normal accruals, and discretionary accruals are simply the difference between total accruals and fitted normal accruals: DA = TA/Assets − NA. The authors use the absolute value of discretionary accruals as the proxy for accrual-based earnings management. The proxies for real earnings management come from Roychowdhury (2006) and include abnormal levels of cash flow from operations, discretionary expenses, and production costs. Firms that manage real earnings are likely to have unusually low cash flow from operations, unusually low discretionary expenses, and unusually high production costs. The authors compute these three proxies individually and together as RM_PROXY, the sum of the three standardized variables. See the Roychowdhury (2006) summary for the estimation of these proxies.
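The modified Jones specification is not reproduced in these notes; the standard cross-sectional form, consistent with the description above, is:

$$\frac{TA_{it}}{Assets_{it-1}} = k_1\frac{1}{Assets_{it-1}} + k_2\frac{\Delta REV_{it}}{Assets_{it-1}} + k_3\frac{PPE_{it}}{Assets_{it-1}} + \varepsilon_{it}$$

estimated by 2-digit SIC industry and year, with normal accruals then fitted using the receivables-adjusted revenue change:

$$NA_{it} = \hat k_1\frac{1}{Assets_{it-1}} + \hat k_2\frac{\Delta REV_{it}-\Delta AR_{it}}{Assets_{it-1}} + \hat k_3\frac{PPE_{it}}{Assets_{it-1}}$$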
Kothari, Mizik, and Roychowdhury (2016)
Research Question This study examines the role of accrual versus real earnings management in inducing overvaluation in seasoned equity offerings. The authors find that real earnings management, as opposed to accruals management, can more reliably induce overvaluation because it is more opaque. The overvaluation is manifested in the form of future stock return reversals as the overvaluation is corrected. Similar to prior studies, they also look at the operating performance of SEO firms in the years following the SEO: firms using real earnings management to overstate earnings experience negative operating performance in each of the three years following the SEO, while using accruals to overstate earnings results in a less severe reversal. Contribution This paper contributes to the literature on accrual versus real earnings management by studying the extent to which the market is able to unravel earnings management. In contrast to Cohen and Zarowin (2010), which finds earnings manipulation to be associated with future earnings performance, this study focuses on the opacity of earnings management in terms of the extent to which the market can detect it. Method The authors use abnormal R&D and abnormal accruals as proxies for real earnings management and accruals management, respectively. Firms are sorted on the signs of abnormal R&D, abnormal accruals, and abnormal ROA into eight groups; groups 1 through 4 are firms with high earnings that may have used accruals and real earnings management. The authors estimate abnormal R&D from a model (not reproduced here) in which the market-adjusted R&D series is regressed on lagged R&D, where Sales_{t-1} is the value of sales in period t−1. Firm-specific effects are controlled for by differencing annual R&D expenditure from the cross-sectional mean for that year, and year-specific effects by differencing the annual deviation of R&D from the cross-sectional mean by the corresponding deviation in the previous year. Abnormal R&D is obtained by subtracting from the firm-year residuals of the regression the mean residual across all years for the corresponding firm. Total accruals are estimated using the modified Jones model (see the reconstruction in the Cohen, Dey, and Lys summary above). Abnormal ROA is estimated from a model in which ROA is regressed on lagged ROA (sketched below); abnormal ROA is the difference between actual and predicted ROA. Abnormal stock returns are estimated by matching sample firms to control firms of similar size and book-to-market ratios on post-issuance characteristics. Control firms are chosen among all firms in the same two-digit SIC group not issuing an SEO, with a market value of equity between 70% and 130% of that of the sample firm.
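A minimal reconstruction of the ROA persistence model described above (the exact estimation sample and any additional controls are not reproduced in these notes):

$$ROA_{it} = \phi_0 + \phi_1\,ROA_{it-1} + \nu_{it}, \qquad AbROA_{it} = ROA_{it} - \widehat{ROA}_{it}$$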
Hirshleifer, Low, and Teoh (2012)
Research Question This study examines whether overconfident CEOs can help firms become more innovative. Prior studies have found mostly negative consequences of overconfidence; given the preference for unbiased beliefs, it is puzzling why firms employ overconfident managers and allow them to make investment and financing decisions. This study argues that overconfident CEOs do provide value for the firm by better exploiting growth opportunities and making more innovative investments, with implications mostly for firms in industries where innovation is important. The authors find that overconfident CEOs have higher stock return volatility, which could reflect riskier projects. These CEOs make greater investments in R&D and are associated with higher levels of firm innovation, as proxied by patent counts and citation counts. Contribution This study contributes to the literature on managers' psychological biases, which mostly focuses on how such biases affect firm outcomes. Malmendier and Tate (2005, 2008) show that CEO overconfidence can result in investment distortions, and Hribar and Yang (2011) find that overconfident managers are more likely to issue optimistically biased forecasts. This study follows the same approach except that it documents the potential benefits of overconfidence. Method To measure the CEO's level of confidence, the authors use two different proxies. The first is based on whether CEOs exercise their stock options when the options are in the money. This is a commonly used yet flawed proxy originally employed in the Malmendier and Tate series of papers. A CEO is classified as overconfident if he did not exercise options that were at least 67% in the money. The average moneyness of the options is calculated as the stock price divided by the estimated strike price minus one, where the estimated strike price is the year-end stock price minus the average realizable value, and the average realizable value is the total realizable value of the options divided by the number of options (see the sketch below). The CEO is treated as perpetually overconfident once he is identified as overconfident. The second proxy is based on textual analysis of press articles: the authors retrieve press articles about the firm-CEO pair for each year and compare the number of articles using terms in the "confident" categories with the number using terms in the "cautious" categories; the CEO is classified as confident if the former exceeds the latter. The authors use several measures to proxy for the firm's level of innovation. First, they look at the amount of resources the firm puts into innovation, calculated as R&D scaled by book assets. Next, they look at the number of patents applied for during the year, using data from the NBER patent database. Finally, they look at the total subsequent citation count for these patents to gauge their importance. To adjust for time truncation bias, they multiply each patent's citation count by the weighting index from Hall, Jaffe, and Trajtenberg (2005); an alternative approach scales the citation count by the average citation count of all patents in the same technology class and year.
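A sketch of the options-based overconfidence proxy as described above (hypothetical inputs; this follows the note's definitions, not necessarily every implementation detail in the paper):

```python
def average_moneyness(stock_price: float,
                      realizable_value: float,
                      n_options: float) -> float:
    """Moneyness = price / estimated strike - 1, where the strike is backed out
    as year-end price minus the average realizable value per option."""
    avg_realizable = realizable_value / n_options
    est_strike = stock_price - avg_realizable
    return stock_price / est_strike - 1

# CEO flagged overconfident if holding options at least 67% in the money.
moneyness = average_moneyness(stock_price=50.0, realizable_value=210.0, n_options=10.0)
overconfident = moneyness >= 0.67
print(round(moneyness, 2), overconfident)  # 0.72 True
```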
Roychowdhury and Sletten (2012)
Research Question This study explores the role of the earnings reporting process in limiting delays in the release of bad news. The prospect of an upcoming earnings announcement forces managers either to disclose bad news voluntarily or to release it at the earnings announcement. The authors compare earnings informativeness in bad-news and good-news quarters and find that earnings informativeness is higher in bad-news quarters. The results are robust to controlling for various factors, including fixed effects. The differential informativeness is more pronounced for firms that do not voluntarily disclose the news, among firms with greater information asymmetry between managers and shareholders, and when managers are net insider sellers. The differential earnings informativeness also varies across quarters, with a peak in the third quarter and a slight decline in the fourth, suggesting that the annual audit and greater information gathering by market participants in the fourth quarter may prompt managers to voluntarily disclose more before earnings announcements. Contribution One contribution of this paper is in bringing together the various streams of thought regarding the role of the earnings reporting process: conveying new information to investors and confirming information that has already reached the market through other channels (Ball et al. 2011, Beyer et al. 2010). Earnings announcements play a confirmation role when they force managers to disclose bad news voluntarily; if managers do not disclose the bad news voluntarily, earnings announcements serve the role of releasing it to the market. Method The authors look at returns in the three days around the quarterly earnings announcement (EAR) to gauge the incremental news released by earnings. Specifically, EAR is the market-adjusted buy-and-hold return over the three-day earnings announcement window. News released during the earnings quarter (RET) is captured by the market-adjusted buy-and-hold return from two days after the previous quarter's earnings announcement to the day after the current quarter's announcement. The non-announcement return is calculated as NEAR = (1+RET)/(1+EAR) − 1. News that reaches the market during the three-day announcement window is compared with news released during the rest of the announcement quarter to assess whether earnings informativeness at announcements is greater. Specifically, earnings informativeness (NEWS_RATIO) is defined as 100 times the ratio of the absolute value of the three-day announcement return (EAR) to the absolute value of the non-announcement return (NEAR), winsorized at the 1st and 99th percentiles (see the formulas below). To measure information asymmetry, the authors use five proxies: size (logarithm of market value of equity), analyst coverage (logarithm of one plus analyst following), institutional ownership (percentage of shares owned by institutions), idiosyncratic return volatility, and the bid-ask spread. Idiosyncratic return volatility is calculated as the volatility of daily firm returns in excess of market returns over the preceding quarterly period, excluding earnings announcements.
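Restating the return decomposition and informativeness measure compactly (directly from the definitions above):

$$NEAR = \frac{1+RET}{1+EAR} - 1, \qquad NEWS\_RATIO = 100\times\frac{|EAR|}{|NEAR|}$$

so a higher NEWS_RATIO means a larger share of the quarter's news arrives at the announcement itself rather than through other channels.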
Lewellen (2010)
Research Question This study gives an overview of the literature on accounting anomalies and fundamental analysis, covering five main topics: distinguishing between risk and mispricing explanations for return anomalies, estimating the implied cost of capital, inferring investors' perception of the earnings process, understanding the importance of trading costs and firm size for asset pricing tests, and improving the construction of trading strategies using ex ante measures of risk. Risk vs mispricing While the literature has produced strong t-statistics for various cross-sectional predictors of returns (earnings, cash flows, accruals, valuation ratios, asset growth), the main challenge is interpreting the results: are they due to risk or mispricing? Thus far, asset pricing theory has not produced a perfect measure of risk and expected returns, and Fama's joint-hypothesis problem remains an issue; overall, it is difficult to distinguish risk from mispricing with current methodology. Richardson, Tuna, and Wysocki (2010) point out three alternative tests to distinguish between risk and mispricing, all of which Lewellen dismisses as flawed. The implied cost of capital Lewellen believes that estimates of the implied cost of capital are likely to be noisy and biased. Realized returns, dismissed by some as unreliable, actually provide unbiased estimates of true expected returns. The implied cost of capital is difficult to validate empirically, and the literature provides little evidence that it is even positively correlated with future returns. At best, the implied cost of capital represents the long-run internal rate of return from holding the stock, not the short-run expected return; where the short-run expected return moves opposite to the long-run expected return, the implied cost of capital is a poor measure of expected returns. Lewellen questions whether the implied cost of capital is even useful for corporate managers, investors, or researchers: the discount rate used for capital budgeting should be determined solely by the risk of the project, not by an implied cost of capital that may not represent the returns required by investors. Mishkin tests Mishkin tests are often used to assess whether investors' perceptions of the earnings process differ from its true time-series properties (the canonical system is sketched below). Transaction costs and firm size In contrast to Richardson, Tuna, and Wysocki (2010)'s view that transaction costs prevent anomalies from being exploited in practice and so preclude a conclusion of market inefficiency, Lewellen believes that anomalies provide evidence against market efficiency regardless of whether they can be exploited in practice; transaction costs remain crucial for understanding the economic significance of an anomaly.
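For reference, the canonical two-equation Mishkin system (a textbook rendering, not Lewellen's notation), estimated jointly:

$$X_{t+1} = \alpha_0 + \gamma X_t + \varepsilon_{t+1}$$
$$AR_{t+1} = \beta\big(X_{t+1} - \alpha_0 - \gamma^{*} X_t\big) + u_{t+1}$$

where $X$ is earnings (or a component such as accruals) and $AR$ is the abnormal return. Market efficiency implies $\gamma^{*}=\gamma$: the persistence parameter priced by the market equals the true time-series persistence.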
Kothari, Shu, and Wysocki (2009)
Research Question This study investigates whether managers delay disclosures of bad news relative to good news. Managers have a range of incentives to delay bad news, including career incentives (withholding bad news may allow them eventually to bury it), the long-horizon effect (promotions within and outside the firm), and compensation incentives (bonuses and stock options tied to performance). On the other hand, managers may be incentivized to disclose bad news promptly if they are concerned about litigation risk or their reputation. To better understand the asymmetric disclosure behavior for good versus bad news, the authors examine the magnitude of the stock price reaction to two common corporate disclosure events: dividend changes and voluntary management earnings forecasts. Results The authors find that the magnitude of dividend changes is asymmetric between increases (25%) and decreases (47%). The five-day market reaction around a dividend increase is 1.5%, while the reaction to dividend decreases is much larger in magnitude at -2.7%. The more pronounced market reaction to dividend cuts is suggestive of an accumulation of bad news before the cut. Similarly, the five-day market reaction to bad-news earnings forecasts (-8.3%) is larger in magnitude than the reaction to good-news earnings forecasts (4.7%). The asymmetric market reaction to positive and negative news is mitigated considerably in the post-Reg FD period, consistent with Reg FD leveling the playing field with respect to leakage of good and bad news prior to management forecasts. Next, the authors examine how managers' incentives moderate the asymmetric market reactions to good and bad news. They find that the asymmetry is mitigated where litigation risk is high but is exacerbated by high information asymmetry between managers and investors, by managers' career concerns (managers face a high risk of turnover when firms are in financial distress), and by higher managerial stockholdings. Contribution This study contributes to the disclosure literature by documenting the asymmetric disclosure of positive versus negative news. Prior studies interpreted the asymmetric market reaction to positive versus negative news as evidence that bad news disclosures are more credible; in contrast, this study interprets the evidence as indicating that managers delay the disclosure of bad news to investors. Method The authors first investigate whether there is an asymmetric market reaction to announcements of dividend decreases compared to dividend increases. The dividend change is defined as the percentage change in dividends; only changes greater than 1% in magnitude that occur after one year of a stable dividend pattern are kept, and outliers at the (1, 99) percentiles are excluded. The market reaction to the dividend change is calculated as the five-day cumulative abnormal return around each announcement date, where abnormal returns are adjusted using the value-weighted market return (a sketch of the computation follows below). Then, the authors examine whether there is an asymmetric market reaction to management forecasts of bad versus good news. The sample of management forecasts comes from the First Call database for the period 1995 to 2002. The news in a management earnings forecast is defined as the difference between management's quarterly EPS forecast and the analyst consensus forecast; only forecasts with news greater than 1% in magnitude and an analyst consensus forecast greater than 5 cents per share are kept.
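A minimal sketch of the five-day market-adjusted CAR computation (toy data; the paper's exact return-adjustment details beyond the value-weighted market adjustment are not reproduced here):

```python
import numpy as np
import pandas as pd

def announcement_car(daily_ret: pd.Series, mkt_ret: pd.Series,
                     event_idx: int, window: int = 2) -> float:
    """Five-day market-adjusted CAR: sum of (firm return - value-weighted
    market return) over days -2..+2 around the announcement row."""
    lo, hi = event_idx - window, event_idx + window + 1
    return float((daily_ret.iloc[lo:hi] - mkt_ret.iloc[lo:hi]).sum())

# Toy example: announcement on day index 5 of an 11-day return series.
rng = np.random.default_rng(0)
firm = pd.Series(rng.normal(0, 0.02, 11))
mkt = pd.Series(rng.normal(0, 0.01, 11))
print(announcement_car(firm, mkt, event_idx=5))
```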
Durtschi and Easton (2005)
Research Question This study is motivated by Burgstahler and Dichev (1997), which uses the discontinuity at zero in the frequency distribution of scaled earnings to argue that managers manipulate earnings to avoid decreases and losses. In contrast, this paper finds that the frequency distributions of net income, basic earnings per share, and diluted earnings per share exhibit no discontinuity at zero; in fact, there are more observations with a one-cent-per-share loss than with a one-cent-per-share profit around zero. The authors attribute Burgstahler and Dichev (1997)'s results to the use of the deflator, to sample selection criteria that lead to differential selection of observations to the left and right of zero, and to differences in the characteristics of firms to the left and right of zero. Contribution This paper contributes to the earnings management literature by directly refuting the results of Burgstahler and Dichev (1997). It provides an alternative explanation based on sample selection criteria and the use of the deflator, with implications for other studies that use the shapes of frequency distributions of earnings metrics as evidence of managerial misconduct. Results In Burgstahler and Dichev (1997), as in most studies, a deflator is used to homogenize firms; the implicit assumption is that the deflator does not distort the underlying distribution of net income. The authors, however, find this assumption to be false. Deflating by price is problematic because the market prices firms reporting a profit differently from firms reporting a loss; the authors hypothesize and find that the observed discontinuity in the distribution of scaled earnings is caused by the deflator. Further replicating Burgstahler and Dichev (1997)'s results on the discontinuity in the frequency distribution of earnings changes, they find that earnings changes and earnings are highly correlated (Spearman correlation 0.39) and that the discontinuity can be similarly explained by the deflator and sample selection bias. Sample selection criteria requiring beginning-of-year prices also play a key role in causing the discontinuity: firms with small losses are more likely to have a missing beginning-of-year price than firms with small profits, so more small-loss firms are deleted from the sample. The IBES dataset also contains fewer firms with small losses than with small profits, as analysts tend to cover larger profitable firms while neglecting loss firms. These factors in combination explain the discontinuity. The shape of the distribution of analysts' forecast errors can also be explained by analyst optimism: forecast errors tend to be greater for optimistic than pessimistic forecasts, so small positive forecast errors cluster near zero while negative forecast errors lie farther from zero, producing a discontinuity. Method The main method is the same as that of Burgstahler and Dichev (1997) except for the use of the deflator and the sample selection criteria. To separate out the effect of the deflator, the authors look at distributions of earnings per share rather than deflated earnings. To separate out the effect of the sample selection criteria, they look at the distributions of earnings per share separately for firms with and without a beginning-of-year price (and at the frequency distribution of net income for all firms versus the subsample with beginning-of-year price available); a toy illustration follows below.
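A purely illustrative sketch of the kind of distributional comparison the paper performs, using simulated data (the shapes and parameters here are assumptions, not the paper's results):

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated EPS: the price-available subsample versus the price-missing subsample,
# with small-loss firms assumed more likely to lack a beginning-of-year price.
rng = np.random.default_rng(1)
eps_with_price = rng.normal(0.02, 0.10, 5000)
eps_no_price = rng.normal(-0.01, 0.10, 1000)

bins = np.arange(-0.25, 0.25, 0.01)
plt.hist(eps_with_price, bins=bins, alpha=0.6, label="price available")
plt.hist(eps_no_price, bins=bins, alpha=0.6, label="price missing")
plt.axvline(0.0, linestyle="--")  # inspect the region around zero
plt.legend()
plt.xlabel("EPS")
plt.ylabel("frequency")
plt.show()
```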
Cohen and Zarowin (2010)
Research Question This study looks at real and accrual-based earnings management activities around seasoned equity offerings (SEOs). While prior studies have mostly focused on accruals management, this paper shows that post-SEO operating underperformance is due not just to accruals but also to real manipulation of the firm's operations, and the decline in post-SEO performance due to real earnings management is much more severe. The firm's tradeoff between accrual and real earnings management is a function of the costs of each (the level of scrutiny by auditors, the potential penalties upon detection in the form of litigation, and the difficulty of achieving a given earnings target). Firms in high-litigation industries, and firms with Big 8 auditors or longer auditor tenure, exhibit a greater tendency to use real earnings management around SEOs. Contribution This study corroborates a series of studies documenting managers' use of multiple earnings management strategies and their preference for real over accrual-based earnings management (Zang 2006, Graham et al. 2005, Cohen et al. 2008). It is the first study to examine whether and how firms practice real earnings management around SEOs, and it provides the first empirical evidence on the deleterious effects of real earnings management; consistent with prior beliefs, real earnings management has more severe consequences than accrual-based earnings management. Method The sample consists of 1,511 SEO events over the 1987 to 2006 period. Accruals are estimated following the Hribar and Collins (2002) approach of using SFAS No. 95 statement of cash flows data. Discretionary accruals are calculated as the difference between total accruals and fitted normal accruals, following the same procedure as the previous paper. The real earnings management variables include abnormal cash flows from operations, abnormal discretionary expenses, and abnormal production costs, measured using the same procedures as the previous paper and not repeated here. A comprehensive metric for real earnings management combines all three variables: abnormal discretionary expenses and abnormal cash flows from operations are multiplied by negative one before being added to abnormal production costs (see the formula below). The authors employ a two-stage Heckman model to control for firms' self-selection into managing earnings. In the first stage, they estimate a parsimonious selection model of a firm's decision to manage earnings: an annual cross-sectional maximum likelihood model in which the dependent variable is whether the firm managed earnings that year, with explanatory variables including whether the firm beat analysts' earnings forecasts in the past four quarters (HAB_BEAT), the natural logarithm of the number of shares outstanding (SHARES), the natural logarithm of analyst coverage (ANALYST), the average percentage of total compensation that is in the form of bonus (BONUS), and option-based compensation as a proportion of total compensation (OPTION). Then, conditional on the first stage, the second stage analyzes the factors determining whether the firm chooses real earnings management over accrual-based earnings management. A firm's decision depends on its ability to use accrual-based earnings management and the cost of doing so: accruals management flexibility is proxied by net operating assets, and the cost of managing accruals is proxied by the firm's auditor characteristics, auditor tenure, and litigation risk.
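Compactly, with the three abnormal components standardized, the comprehensive metric described above is:

$$RM = (-1)\cdot AbCFO + (-1)\cdot AbDISX + AbPROD$$

so a higher RM indicates more income-increasing real activities manipulation on all three margins.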
Bernard and Thomas (1990)
Research Question This study provides a direct and thorough test of the implications of current earnings for future earnings. The authors show that earnings forecast errors from a naïve random walk model are correlated through time and that the market fails to incorporate this serial correlation. Consequently, current unexpected earnings predict the abnormal returns around the four subsequent earnings announcements. The findings are consistent with prices failing to reflect the extent to which the time-series behavior of earnings deviates from naïve expectations. The authors document a positive relation between unexpected earnings for quarter t and abnormal returns at the next three announcements, but a negative relation between unexpected earnings for quarter t and the abnormal return at the announcement for quarter t+4. Contributions This paper contributes to the literature on the post-earnings announcement drift by showing how serial correlation in unexpected earnings under a naïve random walk model produces drift at future earnings announcements. The results also provide evidence against the efficient market hypothesis. Method/Result The authors test the serial correlation in SUEs by dividing firms into SUE deciles and forming ten portfolios each calendar quarter (the SUE construction is sketched below). The mean size-adjusted abnormal return is measured around subsequent quarters' earnings announcements (days -2 to 0). Examining the associations between the SUE portfolios and the abnormal returns at each of the four subsequent quarterly earnings announcements, they find a positive (but declining) relation for each of the first three announcements and a negative relation for the fourth. Thus, the current quarter's SUE can be used to predict abnormal returns at the next four announcements. Partitioning the data by firm size shows that the patterns are present in each size group, though stronger for small firms; partitioning on the quarter of portfolio formation yields similar results. However, when the first or second subsequent announcement pertains to the first quarter of the fiscal year, the three-day abnormal returns are only about half as large as before.
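The unexpected-earnings construction underlying the SUE deciles, as commonly implemented for a seasonal random walk (the paper's exact scaling choice is not reproduced in these notes):

$$UE_{it} = E_{it} - E_{i,t-4}, \qquad SUE_{it} = \frac{UE_{it}}{\sigma_i(UE)}$$

where $E_{it}$ is quarter-$t$ earnings, so the naïve forecast is simply earnings four quarters earlier, and $\sigma_i(UE)$ standardizes each firm's forecast errors.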
Shroff, Verdi, and Yu (2014)
Research Question This study examines how the information environment affects multinational corporations' investment behavior. Because of the endogenous relationship between a firm's information environment and its investment, the authors examine the quality of the firm's external information environment rather than the firm's own financial reporting quality. They show that the sensitivity of subsidiaries' investment to growth opportunities is higher in country-industries with a more transparent information environment: for a one-standard-deviation change in growth opportunities, there is a 3% difference in investment between subsidiaries in rich versus poor information environments. In addition, the effect of the information environment on the sensitivity of investment to growth opportunities is greater when there are greater cross-border frictions between the parent and the subsidiary, and likewise when parents are relatively more involved in their subsidiaries' investment decision-making. Contributions This study contributes to the literature on information frictions within multinational corporations by showing how the external information environment can serve as a control system that reduces information frictions. Prior studies have shown how information frictions and moral hazard can result in inefficient allocation of resources; such frictions can arise from geographic dispersion, cultural and language differences, and differences in legal systems. In contrast to prior studies that focus on internal mechanisms, this study focuses on how the external information environment helps multinational corporations mitigate information frictions within the firm. Method The authors measure the sensitivity of subsidiaries' investment to local growth opportunities on the premise that higher sensitivity is desirable (lower information frictions and agency problems). They use asset growth to proxy for investment and the price-to-earnings ratio of the country-industry to proxy for growth opportunities. Following prior research, they use median analyst coverage, press coverage, and the degree of earnings transparency in the subsidiary's country-industry as proxies for the quality of the external information environment (Beyer et al. 2010). The main specification regresses investment on growth opportunities and an interaction between growth opportunities and the quality of the external information environment (sketched below). To mitigate concerns about measurement error in the growth opportunity proxy, the authors include various controls such as country fixed effects, country fixed effects × growth opportunities, parent-subsidiary country-pair fixed effects, MNC fixed effects, and controls for the parent's ownership structure, internal capital markets, and reliance on domestic banking credit.
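A stylized version of the main specification (variable names are placeholders, not the paper's notation):

$$Inv_{ict} = \beta_1\,Growth_{ct} + \beta_2\,Growth_{ct}\times InfoEnv_{ct} + \beta_3\,InfoEnv_{ct} + Controls + FE + \varepsilon_{ict}$$

where $i$ indexes subsidiaries and $c$ country-industries; the coefficient of interest is $\beta_2$, predicted positive: investment tracks growth opportunities more closely where the external information environment is more transparent.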
Roychowdhury (2006)
Research Question This study presents evidence of managers using various real earnings management methods to meet earnings benchmarks. These methods include providing price discounts to temporarily boost sales, reducing discretionary expenditures to improve margins, and overproducing to lower the cost of goods sold. The study is unique in that prior work on real earnings management focused on investment activities, whereas this study presents evidence of managers manipulating operational activities. Firms are also less likely to practice real earnings management when institutional ownership is high. Contribution This study contributes to the earnings management literature by providing evidence consistent with managers manipulating real activities to meet the zero earnings threshold. The results are consistent with studies documenting a discontinuity in earnings distributions around zero (Burgstahler and Dichev 1997), and with the literature documenting managerial manipulation through sales increases, changes in shipment schedules, and delays of R&D and capital expenditures (Healy and Wahlen 1999, Dechow and Skinner 2000). The evidence is also consistent with Graham et al. (2005), which reports managers' willingness to practice real earnings management to meet earnings targets. Method The author focuses on three manipulation methods: sales manipulation, discretionary expenditure manipulation, and production cost manipulation. Sales manipulation refers to managers' attempts to accelerate the timing of sales through price discounts or more lenient credit terms; this can temporarily increase sales volume while worsening the current period's production costs relative to sales and lowering cash flow from operations. Discretionary expenditures include items such as R&D, advertising, and SG&A; since these expenditures do not generate immediate revenues and income, managers have an incentive to reduce them to meet earnings targets. Finally, production costs can be manipulated by altering the level of production: managers can order more production than necessary to achieve better operating margins (through a lower fixed cost per unit produced) at the expense of higher total production and holding costs. The author hypothesizes that firms manipulating earnings via these three channels will exhibit unusually low cash flow from operations, unusually low discretionary expenses, and unusually high production costs after controlling for sales levels. He also hypothesizes that abnormal production costs are higher for firms in manufacturing industries, firms with debt outstanding (subject to debt covenants that make losses undesirable), firms with high market-to-book ratios (high growth opportunities), firms with high current liabilities (managers are more worried about the negative reaction of suppliers), firms with high levels of inventories and receivables as a percentage of total assets (more flexibility to undertake real activities manipulation), and firms with low institutional ownership. Normal cash flow from operations is estimated following Dechow et al. (1998) for every industry and year as a function of sales and the change in sales, where S_t is sales during period t and ΔS_t = S_t − S_{t-1}; for every firm-year, abnormal CFO is actual CFO minus the fitted value from the model. The models for normal COGS, inventory growth, production costs, and discretionary expenses are estimated analogously (reconstructions follow below); discretionary expenses are expressed as a function of lagged sales, since current sales may be managed upward to increase reported earnings.
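Reconstructions of the referenced estimation equations (the standard Roychowdhury (2006) specifications, estimated by industry-year with lagged assets $A_{t-1}$ as the deflator; abnormal values are actual minus fitted, firm-year by firm-year):

$$\frac{CFO_t}{A_{t-1}} = \alpha_0 + \alpha_1\frac{1}{A_{t-1}} + \beta_1\frac{S_t}{A_{t-1}} + \beta_2\frac{\Delta S_t}{A_{t-1}} + \varepsilon_t$$

$$\frac{PROD_t}{A_{t-1}} = \alpha_0 + \alpha_1\frac{1}{A_{t-1}} + \beta_1\frac{S_t}{A_{t-1}} + \beta_2\frac{\Delta S_t}{A_{t-1}} + \beta_3\frac{\Delta S_{t-1}}{A_{t-1}} + \varepsilon_t$$

$$\frac{DISX_t}{A_{t-1}} = \alpha_0 + \alpha_1\frac{1}{A_{t-1}} + \beta\frac{S_{t-1}}{A_{t-1}} + \varepsilon_t$$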
Ball, Gerakos, Linnainmaa, and Nikolaev (2016)
Research Question This study shows that a cash-based operating profitability measure that excludes accruals outperforms other measures of profitability and subsumes accruals in predicting the cross-section of average returns. The measure outperforms other profitability metrics, including operating profitability (with accruals), gross profitability, and net income. The authors find that a strategy using this measure can greatly increase the Sharpe ratio: a five-factor model combining market, size, value, and momentum factors with a cash-based operating profitability factor yields the highest Sharpe ratio. The measure predicts expected returns as far as ten years ahead, suggesting that the results are robust and not due to mispricing of earnings or its cash and accrual components. The authors also provide an explanation for the accrual anomaly: firms with lower accruals earn higher future returns simply because accruals are negatively correlated with the cash flow component. The accrual anomaly disappears when controlling for cash-based operating profitability, and adding accruals to firms sorted on cash-based operating profitability adds no incremental value. The authors conclude that the portion of profitability attributable to accruals has no relation to the cross-section of expected returns, and investors are better off simply using cash-based operating profitability. Contributions This study helps explain the persistence of the accrual anomaly. The finance literature has often emphasized cash flows, while the accounting literature documents the role of earnings in predicting future returns. Here the authors side with the finance camp, arguing that the anomaly is not due to mispricing of earnings or its two components but rather is driven by accruals' negative association with cash flow: when controlling for the cash-based profitability measure, accruals no longer predict the cross-section of stock returns. The study also provides insight into the profitability factor by showing that the cash-based operating profitability measure is superior to other profitability metrics. Method The authors derive cash-based operating profitability by adjusting operating profitability for changes in accounts receivable, inventory, prepaid expenses, deferred revenue, accounts payable, and accrued expenses (a sketch follows below). Operating profitability is calculated as sales minus cost of goods sold minus selling, general, and administrative expenses (excluding research and development expenditures). The authors use Fama-MacBeth regressions, estimated on monthly data from 1963 to 2014, to conduct the empirical tests. They also add operating profitability, accruals, or cash-based operating profitability to the Fama-French three-factor model to test whether accruals have any relation to the cross-section of stock returns.
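A sketch of the measure's construction, following the description above; the sign conventions on the working-capital items are an assumption (asset accruals plausibly subtracted, liability accruals added back, so that the accrual component of operating profitability is removed):

```python
def cash_based_op(sales, cogs, sgna, rd,
                  d_ar, d_inv, d_prepaid, d_defrev, d_ap, d_accrued):
    # Operating profitability: sales - COGS - SG&A, with R&D excluded from SG&A.
    op = sales - cogs - (sgna - rd)
    # Strip out the accrual component (signs on liability items are an assumption).
    return op - d_ar - d_inv - d_prepaid + d_defrev + d_ap + d_accrued

print(cash_based_op(100, 60, 20, 5, d_ar=2, d_inv=1, d_prepaid=0,
                    d_defrev=1, d_ap=2, d_accrued=0))  # -> 25.0-ish toy value
```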
Jorgensen, Lee, and Rock (2014)
Research Question This study uses a natural experiment to address three irregularities related to earnings per share. The first pertains to unusual patterns in the last digit of reported EPS: Thomas (1989) finds that the last digit of reported EPS is more likely to be zero or five than nine for profitable firms. The second refers to discontinuities around the zero threshold in the distributions of earnings and earnings changes (Durtschi and Easton 2005). The third relates to the findings of Das and Zhang (2003) that the third decimal of EPS is more likely to be rounded up than down. The natural experiment is based on the accounting regulation change SFAS 128, which produced two overlapping years in which EPS was both reported in real time and restated retroactively. The authors compare the three irregularities for reported versus restated earnings and find that the irregularities exist predominantly in the reported earnings, which suggests that earnings management did play a role in causing them. Contribution This paper contributes to a series of studies examining irregular patterns in EPS by extending these results using post-SFAS 128 EPS. By using a better research design and exploiting the overlap period in which multiple EPS measures are reported, it is better able to attribute these irregularities to earnings management. The evidence supports the analysis in Burgstahler and Dichev (1997), which finds firms more likely to manage earnings to report small positive rather than small negative earnings, and argues against Durtschi and Easton (2005), which attributes the discontinuity to sample selection criteria and the use of the deflator. Method This study employs a natural experiment to study differences in the patterns of cross-sectional distributions. Following Berger and Hann (2003), it exploits the change in mandatory EPS reporting surrounding SFAS 128, which requires firms to report Diluted EPS as opposed to Primary EPS. For the two years immediately preceding SFAS 128, firms were also required to retroactively restate their EPS according to SFAS 128; thus, firms report both the Primary EPS in real time and the Diluted EPS as "restated" EPS. The authors hypothesize and find that the real-time reported EPS exhibits more evidence consistent with earnings management across the three irregularities than restated EPS (a toy version of the last-digit check appears below). Since the exact same firm-years and underlying cash flows are used for both distributions, the results cannot be attributed to scaling or sample selection bias. The sample consists of firms with EPS data in the Compustat Xpressfeed North America database from January 1980 to December 2010. Primary EPS excluding extraordinary items is obtained for the APB 15 reporting era (January 1980 to November 1997) and Diluted EPS for the SFAS 128 period (December 1997 to December 2010). For the overlap period from December 1995 to November 1997, both Primary EPS and Diluted EPS are obtained, classified as the "as reported" and "as restated" measures, respectively. 65% of the "as reported" and "as restated" observations have the same value, and the overall distributional differences are small.
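A toy version of the Thomas (1989)-style last-digit check described above (hypothetical EPS values in cents; the actual tests compare full reported versus restated distributions):

```python
import pandas as pd

# For profitable firms: are last digits of EPS (in cents) disproportionately 0 or 5 versus 9?
eps_cents = pd.Series([100, 105, 110, 125, 99, 115, 120, 95, 130, 109])
last_digit = eps_cents % 10
freq = last_digit.value_counts(normalize=True).sort_index()
print(freq)
print("share of 0/5:", freq.reindex([0, 5]).fillna(0).sum(),
      "share of 9:", freq.get(9, 0.0))
```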
Kothari, Ramanna, and Skinner (2010)
Research Question In this review paper, the authors provide a survey and economic analysis of GAAP from a positive research perspective. They posit that the objective of GAAP is to facilitate efficient capital allocation in the economy, and as such, GAAP acts more as a system of stewardship and performance evaluation than as a vehicle for providing valuation information to investors. The stewardship and performance evaluation role of GAAP derives from the need to address agency conflicts arising from the separation of ownership and management. Income statements prepared under GAAP are intended to possess attributes such as conservatism, verifiable balance sheet assets, and reliable measurement of management performance that help keep agency conflicts to a minimum; valuation-relevant information in financial statements is a byproduct rather than the primary motivation of GAAP. The authors then discuss the implications of this economic theory of GAAP. They warn against expanding fair value accounting to areas such as intangibles, arguing that this creates too much leeway for management discretion and opportunistic behavior, though they accept the use of fair value accounting where assets can be measured by observable prices in liquid secondary markets. They argue against convergence between FASB and IASB, believing that competition between the two bodies would facilitate more efficient capital allocation. Additionally, regulations should be flexible enough to allow accountants and auditors to determine best practices.
Bradshaw, Drake, Myers, and Myers (2012)
Research Question In this study, Bradshaw et al. show that analysts' earnings per share forecasts are not always superior to time-series forecasts. Although analysts' forecasts are more accurate than time-series forecasts over shorter windows, they are less accurate at longer horizons and for smaller or younger firms, though they retain an advantage when forecasting negative or small changes in EPS. Results The authors compare the accuracy of a simple random walk forecast based on annual EPS with analysts' annual earnings forecasts and find that while mean analyst forecasts are superior one year ahead, they are less accurate two and three years ahead. Even where analysts are more accurate than random walk forecasts, the difference in accuracy is economically insignificant. Analysts do better when forecasting larger and more mature firms over relatively short horizons; at longer horizons, analysts' forecasts are superior only when they forecast negative or small changes in EPS. Since analysts' one-year-ahead forecasts are more accurate than their two- and three-year-ahead forecasts, the authors extrapolate the one-year-ahead forecast to the two following years, as in a naïve random walk extrapolation of analysts' near-term earnings forecasts, and find that this naïve extrapolation provides the most accurate estimate of long-term earnings. Method The sample is limited to firms with available analyst forecasts. Analyst and random walk forecast errors are calculated against actual earnings from IBES (rather than Compustat) to make the two sets of forecast errors comparable. Firm-year observations must include the prior year's EPS and actual earnings, and must have at least one earnings forecast and positive earnings in the base year. For each earnings announcement, the authors collect analyst forecasts in each of the previous 36 months: the one-year-ahead forecast uses the 12 months prior to the earnings announcement, and similarly months 12-23 and 24-35 for the two- and three-year-ahead forecasts. The one-year-ahead (two-year-ahead) random walk forecast is simply the one-year prior EPS (two-year prior EPS), and the three-year-ahead random walk forecast is generated from the year-two forecast adjusted by the mean analyst long-term growth forecast (a sketch follows below). Contribution This study contributes to the literature comparing analyst forecasts and time-series models. Unlike the prior literature, it suggests that analyst forecasts are not always superior and that at longer horizons they are in fact less accurate; analysts are also less skilled at predicting earnings for smaller firms.
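A minimal sketch of the random walk benchmarks described above (hypothetical inputs; this mirrors the note's description, not necessarily every detail of the paper's implementation):

```python
def rw_forecasts(eps_prior1: float, eps_prior2: float, ltg: float):
    """One-/two-year-ahead RW forecasts are the one-/two-year prior EPS;
    the three-year-ahead forecast grows the year-two forecast at the mean
    analyst long-term growth rate (ltg)."""
    f1 = eps_prior1        # one-year-ahead
    f2 = eps_prior2        # two-year-ahead
    f3 = f2 * (1 + ltg)    # three-year-ahead
    return f1, f2, f3

print(rw_forecasts(eps_prior1=2.00, eps_prior2=1.80, ltg=0.12))
```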
Watts (2003a)
Research Question This paper is part one of Watts' two-part series on conservatism in accounting. Conservatism is defined as the asymmetric verification of gains versus losses. Watts offers various explanations for conservatism in accounting and argues that they have important implications for regulators. These explanations include contracting, curbing managerial opportunism, shareholder litigation, taxation, and lowering regulators' political costs, and they are key to understanding why conservatism has been on the rise in recent years. Contracting explanations for conservatism 1. The contracting explanation for conservatism is centuries old. It encompasses both formal contracts for debt and management compensation as well as managerial accounting and control. 2. Contracting explains three attributes of accounting measures: timeliness, verifiability, and asymmetric verifiability. 1) Timeliness matters because contracting parties need timely measures of performance and net asset values to evaluate managerial performance in compensation and debt contracts; managers have limited tenure with the firm, so performance measures are useful only when timely. 2) Verifiability matters because, for contracts to be enforceable in a court of law, the earnings or cash flow measure must be verifiable. 3) Without asymmetric verifiability, managers might forgo positive net present value projects, overstate earnings and assets, and make excessive dividend payments to shareholders at the expense of debt holders. a. Asymmetric verifiability is necessary because debt holders do not receive additional compensation if the firm's net assets increase, but stand to lose if net assets fall below the level needed to cover the promised payments. b. Managers' compensation incentivizes them to bias estimates of future cash flows upward; the asymmetric verification requirement therefore protects investors. c. Asymmetric verifiability is also useful for corporate governance by providing timely signals of losses, allowing the board of directors to investigate their causes promptly. Contracting explanations and the information perspective 1. Critics of conservatism have often argued that conservatism is not actually conservative: by delaying the recognition of gains, it may depress current estimates of net assets but bias future earnings estimates upward, which can lead managers to take "big baths," recording excessive charges and write-offs now in order to overstate earnings later. 2. Watts counters these arguments by pointing out that future years' earnings are higher because they are "realized," consistent with the verifiability goal of conservatism. Other explanations for conservatism Other explanations for conservatism include litigation, income tax, and regulatory concerns: conservatism is likely to reduce the risk of litigation, lower corporate taxes, and lower regulators' political costs.
Dechow (1994)
Research Question/Main Findings The objective of this study is to examine the role of accruals in improving the ability of earnings to measure firm performance, as opposed to cash flows. Dechow finds that while cash flow is a relatively useful measure of firm performance for firms in steady state, earnings is a much better measure for firms in volatile environments with large changes in their working capital and investment/financing activities. She also finds that the usefulness of earnings relative to cash flows increases as the measurement interval decreases and as the operating cycle increases. Separating accruals into different components, she finds that working capital accruals are the most important component in mitigating the timing and matching problems in cash flows. Long-term operating accruals are influenced by the political process and are less important for mitigating the timing and matching problems. Special items are an accrual adjustment that can be used by management to manipulate earnings, and they are shown to reduce earnings' ability to reflect firm performance. Contributions This paper tries to understand whether and why earnings is a superior measure of firm performance to cash flows. Unlike prior research that focuses on whether the unexpected component of earnings or cash flows can incrementally explain abnormal stock returns, this study directly compares earnings and cash flows with each other. Method The sample consists of all firms listed on the NYSE or the ASE with available data to calculate earnings per share, cash from operations per share, and net cash flow per share. Outliers at the (1, 99) percentiles are excluded. All variables are on a per-share basis and are scaled by beginning-of-period price. Two different measures of cash flow are used- net cash flow and cash from operations. Net cash flow has no accrual adjustments at all. Cash from operations includes accruals that are long-term in nature but not accruals associated with changes in firms' working capital requirements; thus, it only partially mitigates the timing and matching problems associated with the firm's investment and financing activities. Stock returns are calculated as the buy-and-hold return minus the value-weighted market index. Whichever regression of stock returns on cash flows or earnings yields the higher R2 is interpreted as identifying the better measure of firm performance. A short measurement interval is defined as one quarter or one year.
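A minimal sketch of the R2 horse race described above, assuming a pandas DataFrame of price-deflated, per-share variables with hypothetical column names (an illustration of the design, not the paper's code):

```python
import pandas as pd
import statsmodels.api as sm

def r2_of(df: pd.DataFrame, measure: str) -> float:
    """R-squared from regressing abnormal returns on one performance measure."""
    X = sm.add_constant(df[measure])
    return sm.OLS(df["abn_return"], X).fit().rsquared

# df holds firm-period observations: market-adjusted buy-and-hold returns and
# price-deflated earnings, cash from operations, and net cash flow per share.
df = pd.read_csv("firm_periods.csv")  # hypothetical input file

# The measure whose regression attains the higher R-squared is interpreted
# as the better summary of firm performance over that measurement interval.
for m in ["earnings", "cash_from_operations", "net_cash_flow"]:
    print(m, round(r2_of(df, m), 3))
```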
Chan, Frankel, and Kothari (2004)
Research Question/Main Findings The objective of this study is to test whether the psychological biases of representativeness and conservatism influence future price performance, as documented in the behavioral finance literature. The authors do not find strong evidence supporting theories based on representativeness and find only weak evidence for conservatism. The authors use trends and consistency of financial performance to test the pricing implications of the two psychological biases. If investors are susceptible to representativeness bias, then consistency in firm performance over a period should incrementally influence future price performance. However, the authors do not find firms with more consistent prior performance patterns to be more mispriced or to incur greater return reversals. The authors also predict that, following a signal that contradicts past trends, firms with consistent extreme prior performance should experience greater return reversals than firms with inconsistent extreme performance. However, they fail to find return predictability based on trends and consistency in past financial growth rates. They do find some evidence that investors underreact to a one-year trend in accounting performance, which they interpret as evidence that conservatism bias influences security returns. However, a trading strategy based on this anomaly does not generate incremental abnormal returns. Contributions By failing to observe return predictability, this study provides evidence against the behavioral finance theories that predict mispricing based on investors' representativeness or conservatism bias, though the authors acknowledge that this could be due in part to abundant arbitrage that quickly eliminates mispricing arising from investors' information-processing biases. Method The authors measure financial growth rates over 1-year and 5-year periods. The variables of interest are sales, net income, and operating income, calculated at quarterly and annual levels. Sales growth rates are computed as the sum of sales for the quarters in a given year minus the sum of the preceding four quarters, deflated by the sum of the preceding four quarters. Growth in net income is computed the same way, except that the deflator is assets per share. An alternative measure of growth using seasonally differenced changes in quarterly performance yields similar results. Firms are classified based on the trend and consistency of growth they experience over the previous year or five years. Each quarter, they are ranked on each financial performance measure and divided into quintiles. A trading strategy of buying high-growth firms (top quintile) and shorting low-growth firms is employed. The authors also perform a two-way consistency-growth quintile sort, as sketched below. Firms are first sorted into financial performance quintiles. Then, within each extreme financial performance quintile, they are further ranked by consistency of performance in the sub-periods. Top-growth-quintile firms with above-median growth in all quarters within the period are classified as consistent, and those with two or fewer quarters of above-median growth are classified as inconsistent.
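The two-way sort referenced above could be implemented roughly as follows, assuming a one-year (four-quarter) window and hypothetical column names; the paper's exact classification rules may differ:

```python
import pandas as pd

# Hypothetical firm-quarter panel: an annual growth rate ("growth") and the
# number of sub-period quarters with above-median growth ("n_above_median").
df = pd.read_csv("firm_quarters.csv")

# Rank firms into growth quintiles each calendar quarter (0 = low, 4 = high).
df["growth_q"] = df.groupby("quarter")["growth"].transform(
    lambda s: pd.qcut(s, 5, labels=False)
)

# Within the extreme quintiles, classify consistency over a 4-quarter window:
# above-median growth in all 4 quarters -> consistent; 2 or fewer -> inconsistent.
extreme = df[df["growth_q"].isin([0, 4])].copy()
extreme["consistency"] = pd.cut(
    extreme["n_above_median"],
    bins=[-1, 2, 3, 4],
    labels=["inconsistent", "middle", "consistent"],
)

# Hedge-portfolio tag: long the top growth quintile, short the bottom.
df["position"] = df["growth_q"].map({4: "long", 0: "short"})
```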
Fairfield, Whisenant, and Yohn (2003)
Research Question/Results Accruals are both a component of growth in net operating assets and a component of profitability. The main research question of this paper is therefore whether the accrual anomaly documented in Sloan (1996) is attributable to accruals' role as a component of profitability or to their role as a component of growth in net operating assets. The authors posit that the accrual anomaly is the manifestation of a more general negative incremental relation between one-year-ahead ROA and growth in net operating assets. Specifically, they hypothesize that the negative incremental effect of growth in net operating assets on one-year-ahead ROA produces a negative correlation between growth in net operating assets and one-year-ahead ROA, conditional on current ROA. After disaggregating growth in net operating assets into accruals and growth in long-term net operating assets, they find that both components have negative associations with one-year-ahead ROA after controlling for current ROA. Similar to investors overvaluing accruals, the evidence suggests that investors also overvalue growth in long-term net operating assets. The authors find no statistically significant difference between the negative relations of the two components of growth in net operating assets with one-year-ahead ROA. Contributions This study contributes to the literature on the accrual anomaly. Complementing Sloan (1996)'s finding that the accrual component of current profitability is less persistent than the cash flow component, this study shows that the differential persistence and valuation of accruals can be explained by the role of accruals as a component of growth in net operating assets. The study also contributes to the literature on accounting conservatism by attributing the lower persistence of accruals relative to cash flows to conservative bias in accounting principles or to the lower rate of economic profits from diminishing marginal returns to new investment opportunities. Method This study uses a research design similar to Sloan (1996), covering all firms with financial data for the 30-year period from 1964 to 1993. The main specification, sketched below, regresses one-year-ahead ROA on growth in net operating assets, controlling for current ROA. Consistent with Sloan (1996), ROA is calculated as operating income divided by total assets. Accruals are calculated as the net change in operating working capital minus current-period depreciation and amortization expense. Cash flow is calculated by subtracting accruals from operating income. Growth in net operating assets is defined as the annual change in net operating assets. Growth in long-term net operating assets is calculated by subtracting accruals from growth in net operating assets. All of these variables are deflated by total assets.
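In notation adapted from the description above (variable names are mine), the main specification is:

```latex
% One-year-ahead profitability on current profitability and the two
% components of growth in net operating assets, all deflated by total assets.
ROA_{t+1} = \alpha + \beta_1\, ROA_t + \beta_2\, ACC_t + \beta_3\, \Delta LTNOA_t + \varepsilon_{t+1}
```

where ACC_t is accruals and ΔLTNOA_t is growth in long-term net operating assets; the finding is that β2 and β3 are both negative and not significantly different from each other.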
Dechow, Sloan, and Sweeney (1995)
Research Question/Results This study assesses the relative performance of various models in detecting earnings management. The authors find that while all the models appear to be well specified when applied to a random sample of firm-years, the power of the tests is low for earnings management of low economic magnitudes. All of the models also appear to be misspecified when applied to samples of firm-years experiencing extreme financial performance (rejection rates for the null hypothesis that earnings are not managed exceed the specified test levels), which highlights the importance of controlling for financial performance. The misspecification is less severe for the Jones and modified Jones models than for the Healy model. Of the five models tested, the modified Jones model is shown to have the most power in detecting earnings management. Method The tests employ a regression framework (sketched below) in which DA, the discretionary accruals, can be modeled by the Healy model, the DeAngelo model, the Jones model, the modified Jones model, or the Industry model. PART is a dummy variable partitioning the data into two groups based on whether earnings management is hypothesized by the researcher, and the Xs are other relevant variables that influence discretionary accruals. This framework is subject to omitted variable bias and measurement error, which lead to misspecification and Type I and Type II errors. The authors evaluate the performance of the various models by examining the frequency with which they generate Type I and Type II errors. A Type I error entails rejecting the null hypothesis of no earnings management when the null is true; its frequency is estimated for a random sample of firm-years and for samples of firm-years with extreme financial performance. A Type II error arises when the null hypothesis of no earnings management is not rejected when it is false; its frequency is estimated for samples of firm-years to which a fixed, known amount of accrual manipulation has been artificially added. To achieve external validity, Type II error rates are also estimated for a sample of firms targeted by the SEC for earnings management. Four samples are used in the analysis:
1) Sample 1 consists of 1,000 randomly selected firm-years and is designed to test situations in which the measurement error in discretionary accruals is uncorrelated with PART.
2) Sample 2 consists of 1,000 randomly selected firm-years from a pool of firms with extreme earnings performance or extreme cash from operations performance.
3) Sample 3 consists of 1,000 randomly selected firm-years as in Sample 1, but with accrual manipulation artificially added.
4) Sample 4 consists of 56 firm-years targeted by the SEC for overstating revenues or understating expenses.
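The partitioning regression is not reproduced in these notes; a standard rendering, with notation taken from the definitions above, is:

```latex
% Discretionary accruals on the partitioning variable and other determinants;
% a test for earnings management is a test on the sign of beta.
DA_{it} = \alpha + \beta\, PART_{it} + \sum_{k} \gamma_k\, X_{k,it} + \epsilon_{it}
```

The five models differ only in how DA is estimated, not in this testing framework.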
Bennedsen et al. (2006)
Research Questions Are in-family CEOs better than professional CEOs? Do family CEO successions result in larger declines in firm performance than unrelated-manager successions? Recent studies have shown a large decline in firm performance around family CEO appointments relative to firms managed by unrelated CEOs. These studies rely on purely cross-sectional variation in family-CEO status or on changes in family-CEO status around management turnover, both of which are unlikely to be random. Thus, it is difficult to ascertain whether family CEOs do indeed hurt firm performance. The objective of this paper is to isolate the causal effect of family CEOs on firm performance using an instrumental variables (IV) approach. Method The data for this study come from three different sources. The KOB dataset contains accounting and management information for all limited liability firms in Denmark as well as the names and positions of executives and board members. From the Danish Civil Registration System, the authors were able to obtain the personal records of departing and incoming CEOs, including name, gender, dates of birth and death, and marital history, as well as the CPR numbers of parents, siblings, and children. From these datasets, they identify CEO transitions and classify each transition as a family-CEO or unrelated-CEO succession depending on whether the incoming and departing CEOs are related by blood or marriage. The naïve approach would be to run a simple OLS regression around CEO successions (sketched below), where yi captures the change in performance of firms surrounding CEO successions and FamCEO indicates whether the new CEO is picked from the controlling family (0 if the new CEO is unrelated to the family). There are many problems with this regression; perhaps the most glaring are omitted variables and reverse causality. One needs succession decisions to be uncorrelated with the determinants of firm performance, which is unlikely, and the timing of the succession is endogenous. The authors use an IV approach to resolve this problem. An IV approach can isolate the portion of the variation in the endogenous variable (the CEO transition choice) that is truly exogenous to the dependent variable (performance). The authors use a novel instrument: the gender of the first-born child of the departing CEO. This is shown to be highly correlated with the choice of CEO transition but plausibly orthogonal to firm prospects; its only effect on performance comes through the variation in the CEO succession choice. The first stage of the IV estimates the probability of an in-family succession as a function of the departing CEO's first-born child's gender; firms with male first-borns are 9.6 percentage points more likely to have a family CEO replacement. The second stage estimates the effect of first-born-gender-induced assignment of treatment on firm outcomes. They find first-born-male-induced treatment assignment to be negatively associated with firm performance.
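The elided regressions can be sketched as follows (notation adapted from the summary; subscript i indexes succession events):

```latex
% Naive OLS of the performance change around succession on family-CEO status:
\Delta y_i = \beta_0 + \beta_1\, FamCEO_i + \varepsilon_i

% IV/2SLS using the departing CEO's first-born child's gender:
FamCEO_i = \pi_0 + \pi_1\, MaleFirstChild_i + \nu_i \quad \text{(first stage)}
\Delta y_i = \beta_0 + \beta_1\, \widehat{FamCEO_i} + u_i \quad \text{(second stage)}
```

Identification requires that first-born gender affects performance only through the succession choice (the exclusion restriction).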
Armstrong, Jagolinzer, and Larcker (2010)
Research Questions Do CEOs' equity-based holdings and compensation provide incentives to manipulate accounting reports? Contribution This study contributes to the literature on equity incentives and accounting irregularities. Prior studies generally find a positive link between CEO equity incentives and accounting irregularities, though the evidence is mixed. In contrast, this study finds no evidence of a positive link and, in fact, finds that equity incentives may reduce accounting irregularities by aligning the incentives of CEOs with those of shareholders. Prior studies find mixed evidence of a relationship even though they share similar proxies and samples. Those studies use research designs that rely heavily on assumptions about the functional form of the relationship between accounting irregularities and equity incentives; the most common method is matching on size and industry classification and controlling for other confounding variables in the regression. This study advocates propensity-score matching, which is more robust to misspecification of the functional form. Method While prior studies have used ExecuComp data, which are limited to a small set of large firms and do not provide sufficient statistical power, the data for this study come from Equilar, which provides a larger set of equity-holding and compensation data than ExecuComp. For the measure of accounting irregularities, the authors consider three different types. The first is financial restatements related to accounting manipulation; these data are obtained from Glass Lewis & Co., which collects restatement information from SEC filings, press releases, and other public data. The second is whether the firm is involved in a class-action lawsuit; the database for this is provided by Woodruff-Sawyer and Co. The final accounting irregularity is whether the firm was accused of accounting manipulation in an AAER; these can be identified from the comprehensive AAER listing on the SEC Web site and include earnings-estimate improprieties, financial misrepresentation, and failure to adhere to GAAP. To measure equity incentives, the authors compute portfolio delta, the risk-neutral dollar change in the CEO's equity portfolio for every percentage change in the firm's stock price. The authors first partition the treatment variable, equity incentives, into quintiles. Then, they conduct propensity-score matching, as sketched below. The first step is to estimate a logistic propensity-score model of executive incentives on observable features of the contracting environment; they use 19 different firm- and CEO-level variables in this logistic model. Then, they form matched pairs by finding treatment-control pairs with the smallest difference in propensity score but the largest difference in CEO equity incentives. Next, they check that the covariates are balanced between the treatment and control samples. Finally, they examine whether the incidence of accounting irregularities differs between the matched treatment and control groups.
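A stripped-down sketch of the matching procedure described above, with a handful of stand-in covariates in place of the paper's 19 (column names and the input file are hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

# df: firm-year panel with a high-equity-incentive indicator ("treated"),
# observable contracting-environment covariates, and an irregularity flag.
df = pd.read_csv("firm_years.csv")
covariates = ["size", "mtb", "leverage", "ceo_tenure"]  # stand-ins for the 19 used

# Step 1: logistic propensity-score model of treatment on observables.
logit = sm.Logit(df["treated"], sm.add_constant(df[covariates])).fit(disp=0)
df["pscore"] = logit.predict(sm.add_constant(df[covariates]))

# Step 2: greedy nearest-neighbor match of each treated firm to the control
# firm with the closest propensity score (without replacement).
treated = df[df["treated"] == 1]
controls = df[df["treated"] == 0].copy()
pairs = []
for _, row in treated.iterrows():
    if controls.empty:
        break
    j = (controls["pscore"] - row["pscore"]).abs().idxmin()
    pairs.append((row.name, j))
    controls = controls.drop(j)

# Step 3: compare irregularity rates across the matched samples.
t_idx, c_idx = zip(*pairs)
print(df.loc[list(t_idx), "irregularity"].mean(),
      df.loc[list(c_idx), "irregularity"].mean())
```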
Fama (1990)
Research Questions The fundamental question of this paper is whether variation in stock returns can be explained. The author regresses stock returns on a combination of time-varying expected returns, shocks to expected returns, and forecasts of real activity, and finds that these variables explain a large fraction of the variation in stock returns. The explanatory power increases with the length of the holding period because information about real activity is reflected in returns over many adjacent months. Fama controls for variation in expected stock returns using the dividend yield, default spreads, and term spreads. Results Fama shows that stock returns (monthly, quarterly, and annual) are highly correlated with future production growth rates. The strong correlation remains when variables proxying for time-varying expected returns and shocks to expected returns are included in the regression. Contributions This study contributes to the literature on the determinants of stock return variation. While prior literature has studied the three sources of return variation separately, this study is the first to examine their combined explanatory power. It also addresses the puzzle in Fama (1981) and Kaul (1987) that real activity explains more return variation at longer return horizons: since information arriving in a given month evolves over many previous months, production in a given month affects the stock returns of many adjacent months, so more information is captured when longer-horizon returns are regressed on future production growth rates. Method The sample period covers 1953 to 1987. Returns are nominal returns adjusted for inflation, and continuously compounded real returns are measured over months, quarters, and years. Following Fama and French (1989), the author uses three variables to forecast returns- D(t)/V(t), DEF(t), and TERM(t). D(t)/V(t) is the dividend yield on the value-weighted NYSE portfolio (calculated as the sum of monthly dividends on the portfolio for the year preceding t, divided by the value of the portfolio at t). DEF(t) is the default spread, defined as the difference between the time-t yield on a portfolio of 100 corporate bonds and the time-t yield on a portfolio of Aaa-rated bonds. TERM(t) is the term spread, defined as the time-t difference between the yield on the Aaa corporate bond portfolio and the one-month Treasury bill rate.
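An illustrative form of the return regressions described above (my notation; the paper estimates several variants over different horizons):

```latex
% Returns on leads of production growth plus expected-return controls.
R_t = \alpha + \sum_{k=1}^{K} \beta_k\, PG_{t+k}
      + \gamma_1\, D_t/V_t + \gamma_2\, DEF_t + \gamma_3\, TERM_t + \varepsilon_t
```

where PG_{t+k} denotes future growth rates of industrial production.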
Collins, Kothari, Shanken, and Sloan (1994)
Research Questions This study empirically tests different explanations for the weak contemporaneous return-earnings association. Earnings, being subject to accounting rules, suffer from a lack of timeliness and from value-irrelevant noise. The lack of timeliness originates from the delay in accounting recognition of events occurring in the current period that do not yet meet the conditions for recognition, while the noise embedded in earnings is due to divergence between accounting estimates of the present value of expected future cash flows and the market's estimates. If the poor contemporaneous return-earnings association is due primarily to earnings' lack of timeliness, the authors expect to find a non-contemporaneous return-earnings association: future earnings and current returns should be strongly associated under the lack-of-timeliness hypothesis. To test their hypotheses, they further aggregate the earnings-return data at three levels- economy, industry, and firm. By aggregating cross-sectionally, they propose that the contemporaneous association between earnings and returns should strengthen under the noise-in-earnings hypothesis but not under the lack-of-timeliness hypothesis. Method The authors model the return-earnings relation as sketched below, where Rt is the continuously compounded return for period t, Xt is the continuously compounded growth rate of earnings, UXt is the unanticipated earnings growth rate, and ∆Et is the revision in market expectations from the beginning to the end of period t. T is limited to three years since returns do not significantly anticipate earnings changes more than three years in advance. Revisions in market expectations are backed out from the ex post growth rates. This model differs from previous models in its choice of earnings expectation model and deflator (lagged earnings, with results robust to alternative deflators). Prior models simply regressed returns on realized earnings growth rates, which is subject to an errors-in-variables problem that biases the regression's explanatory power downward: the anticipated component of earnings has an attenuating effect on the contemporaneous return-earnings association. To minimize the measurement error, the authors propose several proxies for it- the earnings-to-price ratio, growth in investment, and future returns. Results Overall, the authors do not find support for the noise-in-earnings hypothesis; the lack of timeliness appears to be driving the poor contemporaneous return-earnings association. The unanticipated component of current earnings and revisions in the market's expectations of future earnings explain 3-6 times as much of the annual return variation, through time and in the cross-section, as current realized earnings. The authors find an improvement in the explanatory power of the return-earnings model as future earnings and measurement-error proxies are added.
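With notation taken from the definitions above, the estimated relation can be written as:

```latex
% Returns on unanticipated current earnings growth and revisions in
% expectations of future earnings growth, with T at most three years.
R_t = \alpha + \beta_0\, UX_t + \sum_{\tau=1}^{T} \beta_\tau\, \Delta E_t\!\left(X_{t+\tau}\right) + \varepsilon_t
```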
Biddle, Hilary, and Verdi (2009)
Research Questions This study examines the relation between financial reporting quality and investment efficiency. Various theories and empirical evidence suggest that higher reporting quality can improve investment efficiency by mitigating information asymmetries and moral hazard and by enhancing shareholders' ability to monitor managers' investment activities. If firms with higher financial reporting quality make more efficient investments, the question arises whether they do so by reducing overinvestment or underinvestment. That is the focus of this paper. Results The authors find that higher reporting quality is associated with both lower overinvestment and lower underinvestment. They also find the relation between financial reporting quality and investment to be conditional on whether the firm is more likely to overinvest or underinvest. For firms that are more likely to overinvest, they find a negative relationship between reporting quality and investment; conversely, for firms more likely to underinvest, they find a positive relationship. Results are strongest using the Dechow and Dichev (2002) measure of accruals quality. Furthermore, the authors find that firms with higher financial reporting quality are less likely to deviate from their predicted investment levels. Reporting quality is negatively related to investment when aggregate investment is high and positively related when aggregate investment is low, which suggests that higher financial reporting quality makes firms' investment less sensitive to macroeconomic shocks. To make sure the results are not driven by correlation between financial reporting quality and corporate governance mechanisms, the authors test whether alternative monitoring mechanisms (institutional ownership, analyst coverage, and the G-index of anti-takeover provisions) are associated with investment efficiency; adding these variables as controls does not affect the association between financial reporting quality and investment. Looking at the subcomponents of investment, they find that R&D and acquisitions show the strongest effects, while the results for capital expenditures are insignificant using the Wysocki (2008) measure of accruals quality. Method Financial reporting quality is defined as the precision with which financial reports convey information about the firm's operations. The authors use several proxies for financial reporting quality: the Dechow and Dichev (2002) measure of accruals quality, the Wysocki (2008) measure of accruals quality, and, to assess the forward-looking aspect of financial reporting quality, a measure of readability known as the FOG index (Li 2008). The authors test for the relation between financial reporting quality and the level of capital investment conditional on whether the firm is more likely to overinvest or underinvest. The model specification is sketched below. Investment includes both capital and non-capital investment, FRQ is the proxy for financial reporting quality, OverI is a variable increasing in the likelihood of overinvestment, and Gov is the set of corporate governance control variables. β1 measures the association between reporting quality and investment for firms likely to underinvest; β1+β2 measures the association between reporting quality and investment for firms likely to overinvest.
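With notation adapted from the description above, the conditional specification can be sketched as:

```latex
% Investment on reporting quality, its interaction with the likelihood of
% overinvestment, and governance controls (firm and year subscripts suppressed).
Investment_{t+1} = \alpha + \beta_1\, FRQ_t + \beta_2\, (FRQ_t \times OverI_t)
                   + \beta_3\, OverI_t + \gamma' Gov_t + \varepsilon_{t+1}
```

so that β1 is the reporting-quality slope for likely underinvestors and β1 + β2 the slope for likely overinvestors.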
Dechow, Kothari, and Watts (1998)
Research Questions In this paper, the authors develop a model that provides intuitive insights into the relations between operating cash flows, accruals, and earnings. The model explains how accruals can undo the negative serial correlation in cash flow changes to produce serially uncorrelated earnings changes. It also shows that current earnings is a better forecast of future operating cash flows than current operating cash flow, and that the forecasting abilities of accruals and earnings are increasing in the operating cash cycle. Furthermore, the authors show a negative contemporaneous correlation between accruals and operating cash flows as long as the profit margin is less than twice the operating cash cycle; similarly, the contemporaneous correlation between earnings and operating cash flows should be negative as long as the profit margin is less than the operating cash cycle. The authors then empirically test the model using a sample of 1,337 firms. Consistent with their theory, they find that current earnings provide a better forecast of future cash flows than current cash flows, and that the abilities of earnings and operating cash flows to forecast future cash flows are a positive function of the firm's expected operating cash cycle. The actual and predicted correlations are quite close for the most part; overall, the evidence suggests the model has some explanatory power. Method/Results The sample consists of Compustat firms that have at least ten annual observations of earnings, accruals, operating cash flow, and sales from 1963 to 1992. Outliers at the (1, 99) percentiles are excluded. Cash flow from operations is calculated as operating income before depreciation minus interest, taxes, and the change in non-cash working capital. Operating accruals are calculated as earnings before extraordinary items minus cash flow from operations; this includes accruals not incorporated in the model, such as depreciation accruals. To test the predictive ability of earnings and operating cash flows for future operating cash flows, the authors calculate the mean standard deviation of forecast errors using current earnings and current operating cash flows as forecasts of future cash flows. The mean standard deviations of one-year-ahead (as well as two- and three-year-ahead) forecast errors are much lower using earnings than using operating cash flows. Partitioning the data by the firms' operating cash cycle, the authors find that the ability of current earnings and current cash flows to predict future cash flows is a positive function of the firm's expected operating cash cycle. Likewise, using firm-specific multiple regressions of one- to three-year-ahead cash flows on current cash flows and earnings, the authors show that earnings are useful for predicting future cash flows at all horizons while cash flows are only slightly useful at the one-year-ahead horizon. To compare the actual and predicted correlations between cash flow changes, accrual changes, and earnings changes for an average firm, the authors calculate the predicted numerical values of various correlations using estimated values of the model parameters. The parameter values are based on each firm's average investments in receivables, inventories, and payables as a fraction of annual sales, and on its net profit margin. The predicted and actual serial and cross-correlations are quite close in all cases except the cross-correlation between earnings changes and cash flow changes.
For the contemporaneous correlation between earnings changes and cash flow changes, the predicted correlation is 0.4 versus an actual correlation of 0.15; similarly, for the correlation between current earnings changes and future cash flow changes, the predicted correlation is 0.61 versus an actual correlation of -0.03. Despite this shortcoming, the cross-sectional correlation between the predicted and actual correlations at the firm and portfolio levels is significantly positive, which suggests that the model has some explanatory power. Contributions The main contribution of this paper is that it models the relations between earnings, accruals, and cash flows to show how accruals can offset the negative serial correlation in operating cash flow changes and produce earnings changes that are less serially correlated. The other contribution is that it shows earnings to be a better predictor of future operating cash flows than current operating cash flows.
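My reading of the model's building blocks, stated compactly (parameter names are illustrative and this is a simplification of the paper's full setup): sales follow a random walk, earnings are a constant margin on sales, and operating working capital is proportional to sales, so that

```latex
S_t = S_{t-1} + \varepsilon_t, \quad E_t = \pi S_t, \quad WC_t = \delta S_t,
\quad CF_t = E_t - \Delta WC_t = \pi S_t - \delta\,\varepsilon_t
```

Under this setup earnings changes (π ε_t) are serially uncorrelated, while cash flow changes are serially correlated through the working-capital accrual (negatively so when δ exceeds π), which is the mechanism the summary describes.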
Kothari and Shanken (1992)
Research Questions/Results The goal of the paper is to test whether stock returns can be explained by revisions in expectations of future dividends. The authors find that a simple model (after accounting for measurement error) can explain up to 72% of annual return variation. To deal with the problem of measurement error, the authors separate realized dividend growth into expected and unexpected components and include proxies correlated with the measurement error in the regression. Including dividend yield and investment growth to proxy for ex ante anticipation of dividend growth raises the R2 from 51% to 58%; including future returns as a proxy for unanticipated future dividend growth raises the R2 from 51% to 57%; and including both the ex ante proxies and future returns improves the explanatory power of the model by over 40%. Including the growth rates of industrial production does not add much explanatory power. Examining the results over subperiods, the authors find that revisions in expectations about future dividends are the more important driver of returns in the pre-1954 period, while contemporaneous dividend growth (GD) explains more of the return variation in the post-1953 period. Dividend yield, investment growth, and the future return variables act not only as proxies for measurement error in the post-1953 period but also capture variation in expected returns as well as shocks to expected returns. As a response to Fama (1990), this study also examines whether the contemporaneous and future growth rates of industrial production explain more of the variation in annual stock returns than revisions in expectations about future dividends. The authors use the growth rate of the Federal Reserve Board's seasonally adjusted industrial production index as a proxy for industrial production growth and find that the coefficients on the dividend variables do not change much with the addition of the industrial production variables. A cross-sectional analysis using portfolios formed on annual rankings of firm performance finds that nearly 90% of portfolio return variation can be explained by dividend and expected-return variables; the extreme performance portfolios are accompanied by substantial dividend changes in the same direction during the return year and the following three years. Contributions This study contributes to the literature on market rationality by examining the extent to which stock return variation can be explained by changes in expected rates of return. Prior literature had looked at the ability of expected return changes to predict returns, in addition to dividends, industrial production, investment, GNP, and earnings. This study in part contradicts the findings of Fama (1990), which argues that the contemporaneous and future growth rates of industrial production explain variation in annual stock returns; instead, dividends are shown to provide a richer information set regarding returns. Method Theoretically, Rt = E(Rt) + U(GDt) + Σj ρj ∆E(FGDt+j), where U(GDt) is the unanticipated component of dividend growth realized at time t and ∆E(FGDt+j) is the change in expectations about future dividend growth in year t+j based on information that arrives during year t. Empirically, one could estimate Rt = a + a0 U(GDt) + Σj=1..3 aj ∆E(FGDt+j) + et.
This deviates from the theoretical case because changes in expected long-term dividends are omitted; these could be negatively correlated with the short-term dividend changes, which biases the coefficients. In practice, the authors must regress returns on realized contemporaneous and future dividend growth rates, Rt = a + a0 GDt + Σj=1..3 aj FGDt+j + et, which is subject to the same errors-in-variables problem as in Collins, Kothari, Shanken, and Sloan (1994). Ideally, the independent variables should reflect only unexpected information that arrived during the return year, but in reality, regressions of returns on ex post growth rates contain information that was already anticipated. Thus, the authors purge the component of realized growth that was already expected at t-1 and the component of future growth rates that is unanticipated relative to expectations at the end of the return year. To do so, they use the same method as in Collins, Kothari, Shanken, and Sloan (1994), including proxies that are correlated with the measurement errors but not with the independent variables; the remaining bias is smallest when the correlation between the proxies and the measurement errors approaches one. The proxies used for the expected component of dividend growth at t-1 are the dividend yield and the growth rate of private nonresidential domestic investment (GI); to proxy for the unanticipated component of the future years' dividend growth, they include the returns for the three years following the return year. Dividend growth is defined as the natural log of the ratio of dividends in the current year to those of the previous year.
Gilliam and Paterson (2015)
Research question The study seeks to explain the zero-earnings discontinuity- there are fewer observations just below zero earnings and more just above. Prior studies have debated the role of earnings management versus scaling and sample-selection choices. The authors document the disappearance of the zero-earnings discontinuity since the implementation of SOX in 2002, which is consistent with a decline in loss avoidance and earnings management. The evidence suggests that earnings management drives the zero-earnings discontinuity and contradicts prior findings attributing the phenomenon to research design choices such as scaling, sample selection, or special items. Method The sample consists of all firms on Compustat from 1976 to 2012 with available data for net income and market value of equity, excluding observations with exactly zero earnings. Splitting the sample into two sub-periods, pre- and post-2002, the authors examine whether SOX changed the zero-earnings discontinuity (a rough version of such a test is sketched below). The distribution for the pre-2002 sample clearly shows a discontinuity around zero earnings, while the distribution for the post-2002 sample shows a smooth bell-shaped curve. After constructing distributions of both unscaled and scaled earnings, the authors continue to find that the discontinuity has disappeared. Sample-selection issues and special items/taxes are also found to have no effect on the zero-earnings discontinuity. Partitioning firms into those with a loss in the prior year, a profit in the prior year, and profits in the most recent years reveals a strong zero-earnings discontinuity in the pre-2002 years but little evidence of loss avoidance in the post-2002 years; the only weak evidence of post-2002 loss avoidance is among firms with three or more years of positive earnings. Contribution The main contribution of this study is in resolving the debate over the cause of the zero-earnings discontinuity. Some studies have attributed the discontinuity to earnings management, while others argue that it is caused by scaling, sample selection, or special items. This study uses SOX as an exogenous shock to show that changes in earnings management caused the disappearance of the zero-earnings discontinuity. The study also provides indirect evidence of a decline in earnings management after SOX: complementing prior studies arguing that the post-SOX decline in accrual earnings management was accompanied by an increase in real earnings management (Bartov and Cohen 2009; Cohen et al. 2008), this study suggests that the increase in real earnings management might be insufficient to offset the decline in accrual-based earnings management.
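A rough illustration of a discontinuity test in this spirit (not the authors' exact procedure): compare the observed count in the histogram bin just left of zero with the average of the two adjacent bins, assuming hypothetical column names:

```python
import numpy as np
import pandas as pd

# Hypothetical input: firm-years with net income scaled by market value of
# equity ("ni_mve") and a fiscal year column ("fyear").
df = pd.read_csv("firm_years.csv")

def discontinuity_stat(x: pd.Series) -> float:
    """Standardized difference between the observed count in the interval
    just below zero and the average of the two neighboring intervals."""
    bins = np.linspace(-0.10, 0.10, 41)      # 40 bins, symmetric around zero
    counts, _ = np.histogram(x.dropna(), bins=bins)
    i = len(counts) // 2 - 1                 # the bin immediately below zero
    expected = (counts[i - 1] + counts[i + 1]) / 2.0
    return (counts[i] - expected) / np.sqrt(max(expected, 1.0))

# A markedly negative statistic pre-2002 (too few small losses) and a
# near-zero statistic post-2002 would mirror the pattern described above.
for is_post, sub in df.groupby(df["fyear"] >= 2002):
    print("post-2002" if is_post else "pre-2002",
          round(discontinuity_stat(sub["ni_mve"]), 2))
```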
Brown, Call, Clement and Sharp (2015)
Responding to Bradshaw's call for more insights into the black box of analyst research, this paper attempts to penetrate that black box using surveys and interviews to understand the inputs analysts use and the incentives they face. The authors examine the inputs to analysts' earnings forecasts and stock recommendations, the frequency, nature, and usefulness of analysts' communication with management, the valuation models they employ, the value of their industry knowledge, the determinants of their compensation, the benefits of being an all-star analyst, and the factors they consider indicative of high-quality earnings. Overall, they document that analysts' private communication with management is one of the most important inputs into their research. In addition, analysts' compensation is a function of their industry knowledge and is influenced by their success in generating underwriting business or trading commissions. Furthermore, analysts deem earnings to be of high quality when they are accompanied by strong operating cash flows and when they are sustainable, consistent, repeatable, and reflective of economic reality; analysts are not as concerned about earnings management and usually do not focus on detecting fraud. Being an all-star analyst is not as important to career advancement as broker votes. The authors also shed light on the incentives analysts face in generating accurate earnings forecasts and profitable stock recommendations. They find that analysts are driven not just by the need to please management or generate underwriting business, but also by the need to satisfy their investing clients; by issuing earnings forecasts and stock recommendations below consensus, analysts can improve their credibility with those clients. Contributions This study contributes to the literature on the incentives and inputs that drive analysts' earnings forecasts and stock recommendations. Unlike prior research, which could only examine associations between analyst inputs and outputs, this study penetrates the black box of analyst research to see how analysts actually make their decisions. The authors find that various analyst and brokerage characteristics influence analysts' inputs and incentives. Consistent with Soltes (2014), they document that private contact between analysts and management provides useful input to analysts' earnings forecasts and stock recommendations, even more so than analysts' own research, recent earnings performance, and recent financial reports. The authors also provide insights into the method of communication with management: the most common method is private phone calls, in which analysts discuss matters that they shy away from during public conference calls. These private calls help analysts check the assumptions of their models and gain access to other confidential information. The study also validates the finding in Brown (2014) that brokerages value the knowledge provided by analysts; industry knowledge is deemed the most important determinant of an analyst's compensation. Method The survey subjects are sell-side analysts who published a report in Investext during the 12-month period from October 2011 to September 2012. The survey questions are based on feedback from academics on some of the pressing issues facing the field of analyst research. Two versions of the survey are administered, each with 14 questions, on either earnings forecasts or stock recommendations. Follow-up phone interviews are conducted with a small number of analysts.
Healy and Palepu (2001)
Role of disclosure in capital markets The authors explain the role of disclosure in capital markets. Disclosure is necessary to facilitate the efficient allocation of resources in a capital market when there are information asymmetries and agency problems between the firm and providers of capital. Without disclosures (and financial/information intermediaries), the lemons problem will plague the market, eventually leading to market breakdown. The agency problem arises because managers have an incentive to expropriate savers' funds; this can similarly be mitigated by disclosures, contracts, corporate governance, and information intermediaries. Research on the determinants of firms' financial reporting has mostly focused on examining cross-sectional variation in contracting variables. Regulation of disclosure There has been considerable debate on the economic rationale for regulating disclosures and on the effectiveness of existing regulations in mitigating the information and agency problems in capital markets. One potential rationale for regulation is market imperfections and externalities (prospective investors free ride on existing shareholders, leading to underproduction of information). Another is the welfare of unsophisticated investors: disclosures can be used to minimize the information gap and redistribute wealth between sophisticated and unsophisticated investors. Thus far, few empirical studies have examined the regulation of disclosures. Regulation of financial reporting choices Regulation can reduce the processing costs of financial statements by standardizing the communication process. The authors pose several questions about the regulation of financial reporting methods, from the objectives of standard setters, to the optimal form of organization and due process for standard-setting bodies, to the value provided to investors. Most accounting research has focused on the third question. Role of auditors and intermediaries Auditors can provide investors with assurance over the credibility of firms' financial statements. However, very little research has directly examined whether auditors truly enhance the credibility of reported financial statements; the existing evidence suggests that auditors do not provide new or timely information to investors. Financial intermediaries such as analysts also provide value for investors by making forecasts and recommendations about firm prospects. The existing evidence suggests that analysts add value in the capital markets through their forecasts (which are more accurate and timely than time-series models) and recommendations. Many studies have looked at analyst bias and other factors likely to influence their forecasts, such as experience, brokerage affiliation, innate ability, and industry specialization. Managers' reporting decisions Research on managers' reporting decisions has focused on two areas- positive accounting theory and voluntary disclosure. Positive accounting theory focuses on managers' financial reporting choices when markets are semi-strong-form efficient; the focus is on how contracting and political considerations influence managers' reporting choices given agency costs and information asymmetry. The literature on voluntary disclosure has examined the role of disclosure in the capital market.
There are six potential forces that affect managers' disclosure decisions: capital market transactions, corporate control contests, stock compensation, litigation, proprietary costs, and management talent signaling. The credibility of voluntary disclosure is also a subject of much interest in the literature.
Ball (2008)
Summary In this short essay, Ball provides some directions for the future of accounting research. As indicated by the title of the article, the main research question is the actual economic role of financial reporting. Ball places particular emphasis on the "actual" role to highlight the importance of what firms actually do rather than what they say they do. He believes this research question is important because financial reporting is an economic activity that matters from both a positive and a normative perspective. Many in the field do not have a solid understanding of the origin and functioning of financial reporting, with some believing that financial reporting exists simply because of regulatory requirements and practical demand on the part of users. These misperceptions relegate accountants to the role of mere rule-checkers. Ball views the lack of a solid understanding of the functioning of financial reporting as a major threat to accounting. He then lists a series of unanswered questions, such as "what is the relative role of markets and the political/regulatory process?" and "what are the principal supply and demand influences on financial reporting?" Many of his questions are "high level" questions that look at the field of financial accounting from a top-down view. Some of these questions may be difficult to address because they are essentially "what if" questions; for example, in an unregulated world, how would accounting standard setters such as the FASB behave? Ball tries to explain why these questions remain unaddressed, citing the lack of a counterfactual as an important barrier to research. Some of his questions will remain unanswered because they are conjectures for which we will not have data.
Friedman (1953)
This essay advances the line of argument of John Neville Keynes, advocating a positive approach to economics. The paper addresses certain methodological problems in the construction of positive economics. First, Friedman explores the relation between normative and positive economics. He believes that positive economics should be independent of any ethical or normative position: instead of passing ethical judgements, researchers should keep a strict dichotomy between the realm of economic facts and that of moral judgements. Such separation is necessary because normative judgements (such as the need for a minimum wage law) can hardly be agreed upon, while facts and data based on positive research can. Researchers should focus on developing a theory or hypothesis that provides meaningful predictions about a phenomenon without being bogged down by differences in ethical judgements. Friedman believes that a theory should be judged by its predictive power for the phenomenon it is intended to explain. A hypothesis can be rejected only if its predictions are contradicted by factual evidence, not by ethics; factual evidence can never prove a hypothesis, only fail to disprove it. When choosing among alternative hypotheses that explain the data, Friedman recommends the simpler model that yields more precise predictions. Friedman's second point is that economic theories or hypotheses should be evaluated on the basis of the significance of their assumptions rather than their descriptive accuracy. Assumptions do not need to be based on reality, and in fact, they never are; truly important hypotheses will by design have assumptions that are abstractions of reality, and in Friedman's telling, the more wildly inaccurate the assumptions, the more significant the theory. The important question is whether the assumptions are good enough approximations for the purpose at hand. However, Friedman appears to contradict himself here. What he implies is that a theory and its assumptions should be judged separately: having false assumptions is fine as long as the theory itself is valid. Yet theories are judged by whether they yield sufficiently accurate predictions, which ultimately depends on the underlying assumptions. While one cannot disprove a hypothesis purely on the basis of its assumptions, theories built on the wrong assumptions will not test what they were intended to test.