Bentley, Bloomfield, Bloomfield, and Lambert (2016)

- Attempt to refine the moral analysis of measure management by evaluating it with respect to the five moral values identified by Graham et al. (2011), "Mapping the moral domain":
  - Benevolence/Harm
  - Fairness/Inequity
  - Ingroup Loyalty/Betrayal
  - Authority/Insubordination
  - Purity/Degradation

In three studies totaling almost 5,000 subjects, we present respondents with scenarios in which employees manage performance measures by distorting how they report performance or how they operate their organizations. We measure respondents' judgments about the scenarios and their broader moral values, and interpret statistical associations in light of Moral Foundations Theory (Graham, Nosek, et al. 2011) to draw inferences about how respondents view the 'moral terrain' of morally relevant features depicted by the scenarios we present. Along with judgments about the acceptability of reporting and operational distortion, we also ask for two additional judgments particularly relevant to distortion: how much they think the manager deserves a bonus he didn't quite earn before distorting the measure, and how well the measure captures true performance. We use survey methods rather than experiments (eliciting judgments and values, rather than manipulating them), because our goal is to capture respondents' views of the moral terrains we present to them, not to test Moral Foundations Theory.

Features of the moral terrain include: whether measure-based compensation is equitable; whether the measure is closely related to the performance it is intended to capture; who is harmed or betrayed by distortion; who is owed a duty of care, or is a member of the distorter's ingroup; and what institutions and organizations must be kept pure. We use our results to propose ways to make it harder for employees to rationalize distortions, and to help anticipate when distortions might lead to more controversy than executives might expect based on their own moral reasoning (normative implications).

In study 1, our scenarios depict a salesman who distorts sales in a private, for-profit business by issuing discounts (operational distortion) or reporting sales that were not complete (reporting distortion). In study 2, our scenarios depict a public school principal who distorts statewide test results by directing staff to 'teach to the test' (operational distortion) or selectively report only high scores (reporting distortion). In study 3, our scenarios depict a hospital director who distorts financial performance by delaying the provision of expensive patient care (operational distortion) or delaying the reporting of such care (reporting distortion).

In our business, public school, and hospital settings, we conclude that respondents see reporting distortion as a more appropriate remedy for inequity than operating distortion, and see operational distortions as improving underlying performance more effectively when measures capture true performance more accurately. In our public school and hospital settings, we also conclude that respondents see the organization (i.e., the school or hospital), rather than outside stakeholders (i.e., students or patients), as the in-group to which managers owe loyalty. Respondents also see a sacred element both in reporting and in supporting a school or hospital.

Van Brenk and Majoor (2017)

- Examine to what extent the effectiveness of an audit quality bonus depends on the engagement pressure experienced by the auditor (an environmental moderator) and the auditor's drive (a personal moderator).
- High engagement pressure entails a poorly planned and executed audit with strenuous time and budget constraints, stimulating participants to conduct additional audit procedures to improve audit quality.
- Result: on average, audit quality bonuses and engagement pressure have no influence on audit quality. However, after considering the auditor's drive, the presence of an audit quality bonus and high engagement pressure increase audit quality for auditors with low drive but decrease it for auditors with high drive.
1. Hypothesize highest audit quality when there is no audit quality bonus and engagement pressure is low, because the motivation of participants is autonomous instead of controlled in this setting. Conversely, expect a decrease in audit quality when participants are stimulated by an audit quality bonus and high engagement pressure.
2. Hypothesize that the effect of an audit quality bonus and higher engagement pressure differs for auditors with low versus high drive, because different levels of motivation are likely to affect performance differently: external factors enhance audit quality for auditors with low drive (a facilitative effect) but diminish audit quality for auditors with high drive (an undermining effect).
Method: 2 × 2 × 2 design; manipulate the audit quality bonus (present or absent) and engagement pressure (low or high), and measure participants' drive using the BAS Drive Scale of Carver and White (1994), a four-item personality questionnaire that is widely used and validated in the psychology literature. DV: an overall audit quality score, computed from five indicator variables for the additional audit procedures and an expert panel's effectiveness score for each procedure (see the sketch below). Participants were provided with an audit finding regarding an inventory obsolescence allowance and asked to identify which of five additional audit procedures they would conduct, which is then scored by an expert panel of four experienced auditors.
Motivation: the PCAOB (2015) proposed linking compensation (an audit quality bonus) to quality to motivate auditors to improve audit quality; such a bonus may create equally strong incentives for audit personnel.
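A minimal sketch of how the audit quality DV could be computed under one plausible reading of the description above (a weighted sum of chosen procedures); all names and numbers are hypothetical, not the authors' code:

```python
# Illustrative sketch: composite audit quality score.
# Each of the five additional audit procedures is an indicator (1 if the
# participant chose it, else 0), weighted by the expert panel's
# effectiveness rating for that procedure.

chosen = [1, 0, 1, 1, 0]                   # participant's procedure choices
effectiveness = [0.9, 0.2, 0.7, 0.8, 0.4]  # hypothetical panel ratings

audit_quality = sum(c * e for c, e in zip(chosen, effectiveness))
print(audit_quality)  # 2.4 under these made-up numbers
```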

Psychology Replications

A large portion of replications produced weaker evidence for the original findings despite using materials provided by the original authors, advance review for methodological fidelity, and high statistical power to detect the original effect sizes. Moreover, correlational evidence is consistent with the conclusion that variation in the strength of initial evidence (such as the p-value) was more predictive of replication success than variation in the characteristics of the teams conducting the research.

a. What is a replication? There is no consensus on the meaning of the term, and there are different types of replication with different implications. The Clemens paper (cited in the anomaly papers) spells out two distinct types. The first is what we commonly think of as replication: a verification or a reproduction of the prior research using the same specification and the same population, differing only in the sample that is used. The difference between the two is that a verification uses the same sample, while a reproduction uses a different sample of subjects. The second type, which we should not commonly refer to as replication, is the robustness test. It can be either a reanalysis, in which you vary the specifications and variable codings (possibly also choosing a different sample), or an extension, in which you vary the sample and the population from which the sample is chosen. The different types of replication have different implications for interpreting the conclusions of your replication. A common problem in the field is that people mix up replication and robustness, so that when a robustness test does not find what the original found, they claim they could not replicate the results.

The Reproducibility Project found no evidence of fraud or that any original study was definitively false. Rather, it concluded that the evidence for most published findings was not nearly as strong as originally claimed.

Solutions to replication problems
• Require the researchers attempting to replicate the findings to collaborate closely with the original authors, asking for guidance on design, methodology, and materials. Most of the replications also included more subjects than the original studies, giving them more statistical power.
• Make replication a requirement at the editorial stage.

Research malpractices
1. The tendency to 'torture' the data until it confesses: researchers sometimes conduct multiple tests on the data in order to find something that can be claimed to be statistically significant. Mr. Harvey's justification: "Unfortunately, our standard testing methods are often ill-equipped to answer the questions that we pose. We are not salespeople. We are scientists!"
2. What is the value of a replication? To uncover error or fraud, to reduce uncertainty about findings, and to innovate by questioning accepted results. It is not clear which of these is valued most highly (NYT quote by Norbert Schwarz).

Implications of a failed replication
o They range from measurement error and minor good-faith oversight to scientific misconduct, with personal embarrassment and reputational cost for the author.
o If a paper fails a robustness test, it is because the original paper made a choice that is legitimately debatable; it is not clear what the right choice should be. Divergence of opinion here does not imply unethical or fraudulent behavior.
o Robustness is a matter of what the author "ought to" have done, rather than what he did.
o Confusing the two terms and referring to them with the same word can harm research productivity by reducing the original author's incentives to collaborate across bona fide disagreements about method. All papers have legitimately debatable shortcomings, and science proceeds by collaborative discussion of better approaches. The author may be pushed into an adversarial, defensive stance when informed of inconsistent results, because a claimed failed replication implies incompetence or fraud. Misunderstood claims of failed replication, in which the original researcher in fact did nothing indisputably wrong, will make it much more difficult for serious policy-relevant researchers to do their jobs. Scholars will also be less willing to share data. Anyone can find plausible ways to discredit someone else's research by changing the specifications so that the coefficient estimates change: if you try to debunk a paper with a gazillion specifications and samples, you will eventually find something that changes, which you can then use as proof of replication failure. Failed replications attract more and faster attention than successful ones (the Proteus phenomenon documented by Ioannidis and Trikalinos (2005)), with obvious perverse incentives for those seeking academic or public notice.

The solution to this problem is a clear terminology that distinguishes between replication and robustness (see the sketch below):
Reanalysis: altering the computer code from the original study. It is exclusively a reanalysis if it uses exactly the same dataset or a new sample representative of the same population. This includes new regression specifications or variable codings.
Extension test: using new data gathered on a sample representative of a different population, or gathered on the same sample at a substantially different time, or both. This includes dropping influential observations, since a truncated sample cannot represent the same population. It is exclusively an extension test if it runs identical computer code on the new data.

Both forms of robustness test estimate population parameters that are different from those in the original study, so they need not give identical results in expectation. Many robustness tests are a mix of reanalysis and extension. Results discrepant from the original should not be described as a replication failure; there is no reason these tests should give the same answer. They all test quantitatively different hypotheses than the original study, because they all change the sampling distribution of the parameter of interest. The right interpretation is that the original study is not robust to reanalysis with new covariates, or not robust to extending the data to a different country or sample. Such findings do not necessarily have negative implications.
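The verification/reproduction/reanalysis/extension taxonomy above lends itself to a small decision rule. A sketch, with the inputs and labels assumed for illustration:

```python
# Sketch of the Clemens-style taxonomy described above: a follow-up study
# is classified by whether it keeps the original code/specification, the
# original sample, and the original population.

def classify(same_spec: bool, same_sample: bool, same_population: bool) -> str:
    if same_spec and same_sample:
        return "verification (replication)"
    if same_spec and same_population:
        return "reproduction (replication)"
    if not same_spec and same_population:
        return "reanalysis (robustness)"
    return "extension (robustness)"

print(classify(True, True, True))    # verification
print(classify(True, False, True))   # reproduction: new sample, same population
print(classify(False, False, True))  # reanalysis: new specification
print(classify(True, False, False))  # extension: new population
```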

Brown, Call, Clement, Sharp (2017)

Abstract: Investor relations officers (IROs) play a central role in corporate communications with Wall Street. We survey 610 investor relations officers at publicly traded U.S. companies and conduct 14 follow-up interviews to gain insights into the nature of their interactions with sell-side analysts and institutional investors, and to deepen our understanding of the role of investor relations officers in corporate disclosure events. We find that public earnings conference calls are the most important venue for management to convey their company's message to institutional investors, and that preparing for and managing these calls are the most important determinants of IRO job performance. IROs indicate that private phone calls are more important than 10-K/10-Q reports, on-site visits, and management guidance for conveying their company's message, and more than 80% of IROs report that they conduct private "call-backs" with sell-side analysts and institutional investors following public earnings conference calls. We explore numerous topics that IROs are uniquely suited to address, and we provide insights into the investor relations, analyst, institutional investor, and disclosure literatures.

Hail, Tahoun, and Wang (2017, JAR)

Are regulatory interventions delayed reactions to market failures, or can regulators proactively pre-empt corporate misbehavior? From a public interest view, we would expect "effective" regulation to ex ante mitigate agency conflicts between corporate insiders and outsiders, and to prevent corporate misbehavior from occurring or quickly rectify transgressions. However, regulators are also self-interested and may be captured, uninformed, or ideological, and become less effective as a result.

We develop a historical time series of corporate (accounting) scandals and (accounting) regulations for a panel of 26 countries from 1800 to 2015. An analysis of the lead-lag relations at both the global and individual country level yields the following insights: (i) corporate scandals are an antecedent to regulation over long stretches of time, suggesting that regulators are typically less flexible and informed than firms; (ii) regulation is positively related to the incidence of future scandals, suggesting that regulators are not fully effective, that explicit rules are required to identify scandalous corporate actions, or that new regulations have unintended consequences; (iii) there exist systematic differences in these lead-lag relations across countries and over time, suggesting that the effectiveness of regulation is shaped by fundamental country characteristics like market development and legal tradition.

To test these ideas, we conduct the data collection in two steps. First, as a coarse proxy for the underlying constructs, we collect the yearly number of times the terms "scandal" and "regulator" are mentioned in the leading (business) newspaper in each country. In a second step, we refine our search and, out of all the articles, identify those that cover actual accounting scandals (i.e., financial reporting behavior by firms that is deemed morally or legally wrong and causes public uproar). We collect data on accounting regulation from secondary literature and (official) depositories of the laws, rules, and voluntary conventions that cover financial reporting in a country. We use this information to create a country panel comprising the yearly number of accounting scandals and regulations identified in the press and from the other sources. This large panel of historical data allows us to examine the lead-lag relations between corporate scandals and regulations and to draw inferences from the observed correlations.

Following Reinhart and Rogoff (2011), we test whether one time series is useful in forecasting the other after controlling for its own lagged values (a Granger causality test; see the sketch below). We find that media mentions of both the terms "scandal" and "regulator" exhibit high positive autocorrelation, indicating that past mentions in the press are related to future mentions of the same term. Except for the most recent sample period (after 1970), mentions of "scandal" lead mentions of "regulator." At the same time, the opposite relation also holds: mentions of "regulator" lead mentions of "scandal." We find similar, albeit statistically stronger, patterns for actual episodes of corporate scandals and regulation. Both events exhibit strong positive serial correlation.
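A minimal sketch of the Granger-style lead-lag test described above, run on hypothetical yearly count series with statsmodels (the data layout is assumed; this is not the authors' code):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
years = pd.RangeIndex(1800, 2016)
# Hypothetical yearly counts of "scandal" and "regulator" mentions.
scandal = pd.Series(rng.poisson(3, len(years)), index=years)
regulator = pd.Series(rng.poisson(2, len(years)), index=years)

# Does "scandal" help forecast "regulator" beyond regulator's own lags?
# grangercausalitytests tests whether the SECOND column Granger-causes
# the first, so the candidate cause goes last.
data = pd.concat([regulator, scandal], axis=1)
results = grangercausalitytests(data, maxlag=3)
```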

Nelson, Elliott, and Tarpley (2002)

Earnings management, accounting standards, survey. This paper reports analyses of data obtained using a field-based questionnaire in which 253 auditors from one Big 5 firm recalled and described 515 specific experiences they had with clients who they believe were attempting to manage earnings. This approach enables the authors to analyze separately managers' decisions about how to attempt earnings management and auditors' decisions about whether to prevent earnings management by requiring adjustment of the financial statements. The results indicate that managers are more likely to attempt earnings management, and auditors are less likely to adjust earnings management attempts, that are structured (not structured) with respect to precise (imprecise) standards. The authors also find that managers are more likely to make attempts that increase current-year income, but auditors are more likely to require that those attempts be adjusted; that managers are more likely to make attempts that decrease current-year income with unstructured transactions and/or when standards are imprecise; and that auditors are more likely to require adjustment of attempts that they identify as material or that are attempted by small clients.

Contribution: Unlike studies that focus only on post-audit information, the paper considers separately managers' decisions about how to attempt earnings management and auditors' decisions about whether to require adjustments. Thus, the study provides evidence about how a key feature of accounting standards (precision of rules) and a key feature of the financial reporting process (activity of external auditors) influence earnings management.

Descriptive analyses indicate that the most frequent earnings management attempts in the sample involve reserves. Respondents believe that managers' attempts were motivated by a variety of incentives, including the need to meet analysts' estimates and influence the stock market, to reach targets set by compensation contracts or debt covenants, to communicate economic information to stakeholders, and to smooth income or improve future income, as well as by combinations of these incentives. Transaction structuring often requires transaction fees, fees paid to experts for their advice, and/or modification of operational decisions that presumably were optimized prior to the structuring. Therefore, managers should engage in transaction structuring only when its anticipated financial reporting benefits exceed its anticipated out-of-pocket and operational costs. Because the benefits of transaction structuring are less certain when rules are imprecise, the authors predict managers are then less likely to structure transactions and more likely to attempt to convince the auditor that they have interpreted the rules correctly.

Blankespoor, Hendricks, and Miller (2017)

Examines the relation between cognitive perceptions of management and firm valuation. The authors develop a composite measure of investor perception using 30-second content-filtered video clips of initial public offering (IPO) roadshow presentations. They show that this measure, designed to capture viewers' overall perceptions of a CEO, is positively associated with pricing at all stages of the IPO (proposed price, offer price, and end of first day of trading). The result is robust to controls for traditional determinants of firm value. They also show that firms with highly perceived management are more likely to be matched with high-quality underwriters. In further exploratory analyses, they find the impact is greater for firms with more uncertain language in their written S-1. Taken together, the results provide evidence that investors' instinctive perceptions of management are incorporated into their assessments of firm value.

They create the measure of perception using a "thin-slice" approach common in social vision research. Specifically, they ask viewers to provide their perceptions of CEOs after watching 30-second video clips of a CEO's IPO roadshow presentation with the verbal content filtered out. This filtering isolates the nonverbal visual and auditory signals that determine rapidly formed perceptions. Consistent with their prediction, they find a positive association between cognitive perceptions of management and measures of firm value throughout the IPO process.

Contribution: The work builds on a body of research showing that investors find value in meeting with management. Surveys of investor relations firms and of analysts show direct interactions with management are highly sought after (Bushee and Miller [2012], Brown et al. [2015]). Empirical studies confirm the value of such meetings for analysts and investors (Green et al. [2014], Soltes [2014], Bushee, Jung, and Miller [2016]). There is also evidence of a capital market response to managers' affect as revealed by vocal cues during conference calls (Mayew and Venkatachalam [2012]). Specific to this setting, Ann Sherman testified to the U.S. Senate in 2012 that investors primarily attend IPO roadshows "to get a feel for [management], because [investors] are not just investing in the idea or the product; [investors] are investing or betting on the management team" (Sherman [2012]). The authors combine this evidence with the psychology literature's documentation of individuals forming rapid, intuitive perceptions to argue that investors form such perceptions of management as well.

The empirical analysis begins by examining the association between basic perceptions of management and firm valuation for a sample of 224 U.S. IPOs filed from 2011 through 2013. The authors estimate investors' perception of management using naïve participants who view 30-second content-filtered slices of CEOs' roadshow presentations. To develop a rich, robust measure of perception, they ask participants to assess each CEO's competence, trustworthiness, and attractiveness on a seven-point Likert scale. These are classic traits examined in the psychology and economics literature, selected because they are characteristics investors are likely to naturally incorporate when perceiving management. The goal is to prompt raters to consider perception from various angles so that idiosyncratic interpretations of a single characteristic or its description do not skew the results.
We focus our analysis on overall perception, which is created by combining these three attributes to provide a composite measure of perception. Each video clip is rated by at least 40 participants. We calculate mean ratings of CEO-specific perceptions of competence, trustworthiness, and attractiveness, and then average the characteristics for our summary CEO-specific measure of perception.
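A sketch of the composite measure as described: average each trait across raters for a given CEO, then average the three trait means (the ratings below are hypothetical):

```python
import pandas as pd

# Hypothetical ratings for one CEO's clip: one row per rater
# (each clip is rated by at least 40 participants in the study).
ratings = pd.DataFrame({
    "competence":      [5, 6, 4, 7, 5],
    "trustworthiness": [4, 5, 5, 6, 4],
    "attractiveness":  [3, 4, 4, 5, 3],
})

trait_means = ratings.mean()             # mean rating per trait
overall_perception = trait_means.mean()  # composite measure for this CEO
print(trait_means, overall_perception)
```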

Hou et al (2017)

Excluded microcap stocks, which are highly correlated with liquidity; thus the p-hacking appears to be worst for the liquidity anomalies. Anomaly papers, like other empirical research, should be based on sound economic theory; to the extent that we ignore theory, we run into issues with data-mining and p-hacking.

Paper summary: This is the largest replication to date of anomalies in the field of finance. The authors examine 447 anomalies recorded in the finance literature and find that the majority of them (286, or 64%) do not replicate. Of these 286 that do not replicate, 20 are from the momentum category, 37 from value-versus-growth, 11 from investment, 46 from profitability, 77 from intangibles, and 95 from liquidities. Of these categories, the liquidities variables suffer the highest casualty rate, with 93% insignificant; similarly, almost all of the distress anomaly variables are insignificant. The authors define an anomaly as significant if the average return of its highest-minus-lowest decile is significant at the 5% level. Further imposing a t-value cutoff of 3 would increase the proportion of insignificant anomalies to 85%.

For their testing procedures, they use a single standardized set of procedures for all 447 anomalies, without any consultation with the authors of the original studies, so their replication is more of a robustness test and extension than an actual replication. To test whether an anomaly can forecast future returns, they form testing deciles using both annually and monthly sorted portfolios with NYSE breakpoints and value-weighted returns (see the sketch below). This differs from most prior studies, which used NYSE-Amex-NASDAQ breakpoints and equal-weighted returns. The benefit of this approach is that it eliminates almost all of the microcap stocks, which the authors believe to be the driving force behind most of these papers. They argue that microcap stocks should not be included because they make up only 3% of market value but account for 60% of the number of stocks; due to high transaction costs, anomalies in microcaps are difficult to exploit in practice and are therefore economically insignificant. They also argue against using Fama-MacBeth cross-sectional regressions because the regression is linear in functional form and sensitive to outliers, which in this case tend to be the microcap stocks; the cross-sectional regression framework also gives researchers too much flexibility and encourages p-hacking.

For their sample period, the authors do not use the original sample periods of the prior studies. Instead, they use the sample period up to the present for their main analysis, so it could be argued that the weaker results reflect post-publication arbitrage, which has been documented by a couple of papers. However, in their robustness section, they do use the original authors' sample periods, and surprisingly the number of insignificant anomalies goes up, not down. In further robustness checks, they find most of the insignificance is due to the exclusion of the microcap stocks. In their conclusion, they argue that the anomalies literature is infested with widespread p-hacking and that, to move the literature forward, there should be a greater emphasis on theory.
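A sketch of the testing design described above, with column names assumed (this is not the authors' code): decile breakpoints are computed from NYSE stocks only, and portfolio returns are value-weighted so that microcaps carry little weight.

```python
import numpy as np
import pandas as pd

def anomaly_deciles(df: pd.DataFrame) -> pd.Series:
    """df has columns: exchange, signal, ret, mktcap (names assumed)."""
    # Breakpoints from NYSE stocks only, so microcaps don't set the cutoffs.
    edges = df.loc[df["exchange"] == "NYSE", "signal"].quantile(
        np.linspace(0, 1, 11)).to_numpy()
    edges[0], edges[-1] = -np.inf, np.inf  # ensure every stock falls in a bin
    deciles = pd.cut(df["signal"], bins=edges, labels=False)
    # Value-weighted return within each decile.
    return df.groupby(deciles).apply(
        lambda g: np.average(g["ret"], weights=g["mktcap"]))

# The anomaly counts as significant if the high-minus-low spread,
# deciles[9] - deciles[0], has |t| > 1.96 (or > 3 under the stricter cutoff).
```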

Experimental Finance

Experimentalists can classify models as having three types of assumptions: structural assumptions describe the institutions in which agents interact, including the distribution of information, possible actions, and incentives; behavioral assumptions characterize agents' preferences and decision-making abilities (such as expected utility maximization and the form of the utility function); and equilibrium assumptions describe the solution concepts used to predict behavior (such as Bayesian Nash equilibrium, rational expectations, or arbitrage-free pricing). An experiment can assume an assumption is true, test its validity, impose it without manipulating it, control and manipulate it, or ignore it. Specific studies are used to illustrate the strengths and weaknesses of the various approaches.

Kadous and Zhou (2017)

Fair value, intrinsic motivation, improving auditor decision making
- Examine whether and how auditors' intrinsic motivation for their work can be harnessed to improve auditors' information processing and their ultimate judgments in auditing fair values.
- Hypothesize that auditors whose intrinsic motivation is salient will attend to a broader set of information cues, process them more deeply, and consider more relevant evidence before reaching a conclusion.
- The authors adapt an intervention from prior research that has participants rank order a list of possible intrinsic motivators for their job (e.g., Amabile 1985). The intervention is designed to increase the salience of auditors' intrinsic motivation for their jobs relative to that in the control condition, in which auditors rank order a list of factors unrelated to job motivation.

Results: Auditors whose intrinsic motivation is salient exhibit superior cognitive processing. They identify more seeded issues with the fair value, identify more issues requiring deep processing, and request more relevant additional audit evidence than do auditors in the control condition, but they do not identify more false-positive cues. These auditors also assess the client's biased fair value as less reasonable and are more likely to consult immediately with their supervisor about the estimate, though the latter effect is only marginally significant. Structural equation model (SEM) analyses confirm that auditors' more conservative judgments are driven by the specific improvements in cognitive processing that the theory identifies. A parallel manipulation of the salience of auditors' extrinsic motivation does not improve the cognitive processes examined, suggesting that salient intrinsic motivation provides unique benefits in improving auditors' judgment and decision making in complex judgment tasks.

Contribution: There is limited prior evidence on whether the link is causal and on how intrinsic motivation improves task performance. The improvements in cognitive processing address the specific shortcomings underlying deficiencies in auditor judgments about complex estimates (Griffith, Hammersley, and Kadous 2015a; PCAOB 2016), and the authors further predict that they will benefit auditors' judgments in complex audit tasks, including judgments about complex estimates.

Method: An experiment in which 95 senior-level auditors are tasked with auditing a client's step-one analysis of a goodwill impairment test (see the sketch below). The client concluded that the estimated fair value of its business unit exceeded the carrying value, and thus the step-one test was satisfied. However, the case contains seeded cues indicating that the fair value is overstated and the step-one analysis is biased. The design relies on the assumption that auditors have intrinsic motivation for their jobs but that this motivation is not always salient; the intervention heightens its salience before the task.
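For reference, a minimal sketch of the step-one goodwill impairment comparison the case centers on (numbers hypothetical):

```python
# Step one passes when the unit's estimated fair value is at least its
# carrying value; the case's seeded cues suggest fair value is overstated.
def step_one_passes(fair_value: float, carrying_value: float) -> bool:
    return fair_value >= carrying_value

print(step_one_passes(fair_value=520.0, carrying_value=500.0))  # True as reported
# If the overstatement in fair value is large enough, step one should fail,
# which is what auditors with salient intrinsic motivation tended to catch.
```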

Eyring and Narayanan (2017)

Field Experiment. We test the performance effects of setting a relatively high reference point for peer-performance comparison. To set such a reference point, we show some online students their performance compared to the top quartile as opposed to the median. This yields a performance effect that is concave in initial performance: the effect is negative for students below the median, and positive for students between the median and the top quartile. For students above the top quartile, the effect is positive when reported performance is an outcome measure (grade) but not when it is a process measure (aggregate activity in the course). Consistent with measure type moderating the effect of a high reference point in the upper range of performance, our survey evidence shows that interest in outperforming peers persists (diminishes) at high levels of grade (aggregate activity). Neither tested reference point has a greater performance effect on average. However, showing the more effective of the two reference points in each partition of initial performance yields a 30-40% greater performance effect than showing either reference point uniformly.

Our study provides some of the first evidence on the effect of providing alternatively high reference points within RPI. We also show how the effect depends on an individual's initial performance relative to the two alternatives. Further, we address the moderating role of performance-measure type by testing a process-based and an outcome-based performance measure. We find that the effect of providing a relatively high reference point in RPI depends on one's initial performance relative to the alternatives. We test the peer top quartile and the peer median (see the sketch below). The effect of providing the higher rather than the lower reference point is concave in initial performance: negative among below-median performers, positive among above-average performers, and, in the case of an outcome-based performance measure, also positive for those in the top quartile of performance. Collectively, our findings inform the selection of a reference point to drive performance in the desired partition of initial performance. The findings also suggest that, when reports are private as in our setting, customizing the reference point based on an individual's initial performance is preferable. Managers and government regulators can incorporate these results when selecting RPI reference points to yield desired behavior. RPI reference points are playing a growing role in settings including retail, education, energy consumption, and taxpaying. The results also reveal dynamics of social comparison that help in identifying their optimal application within oft-studied systems for measuring, managing, and reporting performance.

Contribution: 1) Shows effects of RPI reference point height that operate through the private display of anonymous performance information. This speaks to the growing body of economic, psychology, accounting, and management research on such performance information display as a tool for influencing performance and behavior. 2) Contributes to the substantial accounting research showing that the format of information display influences decisions ranging from stock trading to assigning employee bonuses (Bloomfield, Nelson, and Smith 2006). The study extends this research by showing that the height of the reference point included in RPI influences performance.
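A sketch of the two RPI treatments as described, with the display wording and class data assumed for illustration:

```python
import numpy as np

scores = np.random.default_rng(1).normal(70, 10, 500)  # hypothetical class
median, top_quartile = np.percentile(scores, [50, 75])

def feedback(own_score: float, high_reference: bool) -> str:
    # High-reference treatment compares to the top quartile; control
    # compares to the median.
    name, ref = (("top quartile", top_quartile) if high_reference
                 else ("median", median))
    return f"You are {own_score - ref:+.1f} points relative to the {name}."

print(feedback(72.0, high_reference=True))   # relatively high reference point
print(feedback(72.0, high_reference=False))  # relatively low reference point
```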

Hall, Mikes, and Millo (2015)

Field Study. This paper, based on a five-year longitudinal study at two UK-based banks, documents and analyzes the practices used by risk managers as they interact and communicate with managers in their organizations. Specifically, we examine how risk managers (1) establish and maintain interpersonal connections with decision makers; and how they (2) adopt, deploy and reconfigure tools—practices that we define collectively as toolmaking. Using prior literature and our empirical observations, we distinguish between activities to which toolmaking was not central, and those to which toolmaking was important. Our study contributes to the accounting and management literature by highlighting the central role of toolmaking in explaining how functional experts may compete for the attention of decision makers in the intra-organizational marketplace for managerially relevant information. Specifically, as risk management becomes more tool-driven and toolmaking may become more prevalent, our study provides a more nuanced understanding of the nature and consequences of risk management in contemporary organizations. An explicit focus on toolmaking extends accounting research that has hitherto focused attention on the structural arrangements and interpersonal connections when explaining how functional experts can become influential.

Contributions
• Our study addresses directly this gap in the current accounting and risk-control research by showing that toolmaking is central to risk managers' interactions with other managers.
• Provides a detailed empirical account of an important, but hitherto understudied, part of organizational decision making: how risk managers incorporate their expertise into the routines and practices according to which decisions in financial institutions are made.
• Assessment of the effectiveness of risk management would benefit from a better understanding of how risk managers become involved in, and potentially have an impact on, decision-making processes in financial institutions.
• Our focus on toolmaking provides a further perspective on how tools can be studied in organizations because it focuses attention on the development and ongoing adaptation of tools, how this process interacts with the expertise of the functional expert and the business managers, and its links with the ways in which functional experts can become influential in organizations.
• Our focus on examining the specific ways in which risk managers operate in organizations also resonates with calls to move beyond standardized risk management approaches to uncover the potential for more fine-tuned and creative approaches to risk management.
• Examines in detail the dynamics between functional experts and managers.

Bentley, Bloomfield, Davidai, and Ferguson (2017)

Financial advisors frequently have an interest in the investments their clients make, forcing clients to distinguish the part of their advisor's recommendation that reflects unbiased beliefs from the parts that reflect self-interested bias. This task is complicated by the tendency of advisors to 'drink their own Kool-Aid', making some of their bias a sincerely held belief. In our first experiment, we show that clients make such distinctions more accurately when they meet the advisor face-to-face to discuss the recommendation, compared to when they receive only a written recommendation. In our second experiment, we show that clients make such distinctions more accurately when advisors are asked to provide factual information about their own actions and clients are encouraged to focus on factual information. Our results underscore the importance of post-report interactions and the value of training to request, discern, and focus on advisors' factual claims.

Research Question: we examine when and how clients can use post-report interactions with their advisors to better distinguish between unbiased, sincerely biased, and insincerely biased advice.

Results: In experiment one we find that clients are better able to distinguish between unbiased, sincerely biased, and insincerely biased recommendations when they meet with advisors face-to-face than when they do not. In experiment two, we find that they are better able to distinguish when the advisor is asked a factual question about his own prior actions and the client is trained to be sensitive to factual statements. In both experiments, we find strong evidence that advisors "drink the Kool-Aid", altering their own personal beliefs to be more consistent with their persuasion goal, and our second experiment shows that this self-deception helps advisors sway their clients, though only when clients are provided with factual information about the advisors' own choices but lack the training that emphasizes the importance of that information.

In this paper, we conduct two experiments to clarify when post-report interactions help clients discriminate between the unbiased and biased parts of an advisor's potentially self-interested report. Our investigation is shaped by the likelihood that advisors' own beliefs are biased towards their directional goal. Psychologists call such behavior 'motivated reasoning' (e.g. Kunda 1990) or 'self-deception' (Trivers 1976/2006), while practitioners often call it 'drinking your own Kool-Aid'. Kool-Aid drinking by an advisor complicates the client's task, because sincerity is typically a marker of honesty. Even if clients can identify the insincerely biased part of a report, they may still rely inappropriately on the sincerely biased part.

In our first experiment, we manipulate post-report interaction by having 33 of 66 client-advisor pairs meet face-to-face after the recommendation is delivered. In the other 33 pairs, the advisor gives the report to the client and leaves without any discussion. To measure how much this meeting helps clients identify bias, we use the advisors' two choices on their own behalf to decompose the recommendation into three parts. The unbiased part is measured by how advisors invest on their own behalf before they learn which investment offers a commission. The sincerely biased part is measured by how advisors change their investment on their own behalf after they learn which investment offers commissions. This revision is likely biased by this knowledge, but is sincere because it reflects a choice made on their own behalf. The insincerely biased part is simply the remainder of the recommendation after subtracting the first two parts.
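A worked sketch of this three-part decomposition, with hypothetical numbers (all quantities are fractions of wealth placed in the commission-paying investment):

```python
own_choice_before = 0.40  # advisor's own allocation before learning of the commission
own_choice_after = 0.55   # advisor's own allocation after learning of it
recommendation = 0.70     # allocation recommended to the client

unbiased = own_choice_before                         # 0.40
sincere_bias = own_choice_after - own_choice_before  # 0.15 ("Kool-Aid drinking")
insincere_bias = recommendation - own_choice_after   # 0.15 (the remainder)
```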

Allee, DeAngelis, and Moon (2017)

Prior research examines the readability of disclosures for managers, investors, and regulators, with a focus on the level of effort required for a person to read and understand the disclosure (Li 2008). This paper introduces a new dimension of information processing costs in capital markets by examining the scriptability of firm disclosures, or the relative ease with which a computer program or computer programmer can transform large amounts of unstructured data in firm disclosures into usable information (Bloomfield 2002). The authors validate their measure using SEC filing-derived data from prior research and identify firm and disclosure characteristics related to it. They find some evidence that the speed of the market response to filings increases with scriptability, but little evidence that scriptability affects the incidence and speed of news dissemination by Dow Jones.

They identify two basic tasks that a scripter of firm filings is likely to perform, identifying data of interest and processing that data into decision-relevant information, and measure characteristics of filings that are likely to facilitate or inhibit the automation of these tasks. They validate their measure using a series of tests relying on researcher-derived samples and measures commonly used in prior literature. They also assess whether their measure is associated with a set of firm and disclosure characteristics that they expect to relate to scriptability, and find some limited evidence that investment in the financial reporting function is related to higher scriptability, although characteristics such as technological sophistication, firm size, or age do not appear to translate into more scriptable filings. Overall, the most significant predictor of filing scriptability appears to be preparation by a top filing agent. In their primary tests, they provide some evidence that scriptability relates positively to measures of the speed of market response to SEC filings. They also predict that scriptability increases the likelihood and speed of news dissemination, but find limited evidence consistent with this prediction.

Contribution: the analyses significantly broaden the scope of the literature suggesting that lower information processing costs facilitate efficient capital allocation.

To measure scriptability, the authors identify two basic tasks that a scripter is likely to perform and measure characteristics of SEC filings that facilitate or inhibit the automation of these tasks. For the first task, they measure the ease of programmatically searching through the filing to identify data of interest, such as the MD&A section of a 10-K, by focusing on the organizational attributes of the disclosure. For the second task, they measure aspects of the filing that facilitate or interfere with data processing; examples include the presence of binary data (PDF or image formats) instead of text, the quality of tokens, use of non-ASCII encoding, and the quality and consistency of table formatting. Three validation tests verify that the measures of scriptability capture computerized information processing costs of firm disclosures. First, they expect low scriptability to increase the likelihood of a filing's exclusion from academic samples. Second, they expect poor scriptability to increase noise in programmatically derived disclosure measures, so they measure how the standard deviation of two common linguistic measures (tone and readability) varies across quantiles of their scriptability measure. Finally, they posit that disclosures with low scriptability require more sophisticated scripting in order to accurately extract data.
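A sketch of the kinds of checks such a scriptability measure might involve; the heuristics below are assumed for illustration and are not the authors' actual measures:

```python
import re

def scriptability_flags(filing_text: str) -> dict:
    return {
        # Binary content such as embedded graphics/PDFs inhibits scripting.
        "has_binary_blobs": bool(
            re.search(r"<TYPE>(GRAPHIC|PDF)", filing_text)),
        # Heavy non-ASCII encoding complicates tokenization.
        "non_ascii_share": sum(ord(c) > 127 for c in filing_text)
                           / max(len(filing_text), 1),
        # Can a script locate the MD&A section by a standard header?
        "mdna_locatable": bool(re.search(
            r"item\s+7\.?\s+management'?s discussion", filing_text, re.I)),
    }

print(scriptability_flags("<TYPE>GRAPHIC ... Item 7. Management's Discussion"))
```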

Bloomfield (1996)

Prior studies like Lundholm (1991) look at factors that influence market efficiency but do not incorporate the strategic nature of managers' decisions. This paper first looks at a setting with no managerial discretion, and confirms that the market reacts fully to public information but underreacts to private information held by only a few traders (although to a lesser extent when private information is held by more traders). When managers have reporting discretion, they engage in costly upward manipulation of public signals, especially when markets are less efficient and thus more likely to fully react to public signals but underreact to private signals. However, investors "undo" this upward manipulation, so that inflation of the public signal does not increase market prices. Furthermore, investors overestimate the inflation of the public signal, so that they actually underreact to the public signal when managers have discretion. In sum, managerial discretion forces managers to engage in costly inflation of a public signal, but with no net benefit because investors "undo" the manipulation. Thus managers may actually prefer not to have discretion, as its absence acts as a credible commitment that they will not inflate signals. Bloomfield argues that the demonstrated underreactions to private signals are consistent with findings from papers like Ou and Penman (1989) or Sloan (1996), which focus on less salient disclosures, while underreactions to public signals are similar to the underreactions to salient earnings documented by Bernard and Thomas (1989, 1990).
• This study examines how investors react to manager reporting discretion. The paper hypothesizes that managers will not be able to credibly commit to truthfulness in their reports; thus, investors will always react as if managers have manipulated the signals provided. This creates a negative association between price errors and the public signals, since investors believe high signals are a result of manipulation, not truthful reporting. The paper provides evidence for this hypothesis (using a 2 × 2 design manipulating availability of signals and management disclosure discretion), showing that markets react less strongly to public signals when discretion is available (tested through a paired regression). The paper also finds evidence of managerial "smoothing" of the public signal. The paper notes that this circumstance creates a sort of Prisoner's Dilemma, where managers are actually better off without reporting discretion.
• Managers make favorable information available to more investors in less efficient markets but not in more efficient markets.
• 2 × 2 design manipulating (1) availability of private signals to investors (High Availability vs. Low Availability) and (2) existence of managerial reporting discretion (No Discretion vs. Discretion).
• The security has a value equal to 100 plus the sum of five random numbers, and the dependent variable is price error (P - V); see the sketch below.
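A sketch of the experimental asset and dependent variable as described, with the signal distribution assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
signals = rng.integers(-10, 11, size=5)  # five random numbers (range assumed)
V = 100 + signals.sum()  # fundamental value of the security
P = 104.0                # market price from a trading round (made up)
price_error = P - V      # the paper's dependent variable
print(V, price_error)
```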

Michels (2017)

Research question: Do investors value recognition more than disclosure? The paper provides evidence on the differential effect of requiring disclosure as opposed to recognition by exploiting the required accounting treatment for subsequent events.
Problem:
• Investors may rely more heavily on recognized values because of differences in the characteristics of transactions that are recognized rather than disclosed, or because of a change in the perceived importance of the reporting item when regulators require recognition of a previously disclosed item.
• Discerning whether requiring recognition alone results in a stronger market reaction to an accounting item is difficult, as accounting standards typically require similar transactions to be uniformly recognized or disclosed (very little variation for similar items being recognized vs. disclosed), either across or within firms.
• Self-selection by firms choosing to disclose or recognize makes it difficult to reach definitive conclusions regarding the causal effects of disclosure and recognition.
• If the two groups of firms are experiencing economic losses of different magnitudes, any difference in the market reaction may be driven by the difference in the events rather than by a difference in the required accounting treatments.
Identification: The required accounting treatment is not determined by informational properties such as reliability, but rather by the timing of a natural disaster (see the sketch below). In this setting, the disclosed and recognized items are therefore not a priori distinct in their information content.
Main summary:
• The paper provides evidence on the differential effect of requiring disclosure as opposed to recognition by exploiting the required accounting treatment for subsequent events. A subsequent event is an event occurring after a firm's balance sheet date but before the firm issues its financial statements.
Contribution:
• Addresses several challenges to causal inference typically encountered in studies of disclosure versus recognition (very little variation in disclosure and recognition for comparable transactions).
• Adds to the required-disclosure literature by providing evidence on how investors respond to required disclosure relative to required recognition in a setting where the transactions are similar, the regulatory regime is held constant, and concerns of self-selection are minimized. Is this true?
• I believe this paper has a unique contribution in addressing challenges to causal inference in studies of disclosure versus recognition. Using a clever setting, the author is able to reduce, though not eliminate, the self-selection and endogeneity issues present in the study. The author is very clear in outlining the issues that this paper addresses and those that remain, which provides future studies with the opportunity to further address these challenges.
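A sketch of the timing-based identification described above, with hypothetical dates: the same type of event (a natural disaster causing a loss) is recognized if it hits before the balance sheet date and merely disclosed if it hits in the window between that date and the filing.

```python
from datetime import date

def treatment(event: date, balance_sheet: date, filing: date) -> str:
    if event <= balance_sheet:
        return "recognized in the financial statements"
    if event <= filing:
        return "disclosed as a subsequent event"
    return "outside this reporting cycle"

print(treatment(date(2011, 12, 20), date(2011, 12, 31), date(2012, 2, 15)))
print(treatment(date(2012, 1, 20), date(2011, 12, 31), date(2012, 2, 15)))
```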

Conceptual Framework

The Preface to each FASB Concepts Statement includes a description that is the same as, or similar to, the following taken from Concepts Statement 6: The conceptual framework is a coherent system of interrelated objectives and fundamentals that is expected to lead to consistent standards and that prescribes the nature, function, and limits of financial accounting and reporting. It is expected to serve the public interest by providing structure and direction to financial accounting and reporting to facilitate the provision of evenhanded financial and related information that helps promote the efficient allocation of scarce resources in the economy and society, including assisting capital and other markets to function efficiently. Establishment of objectives and identification of fundamental concepts will not directly solve financial accounting and reporting problems. Rather, objectives give direction, and concepts are tools for solving problems.

Bauer (2015)

The author draws on Social Identity Theory to test whether auditors with a stronger client identity will agree more with the client, and whether this effect is mitigated when professional identity salience is heightened. He also tests whether heightened professional identity salience increases attitudes related to the core professional value of professional skepticism. The results show that, when professional identity salience is not heightened, auditors with a stronger client identity agree more with the client's preference: they assess a higher likelihood that the client will continue as a going concern (Experiment 1) and are willing to accept a smaller asset write-down (Experiment 2). Auditors with a strong client identity agree less with the client when professional identity salience is heightened. The paper demonstrates two effective mechanisms to heighten professional identity salience: 1) visual cues related to the profession, such as logos displayed on professional publications, and 2) completing a mind map on professional values.

Implication: heightening professional identity salience can successfully mitigate the effects of a strong client identity by increasing auditors' professional skepticism. Regulators should consider implementing mechanisms that heighten professional identity salience, as such mechanisms can work regardless of short or long auditor tenure or audit firm rotation.

I consider whether arousing an auditor's identity as a professional, by increasing its salience, promotes auditor independence in mind. Specifically, using Social Identity Theory (SIT), I predict that an auditor with a stronger client identity will agree more with the client (Bamber and Iyer 2007), but that this effect will diminish when the auditor's professional identity salience is heightened. SIT defines identity strength as the extent of overlap between an individual's and the identifying group's norms and values, and defines identity salience as how aroused the identity is at a given moment relative to other identities (Forehand, Deshpande, and Reed 2002; LeBoeuf, Shafir, and Bayuk 2010). Client (professional) identity reflects auditors' shared norms, values, and attributes with the audit client (accounting profession). I expect auditors' professional identity salience to reduce the independence threat of a strong client identity because SIT predicts auditors will be less influenced by client identity strength when the salience of another identity, their professional identity, is heightened (LeBoeuf et al. 2010). To empirically demonstrate independence threats from short auditor tenure, which prior accounting research largely ignores, I conduct two experiments using experienced auditors as participants in a task setting with no prior auditor-client history. I empirically test my prediction that heightened professional identity salience mitigates the propensity for auditors with a stronger client identity to agree more with the client. In Experiment Two, I also test the prediction that increasing professional identity salience, which I expect to arouse the norms and values of the professional identity, will increase auditors' professional skepticism.

Experiment 1: employs a going concern setting. Client identity strength is manipulated as weaker or stronger by characterizing the client's image related to its corporate social responsibility (CSR) as worse or better.
Experiment 2: employs an inventory write-down judgment that, unlike the going concern judgment in Experiment 1, cannot be plausibly related to client CSR choices reflecting different strategic decisions. Client identity strength is manipulated solely based on client image related to CSR activities to more directly test the overlap of auditor-client values.

The Objective of Financial Reporting

This document broadly defines the objectives of financial reporting. It states that financial statements are primarily provided for capital providers, defines relevance (predictive and confirmatory value) and faithful representation (complete, neutral, and free from material error) as fundamental characteristics, highlights comparability, verifiability, timeliness, and understandability as enhancing characteristics, and recognizes materiality and costs (the cost-benefit analysis) as constraints on financial reporting.

SFAC 6

This is the financial accounting concept statement which defines the elements of financial statements. The statement highlights decision usefulness, focuses on assets & liabilities, defines assets & liabilities, broadens the concept of "comprehensive income", and defines accrual accounting.

Kothari, Ramanna, and Skinner (2010)

This paper examines how GAAP should look under an economic objective to allocate capital within an economy. The paper provides some evidence that modern accounting has been formed based on the demand for performance measurement and control (a contracting perspective), with equity valuation a secondary objective. This view highlights the importance of verifiability and conservatism in modern accounting. Other major points include a caution against the wide use of fair value, three theories of regulation (public interest, capture and ideology), externalities as the drivers of "market failure" for regulating GAAP, competition between the IASB and FASB as an optimal regime-setting outcome, and the "funnel" theory of principles-based regulation.

Bratton (2006)

This paper examines the structure of the FASB and compares it to other possible forms of standard setting. The paper has three sections. The first describes how the current FASB structure is somewhat unique: it is a private body that is funded by the state. The FASB attempts to provide useful standards subject to veto power by political entities, and takes private preferences into account without permitting them to determine results. The second section describes how the FASB embraced "decision usefulness", a concept which champions user interests over all others; this increases policy legitimacy but leads to fights with managers. In addition, the mission is useful to the SEC, which can essentially "outsource" policy decisions and focus on enforcement while still maintaining control. The third section discusses rules and principles, noting that rules narrow discretion but prevent the emergence of first-best reports. It also notes that rules seem to be an inevitable consequence of standards in the real world.

Bloomfield (2002)

This paper lays out an alternative to the efficient market hypothesis called the "incomplete revelation hypothesis". The model is based on outcomes in "noisy rational expectations" models, where noise keeps market prices from revealing complete information. It provides the same qualitative predictions as the EMH (efficient markets hypothesis), but it assumes that the cost of extracting useful statistics from public data prohibits markets from fully revealing the meaning of those statistics. Essentially, the model is the EMH with room for apparent irrationality through higher processing costs and underreaction to information. The Efficient Markets Hypothesis of Fama (1970) states that markets should fully reflect all public information, implying that efforts to identify and trade mispriced stocks are wasted and that the transparency of financial statement information should not affect the degree to which it is reflected in the market price. While evidence existed that markets weren't perfectly efficient, there was no alternative to the EMH in place to account for observed discrepancies. This paper introduces an alternative called the "Incomplete Revelation Hypothesis" (IRH), which posits that information that is more costly to extract (in both money and time) is less likely to be completely revealed in market prices. The IRH builds on noisy rational expectations models (like Grossman and Stiglitz, 1980) to suggest that, in equilibrium, traders collect just enough information to make their trading gains against noise traders equal to their collection costs. Bloomfield distinguishes between "data", which are simple numbers that are easy to extract, and "statistics", which are the meaningful facts that can be extracted by those who exert the effort to understand the data. Statistics drive more trading interest and are more completely revealed by market prices (1) if more traders collect them, (2) if those who collect them have more money to put at risk, and (3) if those who collect them have a greater tolerance for risk. The IRH can potentially account for anomalies like the accruals anomaly (Sloan, 1996) and post-earnings announcement drift (Ball and Brown, 1968; Bernard and Thomas, 1990). More importantly for its acceptance, the IRH clarifies that informational inefficiency in the market DOES NOT REQUIRE INVESTOR IRRATIONALITY (although noise traders may still be irrational).
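To make the equilibrium intuition concrete, here is a minimal numerical sketch (not from the paper) of the IRH's key comparative static: the costlier a statistic is to extract, the smaller the fraction of traders who collect it, and the less completely the price reveals it. The mapping from extraction cost to the informed fraction, and the toy pricing rule, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def price_informativeness(frac_informed, noise_sd=1.0, n=10_000):
    """Toy pricing rule: the price mixes the true statistic (traded on by
    informed investors) with noise-trader demand. The larger the informed
    fraction, the more completely the price reveals the statistic."""
    statistic = rng.normal(size=n)               # the costly-to-extract statistic
    noise = rng.normal(scale=noise_sd, size=n)   # noise-trader demand
    price = frac_informed * statistic + (1 - frac_informed) * noise
    return np.corrcoef(price, statistic)[0, 1]

# Hypothetical mapping: higher extraction cost -> fewer informed traders.
for cost, frac_informed in [(0.1, 0.8), (0.5, 0.4), (0.9, 0.1)]:
    corr = price_informativeness(frac_informed)
    print(f"extraction cost {cost}: corr(price, statistic) = {corr:.2f}")
```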

Jin, Luca, and Martin (2016)

This paper uses laboratory experiments to directly test a central prediction of disclosure theory: that market forces can lead businesses to voluntarily provide information about the quality of their products. This theoretical prediction is based on unraveling arguments, which require that consumers hold correct beliefs about non-disclosed information. Instead, we find that receivers are insufficiently skeptical about non-disclosed information, and as a consequence, senders do not always disclose their private information. However, when subjects are informed about non-disclosed information after each round, behavior slowly converges to full unraveling. This convergence appears to be driven by an asymmetric response in receiver actions after learning that they were profitably deceived. Despite the change in receiver behavior, stated beliefs about sender strategies remain insufficiently skeptical, which suggests that while direct and immediate feedback induces equilibrium behavior, it does not reduce strategic naïveté. Contribution: 1) A central tenet of the economics of information is the idea that market forces can drive firms to voluntarily and completely disclose such information, as long as the information is verifiable and the costs of disclosure are small (Viscusi 1978; Grossman and Hart 1980; Grossman 1981; Milgrom 1981). 2) Market efficiency: these findings suggest that market forces are insufficient to close the information gap between sellers and buyers unless buyers receive fast and precise feedback about mistakes after each transaction. In these situations, mandatory disclosure may be necessary if the policy goal is complete disclosure. Our aim is to investigate the unraveling predictions using lab experiments that are complex enough to capture the main strategic tensions of the theory yet simple enough for subjects to easily understand the structure of the game. In our experiments, there are two players: an information sender (e.g., the firm) and an information receiver (e.g., the consumer). The sender receives private information that perfectly identifies the true state (e.g., the firm's true quality level). The sender then makes a single decision: whether or not to disclose this information to the receiver; disclosure must be truthful, so the sender cannot misrepresent the state. By prohibiting dishonest reporting, we reproduce the assumptions underlying the unraveling prediction and mirror an important feature of many markets, such as those with truth-in-advertising laws. Based on the choices of 422 experimental subjects, we find a fundamental breakdown in the logic of unraveling: receivers are insufficiently skeptical about undisclosed information. That is, receivers underestimate the extent to which no news is bad news. This complements the growing field evidence on attention and inference in disclosure contexts.
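The unraveling logic can be sketched as a simple best-response iteration (a hypothetical illustration, not the authors' experimental interface): with a fully skeptical receiver, silence is read as the average of the hidden types and disclosure cascades down until only the worst type remains silent; with an insufficiently skeptical receiver who sticks with the prior mean, middling types can profitably stay silent.

```python
import numpy as np

states = np.arange(1, 6)  # quality levels 1..5, uniform prior

def silence_belief(receiver_belief_fn, n_rounds=20):
    """Iterate sender/receiver best responses: a sender discloses iff the
    true state beats what the receiver infers from silence."""
    belief = states.mean()  # receiver's initial read of non-disclosure
    for _ in range(n_rounds):
        hidden = states[states <= belief]    # types that stay silent
        belief = receiver_belief_fn(hidden)  # receiver updates (or not)
    return belief

# Skeptical receiver: silence means "the average hidden type" -> unravels
# until only the worst type remains pooled with silence.
skeptical = silence_belief(lambda hidden: hidden.mean() if hidden.size else states.min())
# Insufficiently skeptical receiver: keeps the prior mean, so middling
# types profitably stay silent (the experimental finding).
naive = silence_belief(lambda hidden: states.mean())
print(f"skeptical receiver's belief after silence: {skeptical:.2f}")
print(f"naive receiver's belief after silence:     {naive:.2f}")
```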

Jiang, Wang, and Wangerin (2017)

This study provides descriptive evidence about how the Financial Accounting Standards Board (FASB) sets Generally Accepted Accounting Principles (GAAP). Based on 211 financial accounting standards issued between 1973 and 2014, we report the reasons that the FASB adds or removes projects from its agenda, the parties most frequently bringing issues to the FASB's attention, and commonly recurring topics across different standards over time. The current version addresses common questions such as the reasons for adding or dropping issues from the FASB agenda, the entities that bring these issues to the FASB's attention, the common themes across accounting standards over time, and the influence of Board members' professional affiliations. We find that reducing diverse practices and inconsistent guidance is the most frequent reason cited by the FASB to take on a project, and more than half of the standards are intended to enhance comparability. We find that the SEC, AICPA, and large public accounting firms are identified most frequently by the FASB as the parties bringing issues to its attention. Accounting for financial instruments is the most frequent recurring topic across accounting standards, which potentially explains the growth in fair value measurement in U.S. GAAP over time. We analyze the dissenting opinions written by Board members and find some evidence that the stated reasons for disagreements are associated with their professional backgrounds. However, our analyses indicate Board members' positions on fair value accounting are context-specific and cannot be fully explained by their professional backgrounds. Extends the line of research pioneered by Allen and Ramanna (2013), who illustrate the value of analyzing the role of individual board members in standard-setting decisions. Bloomfield et al. (2016) point out that in-depth descriptive studies are necessary before developing formal theories and carrying out statistical tests. Allen and Ramanna (2013) attribute the rise of fair value accounting to the increase of FASB Board members with a financial services background. They argue that starting from 1993, the FASB has had at least one Board member with a background in the financial services industry. We propose the alternative view that instead of Board members affecting standards, the topics in the standards (e.g., financial instruments) might be important factors considered by the FAF when making FASB appointment decisions. For example, it is possible that the FAF's appointments of Leslie Seidman and Thomas Linsmeier, both well-known experts on financial instruments, reflect the FASB's need for such expertise to address issues related to financial instruments at the time.
Why is an issue added or dropped from the FASB agenda?
- The most common reason is achieving comparability by reducing diverse practices or inconsistent treatment of similar transactions.
- About 10% of projects are added to the agenda to improve international comparability with IFRS.
- The second most cited reason for adding an issue to the agenda is implementation issues: reconsidering or clarifying guidance that creates implementation difficulties or, in some cases, responding to requests for exceptions.
- The third reason is changes in economic conditions or regulation, about 10% of the total standards issued.
- The SEC is the organization most frequently mentioned in explaining why the FASB decides to add a project to its agenda, followed by the AICPA, major accounting firms, auditors, and preparers.
- Projects also arise from the FASB's own review of existing GAAP or its outreach activities.

Lambert, Jones, Brazel, and Showalter (2017)

Using publicly available data from annual reports, we find that SEC rule changes (33-8128 and 33-8644) that impose time pressure on the audits of registered firms have a negative impact on earnings quality, which we interpret as evidence of lower audit quality. Consistent with our predictions, we find that the 10-K accelerations reduced audit quality only when they actually reduced the number of days from year-end to the audit report date, and that this effect was more acute for smaller, accelerated filers and during the initial deadline change (relative to the second). We also provide insights into the quality of these audits by conducting a survey of thirty-two retired audit partners. Survey results underscore the challenges time pressure imposes on receiving and evaluating complex valuations (such as for derivatives, pensions, and goodwill) and resolving audit adjustments. Securities and Exchange Commission (SEC) rules 33-8128 and 33-8644 substantially reduced the 10-K filing period for large accelerated filers and accelerated filers by 15 days, from 90 days after fiscal year-end to 60 and 75 days, respectively (SEC 2002, 2005). For many firms and their auditors, such regulation led to exogenously imposed year-end time pressure to meet the new filing deadlines. This setting provides a natural experiment that we use to provide archival evidence on the effect of time pressure on audit/earnings quality. We also provide rich qualitative information related to the pressure audit firms experienced during the acceleration periods, areas in which time pressure resulted in audit difficulties, the ways in which audit firms attempted to alleviate the pressure, and the resulting quality of accelerated audits. The combination of our archival and qualitative data allows us to further explore the impact of regulatory-induced pressure on audit firms and contribute to an emerging stream of literature that explores the impact of controversial regulatory changes on the quality of information supplied to financial statement users. Standard setting implication: overall, the combination of our archival and survey-based evidence should inform deliberations by U.S. and non-U.S. regulatory bodies considering future filing accelerations. Regulators should be acutely aware of the extent to which such accelerations may increase the time pressure placed on the financial statement audit. Our results related to accelerated filers suggest that caution should be taken before considering a further reduction for smaller firms (e.g., from 75 to 60 days) or expanding accelerations to even smaller, non-accelerated filers (who currently still face a 90-day filing deadline). If such accelerations are undertaken in the future, audit firms can strive to increase the extent to which the best practices we identify can be implemented on a particular audit.
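The archival design lends itself to a difference-in-differences specification. The sketch below is a hypothetical reconstruction, not the authors' exact model, with invented names: a file audit_panel.csv with columns abs_dacc (absolute discretionary accruals, the earnings-quality proxy), post (1 after the deadline acceleration), lag_reduced (1 if the acceleration actually shortened the year-end-to-report-date window), and firm_id for clustering.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("audit_panel.csv")  # hypothetical input file

# Prediction: quality falls (|DA| rises) only where post * lag_reduced = 1,
# i.e., a positive coefficient on the interaction term.
model = smf.ols("abs_dacc ~ post * lag_reduced", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]}
)
print(model.summary())
```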

Asay, Libby, and Rennekamp (2017)

Using two experiments and a survey of experienced managers, we investigate the determinants of disclosure readability and other linguistic characteristics in a controlled setting. We predict and find that participants provide reports that are significantly less readable when performance is bad than when performance is good, particularly when participants have a stronger self-enhancement motive in the form of a reporting goal to portray the firm as favorably as possible. Further, we find that the difference in readability between good and bad news disclosures appears to be driven primarily by participants making good news disclosures more readable rather than by making bad news disclosures less readable. When directly asked about the reports that they have prepared, we also find that neither firm performance nor reporting goals affect participants' perceptions of the readability of their reports or their ratings of the extent to which they were motivated to make the report easier or more difficult to read. Further, our survey participants believe the most likely explanation for our pattern of results is that managers increase the readability of good news disclosures in order to highlight the positive performance. Combined, the results do not appear to support arguments made in prior literature that managers intentionally obfuscate poor performance by making disclosures less readable. In addition to the results discussed above on readability, we also provide evidence on how other linguistic choices are affected by variation in firm performance and reporting goals. Specifically, we find that participants use more passive voice and fewer first person singular pronouns when news is bad, a technique that distances the manager from the information conveyed. Further, the effect of bad news on the use of passive voice is larger when participants have a favorable reporting goal. We also find that participants include more causal words and use words that focus more on the future when performance is bad than when performance is good. Debriefing questions that directly ask participants about their reporting choices suggest that they are aware of these differences, and survey participants believe the most likely explanation for these results is that managers try to provide more information about the future in order to satisfy investors' demand.
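Readability in this literature is usually proxied by formula-based indices such as the Gunning Fog index, computed as 0.4 × (words per sentence + 100 × share of complex words); whether this paper uses Fog specifically is not stated here, so treat the following as a generic sketch with a crude vowel-group syllable counter.

```python
import re

def gunning_fog(text):
    """Crude Gunning Fog index: 0.4 * (words per sentence + 100 * share of
    complex words). Syllables are approximated by vowel groups, so this is
    only a rough sketch."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = lambda w: max(1, len(re.findall(r"[aeiouy]+", w.lower())))
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))

good_news = "Sales grew. Margins improved. We expect more growth."
bad_news = ("Notwithstanding unanticipated macroeconomic deterioration, "
            "profitability contracted relative to preliminary expectations.")
print(f"good-news fog: {gunning_fog(good_news):.1f}")  # lower = more readable
print(f"bad-news fog:  {gunning_fog(bad_news):.1f}")
```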

Bochkay, Chava and Hales (2016)

We develop a dictionary of linguistic extremity using the words and phrases spoken in earnings conference calls and analyze how investors react to this extreme language. We document that investors respond more strongly to extreme, rather than moderate, language in earnings conference calls, as evidenced by significantly higher abnormal trading volume and stock returns around the calls. Further, these results are more pronounced for firms with weaker information environments. We also find that linguistic extremity contains information about a firm's future operating performance and that there is a strong association between extreme tone and analyst forecast revisions following the calls. Our results suggest that investors are influenced not just by what managers say when communicating performance, but also how they say it, with extreme language largely reflecting reality rather than hyperbole. We use a large sample of conference calls to develop a comprehensive dictionary for spoken language in earnings conference calls. Each entry in our dictionary is ranked by human annotators according to its tone (positive or negative) and its linguistic extremity (the extent to which it is positive or negative). We find that extreme language in the conference call significantly increases trading volume and prompts significant price revisions around the call. Investors react strongly to both positive and negative extreme tone. In addition, we find that firms with weaker information environments and higher information processing costs have stronger return and volume reactions to extreme language. We also find that analysts respond to extreme language, as evidenced by forecast revisions following earnings calls. Additionally, we find that managers use extreme language to inform investors, as extreme tone is strongly associated with future operating performance. Finally, we find that investors appear to find linguistic extremity in earnings conference calls credible and price the information in earnings news and managers' choice of language correctly.
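The scoring approach can be illustrated with a toy version of such a dictionary. The entries and scales below are invented for illustration; the paper's dictionary is built from actual conference-call language and ranked by human annotators.

```python
# Toy dictionary: word -> (tone, extremity), both on illustrative scales.
EXTREMITY = {
    "good": (+1, 1), "great": (+1, 2), "spectacular": (+1, 3),
    "soft": (-1, 1), "weak": (-1, 2), "disastrous": (-1, 3),
}

def score_call(transcript):
    """Average tone-weighted extremity of dictionary hits in a transcript."""
    hits = [EXTREMITY[w] for w in transcript.lower().split() if w in EXTREMITY]
    return sum(tone * ext for tone, ext in hits) / len(hits) if hits else 0.0

print(score_call("Results this quarter were spectacular and demand was great"))
print(score_call("Demand was soft and pricing was weak"))
```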

Wang, Spezio, and Camerer (2010)

We report experiments on sender-receiver games with an incentive for senders to exaggerate. Subjects "overcommunicate": messages are more informative of the true state than they should be in equilibrium. Eyetracking shows that senders look at payoffs in a way that is consistent with a level-k model. A combination of sender messages and lookup patterns predicts the true state about twice as often as predicted by equilibrium. Using these measures to infer the state would enable receiver subjects to hypothetically earn 16-21 percent more than they actually do, an economic value of 60 percent of the maximum increment. The paper reports experiments on sender-receiver games with an incentive for biased transmission (such as managers or security analysts painting a rosy picture about a corporation's earnings prospects). Senders observe a state S, an integer from 1 to 5, and choose a message M. Receivers observe M (but not S) and choose an action A. The sender prefers that the receiver choose an action A = S + b, which is b units higher than the true state, where b = 0 (truth telling is optimal), b = 1, or b = 2. But receivers know the payoff structure, so they should be suspicious of inflated messages M.
1. Overcommunication in the sender-receiver game is consistent with L0, L1, L2, and equilibrium (EQ) sender behavior produced by a level-k model of the sender-receiver game in which L0 sender behavior is anchored at truth telling.
2. Eyetracking data provide the following support for the level-k model:
   a. Attention to structure and own payoffs: Sender subjects pay attention to important parameters (state and bias) of the sender-receiver game. This indicates subjects are thinking carefully about the basic structure of the game, even if they are not following equilibrium theory. Sender subjects also look at their own payoffs more than their opponents'.
   b. Truth bias: Sender subjects focus too much on the true-state payoff row. This bias is consistent with a failure to "think in the opponent's shoes," as in Meghana Bhatt and Colin F. Camerer (2005).
   c. Individual level-k lookup patterns: Sender subjects focus on the payoffs corresponding to the action A = S (L0 reasoning), A = S + b (L1 reasoning), and so on, up to the corresponding level-k reasoning for each individual subject based on his or her level-k type. This indicates particular level-k type subjects do generally exhibit the corresponding lookup patterns.
3. Right before and after the message is sent, senders' pupils dilate more when their deception is larger in magnitude. This suggests subjects feel guilty for deceiving (as in Uri Gneezy 2005), or that deception is cognitively difficult (as the level-k model assumes).
4. Prediction: Based on the eyetracking results, the authors predict the true state observed by the sender using lookup data and messages. This prediction exercise suggests it is possible to increase the receiver's payoff (beyond what was earned in the experiments) by 16-21 percent, resulting in an economic value of 60 percent of the maximum achievable increase.
This is the first study in experimental economics to use a combination of video-based eyetracking and pupil dilation, and it is, of course, exploratory and therefore hardly conclusive. But the eyetracking and pupil dilation results by themselves suggest that the implicit assumption in equilibrium theories of "cheap talk" in games with communication (namely, that deception has no cognitive or emotional cost) is not completely right.
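A minimal sketch of the level-k sender behavior described above, anchored at truth telling; capping messages at 5 is an assumption for illustration.

```python
STATES = range(1, 6)  # S in {1,...,5}; messages are capped to the same range

def sender_message(state, bias, level):
    """Level-k sender, anchored at truth telling: L0 reports the truth,
    and each higher level best-responds by shifting the message up by
    the bias once more (L1: S+b, L2: S+2b, ...)."""
    return min(5, state + level * bias)

# With bias b=1, low-level senders "overcommunicate": even inflated
# messages remain informative about the true state, unlike the babbling
# equilibrium in which messages carry no information.
for level in range(3):
    msgs = [sender_message(s, bias=1, level=level) for s in STATES]
    print(f"L{level} sender messages for S=1..5: {msgs}")
```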

Bernard, Cade, Hodge (2017)

Whereas prior research emphasizes the monitoring benefits of concentrated stock ownership, we examine a variety of potential benefits of dispersed ownership for public companies. Using a field experiment to rule out reverse causality, we examine whether a small investment in a company's stock leads investors to purchase more of the company's products and adopt other views and preferences that benefit the company. We find little evidence consistent with our hypotheses using frequentist statistics, and Bayesian parameter estimation shows substantial downward belief revision for more optimistic ex ante expectations of the treatment effects for the average investor. However, we do find that the effects of ownership on product purchase behavior and on regulatory preferences are intuitively stronger for certain subgroups of investors: namely, for investors who are most likely to consume the types of products offered by the company and for investors who are most likely to be politically active, respectively. The results contribute to our understanding of the benefits of dispersed stock ownership and are informative to public company managers and directors. A large body of literature studies the benefits of concentrated stock ownership on public firm performance. Consistent with the strong incentives of concentrated owners to monitor management, prior papers have linked concentrated ownership by certain institutions to reduced excess CEO compensation, greater corporate innovation, and better post-merger performance, among other outcome measures. In contrast, relatively few papers have considered potential benefits of dispersed ownership. Contribution: adds to the growing literature in accounting that examines the effect of investment position on individuals' judgments and decisions (Elliott, Rennekamp, and White 2016; Fanning, Agoglia, and Piercy 2015; Hales 2007; Seybert and Bloomfield 2009). We predict that even small amounts of stock ownership can change individual investors' behaviors in ways that benefit the firm. In particular, we predict that stock ownership leads investors to purchase more of the firm's products and adopt other views and preferences that benefit the firm. Although our primary focus is on product and regulatory preferences that could positively affect the firm's operating performance, we also provide evidence on other potential capital market benefits associated with dispersed stock ownership, e.g., the effects on investors' earnings expectations and assessments of financial reporting and earnings quality. We further expect these effects are not limited to the investors themselves; i.e., we expect investors' altered behavior to spread within social networks to influence the behavior of friends, family, and colleagues. Thus, dispersed ownership in the aggregate could improve firm performance, particularly to the extent investors influence others to adopt their preferences and behaviors. H1: A small investment in a company's stock causes investors to purchase more of the company's products. H2: A small investment in a company's stock causes investors to hold regulatory views more closely aligned with the financial interests of the company. A key challenge in identifying the effect of stock ownership on individual investor behaviors is ruling out reverse causality. In contrast to prior archival and survey evidence, we address the reverse causality confound by generating data using a field experiment in which investors are randomly assigned stock ownership.
After several months, we collect data on actual purchase behavior and on a number of investors' other views and preferences.
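The Bayesian result (larger downward revision for more optimistic ex ante expectations) falls out of a standard normal-normal conjugate update. The sketch below uses invented numbers, not the paper's estimates.

```python
# Normal-normal conjugate update: the posterior mean is a precision-weighted
# average of the prior mean and the sample mean, so a near-zero estimated
# effect forces larger downward revisions on more optimistic priors.
def posterior_mean(prior_mean, prior_var, sample_mean, sample_var, n):
    prec_prior, prec_data = 1 / prior_var, n / sample_var
    return (prec_prior * prior_mean + prec_data * sample_mean) / (prec_prior + prec_data)

observed_effect, obs_var, n = 0.02, 1.0, 500   # near-zero estimated effect
for prior in (0.05, 0.20, 0.50):               # increasingly optimistic priors
    post = posterior_mean(prior, 0.05, observed_effect, obs_var, n)
    print(f"prior {prior:.2f} -> posterior {post:.3f} (revision {post - prior:+.3f})")
```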

Bloomfield (1996)

- Three-stage experiment:
  - Stage 1: Investors trade securities in No Discretion/High-Availability and No Discretion/Low-Availability settings.
  - Stage 2: Subjects from the first stage act as managers, choosing how to report information on each security to a cohort of new investors. Total payoff is a function of security price. They can increase or decrease the public signal at a cost but cannot change the total value of the security.
  - Stage 3: New subjects trade in Discretion/High-Availability and Discretion/Low-Availability settings.
- Main findings:
  - Without reporting discretion: markets react completely to public information but under-react to private signals, with the under-reactions more severe when the private signals are less widely available. Markets are more efficient in the high-availability setting (Bloomfield 1996).
  - Managers realize the market inefficiency and inflate (deflate) the public (private) signal, and do so more consistently when the private signals are less widely available (the marginal benefit to inflation is greater). Managers inflate (deflate) the public (private) signal more consistently in the low-availability setting.
  - Investors anticipate and undo the effects of managers' reporting decisions: average market prices are no higher under the discretion setting than the no-discretion setting (a prisoner's dilemma: managers engage in costly report inflation that yields no average benefit). There is no difference in average pricing errors between the discretion and no-discretion settings.
  - Discretion leads markets to under-react to the public signal because investors have difficulty using the realization of the manipulated public signal to estimate the extent of report inflation. Investors know that higher realizations of the public signal indicate a greater likelihood of inflation, but they overestimate that likelihood, leading the market price to be too low when the public signal is high.

