Topics you need to know

Negative relationship.

A negative relationship would imply that higher returns are associated with less risk.

Databases and Data Structures

Data Structures in Accounting Systems—All information and instructions used in IT systems are executed in binary code (i.e., zeros and ones). This section looks at how the zeros and ones are strung together to create meaning.

A. Bit (binary digit)—An individual zero or one; the smallest piece of information that can be represented.

B. Byte—A group of (usually) eight bits used to represent alphabetic and numeric characters and other symbols (3, g, X, ?, etc.). Several coding systems are used to assign specific bytes to characters; ASCII and EBCDIC are the two most commonly used coding systems. Each system defines the sequence of zeros and ones that represents each character.

C. Field—A group of characters (bytes) identifying a characteristic of an entity. A data value is a specific value found in a field. Fields can consist of a single character (Y, N) but usually consist of a group of characters. Each field is defined as a specific data type; Date, Text, and Number are common data types.

D. Record—A group of related fields (or attributes) describing an individual instance of an entity (a specific invoice, a particular customer, an individual product).

E. File—A collection of records for one specific entity (an invoice file, a customer file, a product file). In a database environment, files are sometimes called tables.

F. Database—A set of logically related files.

Study Tip: Except for "file," the words get longer as the units get bigger: Bit (3 characters), Byte (4 characters), Field (5 characters), Record (6 characters), File (4 characters), Database (8 characters).

Flat and Proprietary Files

A. Data in relational databases is stored in a structured, "normalized" form. Normalization means that the data is stored in an extremely efficient form that minimizes data redundancy and helps prevent data errors. The problem with normalized data is that it is difficult to use outside of the database.

B. To share data outside of a relational database, data is often stored as text using a delimiter to separate the fields. This is called a "flat file."
1. Two examples of flat file types are CSV (comma-separated values) and TSV (tab-separated values), where the delimiters are (obviously) commas (",", the "C" in CSV) and tabs (the "T" in TSV).
2. AICPA Audit Data Standards recommend the use of a "pipe" (i.e., "|") as a delimiter for flat files, since the pipe is rarely used in Western languages (e.g., English).
3. Flat files are great for sharing simple data sets, but they are inefficient in their storage of complex data sets since flat files are intentionally not normalized. So, flat files are easy to use but inefficient for complex data sets, and they include data redundancy.

C. Many proprietary file types also exist for sharing files. These file types are created by organizations or individuals within specific software packages.
1. Currently, the most common proprietary file type for file sharing is Microsoft Excel (.xls or .xlsx). Another example of a proprietary file type is PDF (portable document format), which was created by Adobe.
2. An advantage of sharing Excel files is that almost every user can open and share them.
3. A disadvantage of sharing Excel files is that they are limited to about 1 million rows, which seems like a lot of data until one starts working with "big data" sets.

Databases

A. A Set of Logically Related Tables (or Files)—Most business data is highly interrelated; consequently, most business data is stored in databases.
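To make the flat-file discussion concrete, here is a minimal Python sketch of writing and reading a pipe-delimited flat file of the kind the AICPA Audit Data Standards recommend. The file name and field names are hypothetical.

```python
import csv

# Write two customer records to a pipe-delimited flat file
# (hypothetical file and field names).
rows = [
    {"customer_id": "C001", "name": "Acme Corp", "balance": "1500.00"},
    {"customer_id": "C002", "name": "Baker LLC", "balance": "250.00"},
]
with open("customers.txt", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["customer_id", "name", "balance"], delimiter="|"
    )
    writer.writeheader()
    writer.writerows(rows)

# Read the flat file back; each row is one record, each column one field.
with open("customers.txt", newline="") as f:
    for record in csv.DictReader(f, delimiter="|"):
        print(record["customer_id"], record["name"], record["balance"])
```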
Database Management System

1. A system for creating and managing a well-structured database. A "middleware" program that interacts with the database application and the operating system to define the database, enter transactions into the database, and extract information from the database. The DBMS uses three special languages to accomplish these objectives:
a. Data definition language (DDL)—Allows the definition of tables, fields, and relationships among tables.
b. Data manipulation language (DML)—Allows the user to add new records, delete old records, and update existing records.
c. Data query language (DQL)—Allows the user to extract information from the database. Most relational databases use structured query language (SQL) to extract the data; some systems provide a graphic interface that essentially allows the user to "drag and drop" fields into a query grid to create a query. These products are usually called query-by-example (QBE).

2. Some examples of relational database management systems:
a. Microsoft SQL Server
b. Oracle Database
c. Oracle MySQL
d. Teradata

3. Database controls—A DBMS should include the following controls:
a. Concurrent access management—For example, to control multiple users attempting to access the same file or record. These controls prevent record-change lockouts and errors.
b. Access controls—That is, tables that specify who can access what data within the system. Used to limit access to appropriate users.
c. Data definition standards—For example, to determine and enforce which data elements must be entered and which are optional. These improve and ensure data quality.
d. Backup and recovery procedures—To ensure integrity in the event of system errors or outages. These prevent data loss and corruption.
e. Update privileges—Define who can update the data, and when the data can and should be updated. These control data changes and improve data integrity.
f. Data elements and relationships controls—To ensure data accuracy, completeness, and consistency.

F. Database Terminology and Principles

1. A view is a presentation of data that is selected from one or more tables.
2. A view is often created with a query—a request to create, manipulate, or view something in a database.
3. The schema is the structure of the database: What are the tables in the database, and how do the tables connect with one another?
4. The tables are connected through keys (primary and foreign). A primary key is a field (or column) that uniquely identifies every record (or row) in a table.
a. For example, imagine that we have a table of customers and a table of customer orders. The primary key of customer ID will uniquely identify every customer in the customer table and will connect (link) the customer table to the orders table.
b. A foreign key is an attribute in a table that is used to find a record (or row) in a table. A foreign key does not uniquely identify a record or row in a table. A foreign key and a secondary key are the same thing.
c. While customer ID is the primary key of the customer table, it will be a foreign key in the orders table. The primary key of the orders table will be order ID.
d. The customer ID (the foreign key in the orders table) will connect the orders table to the customer table, but not every order in the orders table will have a unique customer ID (since customers, we hope, will place more than one order).
However, every order will have a unique order ID, which is why it is the primary key for the orders table.
e. Finally, a secondary key is an attribute in a table that is used to find a record (or row) in a table. A secondary key does not uniquely identify a record or row in a table. For example, in an orders table, the order number will be the primary key and the customer ID will be a secondary key. "Secondary key" and "foreign key" are different terms for the same thing.
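A minimal sketch of these ideas using Python's built-in sqlite3 module. The customer/orders schema follows the lesson's example, but the specific column names are illustrative; the CREATE TABLE statements are DDL, the INSERTs are DML, and the SELECT is DQL (SQL).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the tables, their primary keys, and the relationship.
cur.execute("""CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL)""")
cur.execute("""CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    total       REAL,
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id))""")

# DML: insert records. One customer can place many orders, so customer_id
# repeats in orders (a foreign key) but order_id never repeats (primary key).
cur.execute("INSERT INTO customers VALUES (1, 'Acme Corp')")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(101, 1, 250.00), (102, 1, 75.50)])

# DQL: join the tables through the key to list each customer's orders.
for row in cur.execute("""SELECT c.name, o.order_id, o.total
                          FROM customers c
                          JOIN orders o ON o.customer_id = c.customer_id"""):
    print(row)
```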

move tickets

documents that identify the internal transfer of parts, the location to which they are transferred, and the time of the transfer

spontaneous financing

financing that arises during the natural course of business without the need for special arrangements. Accounts payable is the classic example, since suppliers extend trade credit to the business as part of normal purchasing.

Valuation Techniques—Option Pricing

Option: A contract that entitles the owner (holder) to buy (call option) or sell (put option) an asset (e.g., stock) at a stated price within a specified period. Financial options are a form of derivative instrument (contract).
1. Under the terms of an American-style option, the option can be exercised any time prior to expiration.
2. Under the terms of a European-style option, the option can be exercised only at the expiration (maturity) date.

Valuing Options
1. An option may or may not have value.
2. Valuing an option, including determining that it has no value, is based on six factors:
a. Current stock price relative to the exercise price of the option—The difference between the current price and the exercise price affects the value of the option. The impact of the difference depends on whether the option is a call option or a put option.
i. Call option (a contract that gives the right to buy)—A current price above the exercise (or strike) price increases the option value; the option is considered "in the money." The greater the excess, the greater the option value.
ii. Put option (a contract that gives the right to sell)—A current price below the exercise price increases the option value; the option is considered "in the money." The lower the current price relative to the exercise price, the greater the option value.
b. Time to the expiration of the option—The longer the time to expiration, the greater the option value (because there is a longer time for the price of the stock to go up).
c. The risk-free rate of return in the market—The higher the risk-free rate, the greater the option value.
d. A measure of risk for the optioned security, such as standard deviation—The larger the standard deviation, the greater the option value (because the price of the stock is more volatile; it goes up higher and down further than the market overall).
e. Exercise price
f. Dividend payments on the optioned stock—The smaller the dividend payments, the greater the option value (because more earnings are being retained).
Each of these factors bears directly on the fair value of an option.

Black-Scholes Model

A. The original Black-Scholes model was developed to value options under specific conditions; thus, it is appropriate for:
1. European call options, which permit exercise only at the expiration date
2. Options for stocks that pay no dividends
3. Options for stocks whose price increases in small increments
4. Discounting the exercise price using the risk-free rate, which is assumed to remain constant

B. As with other models used to estimate the fair value of an option, the Black-Scholes method uses the six factors cited above. The advantage of the Black-Scholes model is the addition of two elements:
1. Probability factors for:
a. The likelihood that the price of the stock will pay off within the time to expiration, and
b. The likelihood that the option will be exercised.
2. Discounting of the exercise price

C. Many of the limitations (condition constraints) in the original Black-Scholes model have been overcome by subsequent modifications, so that today modified Black-Scholes models are widely used in valuing options.

D. The underlying theory of the Black-Scholes method and the related computation can be somewhat complex. Therefore, its use is best carried out using computer applications.

Binomial Option Pricing Model (BOPM)

A. The binomial option pricing model (BOPM) is a generalizable numerical method for the valuation of options.
1. The BOPM uses a "tree" to estimate value at a number of time points between the valuation date and the expiration of the option.
2. Each time point where the tree "branches" represents a possible price for the underlying stock at that time.
3. Valuation is performed iteratively, starting at each of the final nodes (those that may be reached at the time of expiration) and then working backwards through the tree toward the first node (valuation date).
4. The value computed at each stage is the value of the option at that point in time, including the single value at the valuation date.

Example (one-period binomial valuation; the numbers imply an exercise price of $60, a 10% discount rate, and a 100-share contract): The stock has an 80% chance of selling at $72.50 at the end of the option period, which is $12.50 above the exercise price. The stock has a 20% chance of selling at $65.00 at the end of the option period, which is $5.00 above the exercise price. Therefore, the value of the option is: [(.80 × $12.50) + (.20 × $5.00)]/1.10 = [$10.00 + $1.00]/1.10 = $11.00/1.10 = $10.00 per share × 100 shares = $1,000.
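The lesson's example can be reproduced in a few lines of Python; the exercise price ($60), discount rate (10%), and 100-share contract size are implied by the figures above. The Black-Scholes function that follows is a sketch of the standard textbook formula for a European call on a non-dividend-paying stock, not part of the lesson's computation, and its inputs are illustrative.

```python
from math import log, sqrt, exp, erf

# One-period binomial valuation, reproducing the lesson's numbers.
p_up, price_up = 0.80, 72.50      # 80% chance stock sells at $72.50
p_down, price_down = 0.20, 65.00  # 20% chance stock sells at $65.00
exercise, r = 60.00, 0.10         # implied exercise price and discount rate

payoff_up = max(price_up - exercise, 0.0)      # $12.50
payoff_down = max(price_down - exercise, 0.0)  # $5.00
value = (p_up * payoff_up + p_down * payoff_down) / (1 + r)
print(value, value * 100)  # 10.0 per share; 1000.0 for the 100-share contract

# Standard Black-Scholes value of a European call on a non-dividend stock:
# C = S*N(d1) - K*e^(-rT)*N(d2)
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, rf, sigma, t):
    d1 = (log(s / k) + (rf + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-rf * t) * norm_cdf(d2)

print(round(bs_call(60.0, 60.0, 0.05, 0.25, 1.0), 2))  # illustrative inputs
```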

Gross Measures—Economic Activity

Gross Measures—Common measures of the total activity or output of the U.S. economy include:

A. Nominal Gross Domestic Product (Nominal GDP)—Measures the total output of final goods and services produced for exchange in the domestic market during a period (usually a year).
1. GDP does not include:
a. Goods or services that require additional processing before being sold for final use (i.e., raw materials or intermediate goods);
b. Goods produced in a prior period but sold in the current period (those goods are included in GDP of the prior period);
c. Resale of used goods (which does not create new goods);
d. Activities for which there is no market exchange (e.g., do-it-yourself productive activities);
e. Goods or services produced in foreign countries by U.S.-owned entities (only domestically produced goods/services are included);
f. Illegal activities;
g. Transfer payments (e.g., welfare payments, social security, etc.; any effect on GDP occurs when the payments are spent on goods/services);
h. Financial transactions that simply transfer claims to existing assets (e.g., stock transactions, incurring/paying debt, etc., which in themselves do not produce a good/service); or
i. Adjustments for changing prices of goods and services over time.
2. As shown in the free-market flow model, in theory the amount spent (expenditures) for goods and services should equal the amount received (income) for providing those goods and services. Therefore, there are two means of calculating GDP—the expenditures approach and the income approach (also called gross domestic income [GDI]).
a. Expenditure approach—Measures GDP using the value of final sales, derived as the sum of the spending of:
i. Individuals—In the form of consumption expenditures (C) for durable and nondurable goods and for services.
ii. Businesses—In the form of investments (I) in residential and nonresidential (e.g., plant and equipment) construction and new inventory.
iii. Governmental entities—In the form of goods and services purchased by governments (G).
iv. Foreign buyers—In the form of net exports [exports (X) − imports (M)] of U.S.-produced goods and services.
This approach may be expressed as: GDP = C + I + G + (X − M)
b. Income approach—Measures GDP as the value of income and resource costs, derived as the sum of wages, self-employment income, rent, interest, profits, indirect business taxes, depreciation, and income of foreigners. This approach may be expressed as:
GDP (also GDI) = Wages + Self-employment income + Rent + Interest + Profits + Indirect business taxes + Depreciation + Income of foreigners

B. Real Gross Domestic Product (Real GDP)—Measures the total output of final goods and services produced for exchange in the domestic market during a period (usually a year) at constant prices.
1. Gross domestic product (GDP) deflator—The GDP deflator is a comprehensive measure of price levels used to derive real GDP. It relates the price paid for all new, domestically produced goods and services during a period to prices paid for goods and services in a prior reference (base) period. The specific goods and services included change from year to year based on changes in consumption and investment patterns in the economy. Using the GDP deflator, the calculation of real GDP is:
Real GDP = (Nominal GDP/GDP Deflator) × 100
2. If the nominal GDP and the real GDP are known, the formula can be rearranged to determine the GDP deflator:
GDP Deflator = (Nominal GDP/Real GDP) × 100
3. Real GDP measures production in terms of prices that existed at a specific prior period; that is, it adjusts for changing prices using a price index (the GDP deflator).
4. During a period of rising prices (i.e., inflation, which is the historic norm), applying the price index to nominal GDP results in a real GDP that is lower than nominal GDP. A period of high inflation and a small increase in nominal GDP could result in a decrease in real GDP for the period.
5. Real GDP per capita measures GDP per individual.
a. Real GDP per capita = Real GDP/Population.
b. Real GDP per capita is a common measure of the standard of living in a country.
c. Changes in real GDP per capita measure changes in the standard of living and, therefore, economic growth or decline.

C. Potential Gross Domestic Product (Potential GDP)—Measures the maximum final output that can occur in the domestic economy at a point in time without creating upward pressure on the general level of prices in the economy. The point of maximum final output will be a point on the production-possibility frontier for the economy.

D. Net Domestic Product (NDP)—Measures GDP less a deduction for "capital consumption" during the period—the equivalent of depreciation. Thus, NDP is GDP less the amount of capital that would be needed to replace the capital consumed during the period.

E. Gross National Product (GNP)—Measures the total output of all goods and services produced worldwide using economic resources of U.S. entities. In 1992, GNP was replaced by GDP as the primary measure of the U.S. economy. GNP includes both the cost of replacing capital (the depreciation factor) and the cost of investment in new capital.

F. Net National Product (NNP)—Measures the total output of all goods and services produced worldwide using economic resources of U.S. entities, but unlike GNP, NNP includes only the cost of investment in new capital (i.e., no amount is included for depreciation).

G. National Income—Measures the total payments for economic resources included in the production of all goods and services, including payments for wages, rent, interest, and profits (including income earned by the government from "business-like" activities), but it does not include indirect business taxes included in the cost of final output (sales taxes, business property taxes, etc.) or government transfer payments.

H. Personal Income—Measures the amount (portion) of national income, before personal income taxes, received by individuals and non-profit corporations, plus transfer payments from the government (including, for example, unemployment insurance benefits, veteran benefits, disability payments, welfare program payments, and certain government subsidies), less employee social security insurance payments and undistributed business profits.

I. Personal Disposable Income—Measures the amount of income individuals have available for spending after taxes are deducted from total personal income.

A production-possibility curve measures the maximum amount of various goods and services an economy can produce at a given time with available technology and efficient use of all available resources. National income is the total payments for economic resources included in the production of all goods and services.
During the recessionary phase of a business cycle, actual national income is typically less than potential national income because of the decreased demand characteristic of that phase and the resulting decrease in payments for goods and services.
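A tiny worked sketch of the two GDP formulas above; the dollar amounts are assumed, purely for illustration.

```python
# Expenditure approach: GDP = C + I + G + (X - M). Figures are assumed,
# in billions of dollars.
C, I, G, X, M = 14_000, 3_500, 3_800, 2_500, 3_100
nominal_gdp = C + I + G + (X - M)               # 20,700

# Real GDP = (Nominal GDP / GDP deflator) x 100, with base period = 100.
gdp_deflator = 112.5                            # prices up 12.5% since base
real_gdp = nominal_gdp / gdp_deflator * 100     # 18,400: lower, as expected
print(nominal_gdp, real_gdp)
```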

Managing Cyber Risk: Part II—A Framework for Cybersecurity

B. In 2013, then-President Obama issued an executive order to enhance the security and resilience of the U.S. cyber environment. The resulting cyber risk framework was a collaboration between the government and the private sector.

C. The goals of the framework included creating a common language for understanding, and cost-effective means for managing, organizational cybersecurity risks without imposing regulations. Specifically, the framework provides a means for organizations to:
1. Describe current cybersecurity and existing risks
2. Describe target or goal cybersecurity and desired types and levels of risk
3. Identify and prioritize improvements to cybersecurity
4. Assess progress toward the cybersecurity goal
5. Communicate with stakeholders about cybersecurity and risk

The framework complements, but does not replace, existing risk management (e.g., COSO) and cybersecurity (e.g., COBIT) approaches. The framework consists of three parts: the core, the profiles, and the implementation tiers.

A. Framework Structure
1. The core includes cybersecurity activities, outcomes, and references (i.e., standards and guidelines).
2. The profiles help align organizational cybersecurity activities with business requirements, risk tolerances, and resources.
3. The implementation tiers are a way to view and understand alternative approaches to managing cybersecurity risk.

B. The framework core is a matrix of four columns or elements by five rows or functions that lists activities (with examples) to achieve specific cybersecurity outcomes. The four core elements (functions, categories, subcategories, and references) are shown as columns, and the five functions or activities (identify, protect, detect, respond, recover) are shown as rows (see the figure in the chapter).

The core elements relate to one another.

A. Functions organize basic, high-level cybersecurity activities and include: Identify, Protect, Detect, Respond, and Recover (see descriptions below). They help manage cybersecurity risk by organizing information, enabling risk management, addressing threats, and enabling learning through monitoring.
1. The functions align with existing methods for incident management and help in assessing the value of cybersecurity investments. For example, investments in cybersecurity planning improve responses and recovery, which reduces the effect of cybersecurity events on service quality.

B. Categories are high-level cybersecurity outcomes that link to organizational needs and activities. Examples of categories are: asset management, access control, physical security, and incident detection processes.

C. Subcategories divide categories into specific outcomes of technical and/or management activities. In accounting and auditing terms, these are high-level control goals. Examples include: identify and catalog external information systems; protect data at rest; and investigate notifications from detection systems.

D. (Informative) References are specific standards, guidelines, and practices that provide benchmarks and methods for achieving the control goals (i.e., outcomes) found in the subcategories.

IV. The five core functions (listed here) may be performed periodically or continuously to address evolving cybersecurity risks.

A. Identify—Develop the foundational understanding to manage organizational cybersecurity risk by identifying and assessing organizational systems, assets, data, and capabilities.
1. Identification activities include understanding the business context, the resources that support critical functions, and the related cybersecurity risks. Examples of outcome categories within this function include: asset management, business environment assessment, governance assessment, risk assessment, and risk management strategy.

B. Protect—Develop and implement controls to ensure delivery of critical infrastructure services.
1. The protect function enables preventing, detecting, and correcting cybersecurity events. Examples of outcome categories within this function include: access control, awareness and training, data security, information protection processes and procedures, maintenance, and protective technology.

C. Detect—Develop and implement controls to identify cybersecurity incidents. This topic is covered in the "Computer Crime, Attack Methods, and Cyber-Incident Response" lesson. Examples of outcome categories within this function include: anomalies and events, security continuous monitoring, and detection processes.

D. Respond—Develop and implement controls to respond to detected cybersecurity events. Examples of outcome categories within this function include: response planning, communications, analysis, mitigation, and security improvements.

E. Recover—Develop and implement controls for building resilience and restoring capabilities or services impaired due to a cybersecurity event. Examples of outcome categories within this function include: recovery planning, improvements, and communications.

V. Implementation Tiers

A. Implementation tiers identify the degree of control that an organization desires to apply to cybersecurity risk. The tiers range from partial (Tier 1) to adaptive (Tier 4) and describe an increasing degree of rigor and sophistication in cybersecurity risk management and integration with an organization's overall risk management practices.
1. Organizations should determine their desired tier, consistent with organizational goals, feasibility, and cyber risk appetite. Given rapidly changing cyber risks, the assessment of tiers requires frequent attention.

B. The four tier definitions are as follows:

1. Tier 1: Partial
a. Risk Management—Organizational cybersecurity risk management practices are informal. Risk is managed in an ad hoc and reactive manner. Prioritization of cybersecurity activities may not be directly informed by organizational risk objectives, the threat environment, or business requirements.
b. Integrated Risk Management Program—Limited awareness of cybersecurity risks, with no organization-wide approach to managing cybersecurity risk. Cybersecurity risk management occurs irregularly on a case-by-case basis. Organizational sharing of cybersecurity information is limited.
c. External Participation—The organization has weak or nonexistent processes to coordinate and collaborate with other entities.

2. Tier 2: Risk Informed
a. Risk Management—Management approves risk management practices when needed but not as part of an organization-wide policy. Prioritization of cybersecurity activities is informed by organizational risk objectives, the threat environment, or business requirements.
b. Integrated Risk Management Program—While there is some awareness of organizational cybersecurity risk, there is no established, organization-wide approach to managing cybersecurity risk. Risk-informed, management-approved processes and procedures are defined and implemented, and staff have adequate resources for cybersecurity duties.
Organizational cybersecurity information sharing is informal and as needed.
c. External Participation—The organization assesses and understands its cybersecurity roles and risks but has not formalized its capabilities to share information externally.

3. Tier 3: Repeatable
a. Risk Management Process—The organization's risk management practices are formally approved as policy. Organizational cybersecurity practices are regularly updated based on the application of risk management processes to changes in business requirements, changing threats, and evolving technologies.
b. Integrated Risk Management Program—Organization-wide management of cybersecurity risk exists. Management has risk-informed policies, processes, and procedures that are defined, implemented, and regularly reviewed. Consistent, effective methods respond to changes in risk. Personnel possess the knowledge and skills to perform their appointed roles and responsibilities.
c. External Participation—The organization understands its dependencies and communicates with cybersecurity partners to enable collaboration and risk-based management in response to incidents.

4. Tier 4: Adaptive
a. Risk Management Process—The organization adapts its cybersecurity practices based on experience and predictive indicators derived from cybersecurity activities. Continuous improvement processes include advanced cybersecurity technologies and practices. The organization actively adapts to a changing cybersecurity landscape and responds to evolving and sophisticated threats in a timely manner.
b. Integrated Risk Management Program—An organization-wide approach to managing cybersecurity risk uses risk-informed policies, processes, and procedures to address cybersecurity events. Cybersecurity risk management is part of the organizational culture and evolves from an awareness of previous activities, information shared by other sources, and continuous awareness of activities on organizational systems and networks.
c. External Participation—The organization manages risk and actively shares information with partners to ensure that accurate, current information is shared to improve collective cybersecurity before a cybersecurity event occurs.

VII. Recommended Framework Applications

A. Review and Assess Cybersecurity Practices—The organization asks, "How are we doing?" with respect to cybersecurity and cyber risk, including comparing current cybersecurity activities with those in the core. This review does not replace formal, organization-wide risk management (e.g., guided by COSO). The assessment may reveal a need to strengthen some cybersecurity practices and scale back others. This reprioritizing and repurposing of resources should reduce prioritized cyber risks.

B. Establish or Improve a Cybersecurity Program—The following steps illustrate use of the framework to create a new cybersecurity program or improve an existing program. Organizations may repeat these steps as needed.
1. Prioritize risks and determine scope. After identifying its mission, objectives, and high-level priorities, the organization makes strategic decisions regarding the scope and purpose of cybersecurity systems and the assets needed to support these objectives.
2. Link objectives to environment. The organization identifies its systems and assets, regulatory requirements, and overall risk approach to support its cybersecurity program. It also identifies threats to, and vulnerabilities of, those systems and assets.
3. Create a current profile.
Develop a current profile by indicating which category and subcategory outcomes from the framework core are currently achieved.
4. Conduct a risk assessment. Guided by the overall risk management process or a previous risk assessment, analyze the operational environment to determine the likelihood of a cybersecurity event and its potential impact. Monitor for and consider emerging risks, threats, and vulnerable data to understand risk likelihood and impact.
5. Create a target profile. Assess framework categories and subcategories to determine desired cybersecurity outcomes. Consider additional categories and subcategories to account for unique organizational risks, as needed (e.g., financial fraud risk in financial institutions). Consider influences and requirements of external stakeholders, such as sector entities, customers, and business partners.
6. Determine, analyze, and prioritize gaps. Compare the current and target profiles to determine gaps. Create a prioritized action plan to address the gaps and determine the resources necessary to do so.
7. Implement the action plan. Determine and implement actions to address the gaps.

C. Communicate Cybersecurity Requirements to Stakeholders
1. Use the framework's common language to communicate requirements among interdependent stakeholders for the delivery of essential critical infrastructure services. Examples include:
- Create a target profile to share cybersecurity risk management requirements with an external party (e.g., a cloud or Internet service provider, an external auditor, or a regulator).
- Determine the current cybersecurity state to report results as part of a control review.
- Use a target profile to convey required categories and subcategories to an external partner.
- Within a critical infrastructure sector (e.g., the financial services industry), create a target profile to share as a baseline profile. Use this baseline profile to build tailored target profiles that are customized to specific organizational members' cybersecurity risks and goals.

D. Identify Opportunities to Adapt or Apply New or Revised References
1. Use the framework to identify opportunities for new or revised standards, guidelines, or practices where additional references would help address emerging risks.
2. For example, an organization implementing a given subcategory, or developing a new subcategory, might discover that there are few relevant informative references for an activity. To address that need, the organization might collaborate with technology leaders or standards bodies to draft, develop, and coordinate standards, guidelines, or practices.

E. Protect Privacy and Civil Liberties
1. Privacy and civil liberty risks arise when personal information is used, collected, processed, maintained, or disclosed in connection with an organization's cybersecurity activities. Examples of activities with potential privacy or civil liberty risks include:
a. Cybersecurity activities that result in the over-collection or over-retention of personal information (e.g., phone records or medical information)
b. Disclosure or use of personal information unrelated to cybersecurity activities
c. Cybersecurity activities that result in denial of service or other similar potentially adverse impacts (e.g., when cybersecurity results in service outages at an airport or secured government building)
2. Processes for addressing privacy and civil liberty risks:
a. Governance of cybersecurity risk
i. The organization's assessment of cybersecurity risk and responses should consider privacy implications.
ii. Individuals with cybersecurity-related privacy responsibilities report to appropriate management and receive appropriate training (e.g., in generally accepted privacy principles [GAPP]).
iii. Organizational processes support compliance of cybersecurity activities with applicable privacy laws, regulations, and constitutional requirements.
iv. The organization continuously and periodically assesses the privacy implications of its cybersecurity measures and controls.
b. Processes for identifying and authorizing individuals to access organizational assets and systems
i. Identify and address the privacy implications of data access when such access includes collecting, disclosing, or using personal information.
c. Awareness and training measures
i. Cybersecurity workforce training includes training in organizational privacy policies and relevant privacy regulations.
ii. The organization informs service providers of the organization's privacy policies and monitors service providers for compliance with those policies.
d. Anomalous activity detection and system and asset monitoring
i. The organization conducts privacy reviews of anomalous activity detection and cybersecurity monitoring.
e. Response activities, including information sharing or other mitigation efforts
i. The organization assesses and addresses whether, when, how, and to what extent personal information is shared outside the organization as part of cybersecurity assessment and monitoring.
ii. The organization conducts privacy reviews of cybersecurity initiatives.

Accounts Receivable Management

From an accounting perspective, accounts receivable management is concerned with the conditions leading to the recognition of receivables (the debit) and the process that results in eliminating the receivable (the credit). Therefore, this lesson will consider:
1. Establishing general terms of credit
2. Determining customer creditworthiness and setting credit limits
3. Collecting accounts receivable

Establishing General Terms of Credit—If sales are to be made on credit, the firm must establish the general terms under which such sales will be made. To a certain extent, for competitive reasons, the terms of sale adopted by a firm will need to approximate terms established in its industry. Specific terms-of-sale decisions to be made include:

A. Total Credit Period—Establishes the maximum period for which credit is extended. Typical industry practice reflects that the length of the credit period relates to the "durability" of goods sold. For example, firms that sell perishable goods (e.g., fresh produce) typically have a shorter credit period than firms that sell more durable goods. The credit period establishes the length of time the firm is expected to finance its sales on credit and for which it must, in turn, have financing.

B. Discount Terms for Early Payment—If a discount is to be offered for early payment of accounts, the discount rate and period must be decided. The combination of the discount rate and period will determine the effective interest rate associated with the discount offered which, in turn, will determine the effectiveness of the discount policy. As we saw in the earlier discussion of trade accounts payable, the effective interest rate on cash discounts not taken is usually significant.
1. The rate and period a firm can economically offer depend on the margin realized on its sales and its cost of financing its accounts receivable. Practically, the rate and period will need to be competitive with other firms in the industry.

C. Penalty for Late Payment—Determines the penalty to be assessed if customers don't pay by the final due date, including the length of any "stretch" period before the penalty applies. The penalty should at least cover the cost of financing the accounts receivable for the overdue period.

D. Nature of Credit Sales Documentation—Determines the form of documentation to be required from customers at the time they purchase on account. The most common arrangement is to sell on an open account, that is, an implicit contract documented only by a receipt signed by the buyer. If the amount being charged is very large or if the buyer's credit is suspect, a firm will likely require more formal documentation, such as a commercial draft. If foreign sales are to be made, appropriate processes will have to be decided upon.

Determining Customer Creditworthiness and Setting Credit Limits

A. The decisions here are to determine whether a customer can buy on account and, if so, what maximum amount can be charged. In making these decisions, it is critical to recognize that the objective is to maximize profits, not to minimize credit losses. A policy that is too stringent will result in a failure to make sales that would have been paid, resulting in lower losses on accounts receivable but also in lost revenues.

B. When a customer is considered for credit, there are two major approaches to determining whether to grant credit and at what level:
1. Credit-rating service—A number of firms are in the business of assessing the creditworthiness of individuals and businesses, including Equifax, Experian, TransUnion, and Dun and Bradstreet. Reports from these agencies provide considerable information about a potential credit customer, including a score that reflects relative creditworthiness. Such scores can be used both in making the credit decision and in establishing a credit limit. Other sources of information about prospective credit customers include trade associations, banks, and chambers of commerce, among others.

2. Financial analysis—In some cases, a firm may undertake its own analysis of a prospective credit customer. Since this can be an expensive undertaking, it is typically done only by large firms and in special circumstances where the seller wants a more direct understanding of the prospective credit customer. The analysis would rely on information from outside sources but would incorporate the firm's own analysis, including financial ratios developed from the prospect's financial information. Since the consideration is whether to extend short-term credit, the focus of the analysis will be on the prospect's short-term debt-paying ability.

Collecting Accounts Receivable

A. The most significant risk faced in selling on credit is that a sale will be made but not collected. Even with the best of screening processes, a business that sells on account can expect some loss from non-collection. The objective is to keep that post-sale loss to a minimum. To accomplish this, a firm must monitor its accounts receivable and take action where appropriate.
1. Monitoring accounts receivable—Collection management needs to monitor accounts receivable both in the aggregate and individually. Assessment of total accounts receivable is done with averages and ratios, including:
a. Average collection period
b. Days' sales in accounts receivable
c. Accounts receivable turnover
d. Accounts receivable to current or total assets
e. Bad debt to sales
2. Collection action—When accounts are overdue, effective management requires that action be taken, including: prompt "past due" billing; dunning letters (demands for payment) with increasingly serious demands; and use of a collection agency.

International Receivables—Sales on account and other receivables from foreign customers can present special collection issues. International differences in law, culture, and customs may increase uncertainty as to the timing and/or collectibility of amounts due. Those differences call for special consideration when making sales or incurring accounts receivable from foreign customers.

A. Collection in Advance—Generally, collection in advance is not a reasonable expectation when making sales to foreign customers. Often, such customers are not comfortable with prepaying for goods or services not yet received. In addition, prepayment may not be feasible from a cash flow perspective. Therefore, purchases on account are a common and expected option for foreign customers.

B. Open-Account Sales—While sales on account (accounts receivable) are common and generally secure from abuse within certain countries, and especially in the U.S., when used for international sales they can present special collection problems. If payment is not made by a foreign buyer, the domestic seller may face the following problems:
1. Pursuing collection in a foreign country, which may be difficult and costly
2. Dealing with a foreign legal system that may have different standards for such matters
3. There may be an absence of documentation and other support needed to successfully pursue a claim.

C. Mitigating Foreign Collection Problems—When sales are made on credit to foreign buyers, the most secure means of assuring collection is through the use of documentary letters of credit or documentary drafts.
1. These methods protect both the seller and the buyer.
2. These methods require that payment be made based on presentation of documents conveying title, and that specific procedures have been followed.
3. A documentary letter of credit adds a bank's promise to pay the exporter to that of the foreign buyer, based on the exporter (domestic entity) complying with the terms and conditions of the letter of credit.
4. A documentary draft is handled like a check from a foreign buyer, except that title does not transfer to that buyer until the draft is paid.

Example: The benefit obtained would be the reduction in working capital required for carrying average accounts receivable of $30,000, multiplied by the opportunity cost of .15 = $4,500. The cost of the plan would be the reduced cash collected on accounts receivable: the .02 discount times the 40% of customers expected to take the discount (.02 × .40 = .008), times the credit sales, or .008 × $1,000,000 = $8,000. So, the net result would be $4,500 − $8,000 = −$3,500 (a net cost). Although not clearly stated in the problem "facts," the decrease is intended to be in average accounts receivable. As this is an actual AICPA exam question, the wording has been left unchanged.

The collection status of individual accounts receivable can be monitored using an aging of accounts receivable. In an aging of accounts receivable, the amount due for each account is shown in terms of its due date. Typically, the total amount due for each individual account is separated into discrete time periods, such as the amount not yet due and the amounts 1 to 30 days overdue, 31 to 60 days overdue, 61 to 90 days overdue, and over 90 days overdue, but any time periods could be used. Since this information is provided for each customer with an account receivable, those that are overdue are identified, and the seriousness of each delinquent account is known so that appropriate action can be taken.

A number of other measures are useful in monitoring accounts receivable in the aggregate. These approaches commonly focus on a measure of the time that accounts receivable remain uncollected and include, among others:
1. Average collection period, which measures the number of days, on average, it takes to collect accounts receivable—that is, to convert accounts receivable to cash. It is computed as: (Days in Year × Average Accounts Receivable)/Credit Sales for Period.
2. Accounts receivable turnover, which measures the number of times that total accounts receivable are incurred and collected (turned over) during a period. This measure indicates both the quality of credit policies and the efficiency of collection procedures. It is computed as: Net Credit Sales/Average Net Accounts Receivable.
3. Number of days' sales in average accounts receivable, which measures the average number of days required to collect receivables; it is a variation of the average collection period (described above) but uses accounts receivable turnover in the computation. It is computed as: Number of business days in a fiscal year (e.g., 360 or 365)/Accounts Receivable Turnover.
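The aggregate monitoring measures and the discount-policy example above translate directly into arithmetic. In this sketch the average accounts receivable balance is an assumed figure; the discount-policy numbers come from the exam question discussed above.

```python
# Aggregate accounts receivable measures (average AR balance is assumed).
credit_sales = 1_000_000
average_ar = 125_000
days_in_year = 365

ar_turnover = credit_sales / average_ar                           # 8.0 times
avg_collection_period = days_in_year * average_ar / credit_sales  # ~45.6 days
days_sales_in_ar = days_in_year / ar_turnover                     # same ~45.6

# Discount-policy evaluation from the lesson: average AR falls $30,000,
# the opportunity cost is 15%, and 40% of customers take a 2% discount.
benefit = 30_000 * 0.15               # $4,500 of financing cost avoided
cost = 0.02 * 0.40 * credit_sales     # $8,000 of discounts granted
net = benefit - cost                  # -$3,500: the plan reduces profit
print(ar_turnover, round(avg_collection_period, 1), net)
```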

Data Analytics

I. What Is Business Analytics? What Is Business Intelligence?

A. Business analytics is "the science and art of discovering and analyzing patterns, identifying anomalies, and extracting other useful information in data" for application to a business issue or problem (AICPA 2018).
1. Business analytics relies on advanced statistical and mathematical tools and models, including visualization, regression, cluster analysis, network analysis, and machine learning techniques.
2. Examples of business analytics include audit data analytics (ADA), risk analysis and fraud detection, and the analysis of key performance indicators (KPIs).
3. Business analytics extends traditional analytical procedures to analyze new (often much larger) data sets and to apply advanced technologies and models to accounting and auditing problems.

B. Business Intelligence
1. According to Gartner (2018), an IT consulting company, "Business Intelligence (BI) ... includes the applications, infrastructure and tools, and best practices, that enable access to and analysis of information to improve and optimize decisions and performance."
2. The term "business intelligence" is perhaps most closely linked to a specific software product, Microsoft Power BI, which enables users to structure data for the creation of dashboards, worksheets, and stories. Tableau is a very popular competing product. Both are excellent products.

II. Business Analytics Skills, Processes, and Strategies

A. Valued Data Analytics Skills and Tools. These include:
1. An "analytics" mind-set—that is, the ability to think critically and use information to exercise informed professional judgments.
a. Ask good questions, transform data, apply analytic methods, and communicate with stakeholders.
2. Principles of data cleaning, structuring, and information display.
3. Knowledge of data visualization and business analytics software.
4. Ability to use legacy (i.e., mature and declining-use) tools, such as Excel and Access, to transition data for analysis using BI tools.

B. Data Preparation and Cleaning—The ETL Process (a short sketch appears at the end of this lesson)
1. Preparing data for analysis is often summarized as the extract, transform, and load (ETL) process. This consists of:
a. Extract—Get the data from a source. This could be simple, such as opening an Excel file, or complicated, such as writing Python code to "scrape" (pull) data from a website.
b. Transform—Apply rules, functions (e.g., sort), and cleansing operations to a data set. For example, this might include removing duplicate records and fixing errors in names and addresses in a pension system database. Excellent software exists for such tasks (e.g., Alteryx).
c. Load—Move the data to the target system. This can be as simple as uploading a flat file or as complicated as writing code to upload an exabyte (i.e., extremely large) data set to Hadoop (a software platform for extremely large data sets and analyses).
2. Increasingly, the ETL process is automated. Automation often increases the accuracy of the process and the data. With many big data streams, automated ETL is the only feasible solution.

C. Data Reliability
1. Business analytics must be based on reliable (i.e., accurate, true, and fair) data. Influences on data reliability include the nature and source of the data and the process by which it is produced:
a. Nature of the data—Quantitative data are likely to be more precise, but less descriptive, than data that are stated in words.
b. Source of the data—General ledger data from the client's ERP system is likely to be more reliable than data on insider trading of stock futures.
c. Process used to produce the data—Data from an audited accounting system is likely to be more reliable than data from an unreliable accounting system.

Four Categories of Business Analytics

A. Business analytics can be categorized by four questions: What is happening? Why is it happening? What is likely to happen? How should we act in response?
1. What is happening? This is called descriptive analytics.
2. Why did it happen? This is called diagnostic analytics.
3. What is likely to happen? This is called predictive analytics.
4. How should we act? This is called prescriptive analytics.
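As a concrete illustration of the ETL process described above, here is a minimal pandas sketch. The file names and the "name" column are hypothetical, and real pipelines (e.g., automated jobs or Alteryx workflows) would add validation and logging.

```python
import pandas as pd

# Extract: pull the data from a source, here a pipe-delimited flat file.
df = pd.read_csv("pension_records.txt", sep="|")

# Transform: apply cleansing rules, e.g., remove duplicate records and
# standardize a name field (hypothetical column).
df = df.drop_duplicates()
df["name"] = df["name"].str.strip().str.title()

# Load: move the cleaned data to the target system, here a staging CSV.
df.to_csv("pension_records_clean.csv", index=False)
```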

Inflation/Deflation

Inflation—results in higher interest rates.
Deflation—results in lower interest rates.

Under Macroeconomics

Investing: A family has a new home constructed. A business acquires new production equipment. A business acquires inventory to expand its product line.
Not considered investing: An individual acquires shares of common stock. Acquisition of shares of common stock is not included in the definition of investment spending for macroeconomic analysis purposes; such purchases are considered saving.

MAKE SURE TO READ THE QUESTION AND DO WHAT THE QUESTION ASKS

aggregate demand and aggregate supply curves intersect

Potential GDP is the maximum amount of various goods and services an economy can produce at a given time with available technology and full utilization of economic resources. The point at which the aggregate demand and aggregate supply curves intersect is equilibrium—the real output (and price level) for the economy. The real output may be at, above, or below potential GDP (output).

Price ceilings

Price ceilings cause the price of a product to be artificially low, resulting in decreased supply. The price is set below the equilibrium price.

Prime costs

Prime costs are direct labor and direct materials.

Ratio Analysis

Profitability and Return Metrics—These metrics are derived from income statements and measure financial results for a period of time. Many of these metrics can be calculated with data found on externally published financial statements.

A. Gross Margin or Gross Profit—This metric is heavily used by organizations that sell physical goods. The calculation is: Revenue − Cost of goods sold = Gross margin. This commonly used metric reflects profitability prior to the recognition of period expenses, such as selling, general, and administrative expenses. Gross margin is typically found on a GAAP-compliant "absorption" or "full cost" income statement.

B. Contribution Margin—This metric requires an organization to segregate costs by behavior, meaning that costs are identified as either fixed or variable. The calculation is: Revenue − All variable costs = Contribution margin. This approach to measuring financial success is not GAAP-compliant and is used by many organizations for internal purposes. The types of income statements that feature contribution margin are often called variable cost P&Ls or direct costing P&Ls.

Contribution Margin Ratio = Contribution Margin / Sales Revenue
Operating Profit Margin = Operating Income / Sales Revenue
Profit Margin or Return on Sales = Net Income / Net Sales
Return on Investment (ROI) = Net Income / Investment, or Net Income / Total Assets

The DuPont formula for ROI: ROI = Return on Sales (ROS) × Asset Turnover, where:
Profit Margin or ROS = Net Income / Sales
Capital or Asset Turnover = Sales / Total Assets
The DuPont approach separates ROI into two other metrics for analysis. The two metrics offer a separate measure of profitability as a percentage of sales and of the efficiency with which assets were utilized to generate those sales. Multiplying the two metrics together results in ROI.

Residual Income = Operating Income − Required Rate of Return × Invested Capital
Residual income (RI) measures the dollar amount of operating income that exceeds an internal capital charge. This metric is often used to evaluate different segments of a business or different capital expenditures. This metric provides a sense of scale when comparing segments of different sizes; return metrics do not provide this sense of scale.

Asset Utilization
Receivables Turnover = Sales on Account / Average Accounts Receivable
Days' Sales in Receivables or Average Collection Period = Average Accounts Receivable / Average Sales per Day
Inventory Turnover = Cost of Goods Sold / Average Inventory
Fixed Asset Turnover = Sales / Average Net Fixed Assets

Liquidity
Current Ratio = Current Assets / Current Liabilities
Quick Ratio or Acid Test Ratio = (Current Assets − Inventory) / Current Liabilities

Solvency—Solvency ratios measure an organization's ability to survive in the long run. They do this by comparing a company's level of debt against earnings, assets, and equity. Solvency ratios are commonly used by lenders or investors to determine the ability of a company to pay back its debts. The most common measure of solvency is:
(Net after-tax income + Noncash expenses) / (Short-term liabilities + Long-term liabilities)

Debt Utilization (Risk)
Debt to Total Assets = Total Debt / Total Assets
Debt to Equity = Total Debt / Total Owners' Equity
Times Interest Earned = Operating Income / Interest Expense
Market Value Ratios
Price Earnings (P/E) Ratio = Market Price per Share / Earnings per Share
Market-to-Book Ratio = Market Value per Share / Book Value per Share

Additional Ratios
Return on C/S Equity = (Net Income − Preferred Dividends [obligation for the period only]) / Average Common Stockholders' Equity, where average equity = (Beginning + Ending)/2
EPS (Basic) = (Net Income − Preferred Dividends [obligation for the period only]) / Weighted Average Number of Common Shares Outstanding
P/E Ratio (the "Multiple") = Market Price per Common Share / Earnings per (Common) Share (EPS)
C/S Dividend Payout Rate = Cash Dividends to Common Shareholders / Net Income to Common Shareholders
C/S Dividend Payout Rate = Cash Dividends per Common Share / Earnings per Common Share
Common Stock Dividend Yield = Dividends per Common Share / Market Price per Common Share
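These ratios are simple arithmetic once the statement figures are in hand. Here is a short sketch with assumed amounts, showing in particular that the DuPont decomposition multiplies back to ROI.

```python
# Assumed statement figures, purely for illustration.
sales, net_income, total_assets = 500_000, 40_000, 250_000
operating_income, interest_expense = 65_000, 10_000
total_debt, total_equity = 100_000, 150_000

ros = net_income / sales                  # return on sales = 0.08
asset_turnover = sales / total_assets     # 2.0
roi = ros * asset_turnover                # DuPont: 0.16 = NI / total assets
assert abs(roi - net_income / total_assets) < 1e-12

# Residual income with an assumed 12% required rate on invested capital.
residual_income = operating_income - 0.12 * total_assets     # 35,000

times_interest_earned = operating_income / interest_expense  # 6.5
debt_to_assets = total_debt / total_assets                   # 0.4
debt_to_equity = total_debt / total_equity                   # ~0.67
```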

SOX 404

SOX Section 404 (Sarbanes-Oxley Act Section 404) mandates that all publicly traded companies establish internal controls and procedures for financial reporting, and that they document, test, and maintain those controls and procedures to ensure their effectiveness.

IT Functions and Controls Related to People

Segregation of Functions—The functions in each area must be strictly segregated within the IT department. Without proper segregation of these functions, the effectiveness of additional controls is compromised.

Applications Development—This department is responsible for creating new end-user computer applications and for maintaining existing applications.
1. Systems analysts—Responsible for analyzing and designing computer systems. Systems analysts generally lead a team of programmers who complete the actual coding for the system; they also work with end users to define the problem and identify the appropriate solution.
2. Application programmers—Work under the direction of the systems analyst to write the actual programs that process data and produce reports.
3. New program development, and maintenance of existing programs, is completed in a "test" or "sandbox" environment using copies of live data and existing programs, rather than in the "live" system.

Systems Administration and Programming—This department maintains the computer hardware and computing infrastructure and grants access to system resources.
1. System administrators—The database administrator, network administrator, and web administrators are responsible for management activities associated with the systems they control. For example, they grant access to their system resources, usually with usernames and passwords. System administrators, by virtue of the influence they wield, must not be permitted to participate directly in these systems' operations.
2. System programmers—Maintain the various operating systems and related hardware. For example, they are responsible for updating the system for new software releases and installing new hardware. Because their jobs require that they be in direct contact with the production programs and data, it is imperative that they not be permitted to have access to information about application programs or data files.
3. Network managers—Ensure that all applicable devices link to the organization's networks and that the networks operate securely and continuously.
4. Security management—Ensures that all components of the system are secure and protected from all internal and external threats. Responsibilities include security of software and systems and granting appropriate access to systems via user authentication, password setup, and maintenance.
5. Web administrators—Operate and maintain the web servers. (A web server is a software application that uses the hypertext transfer protocol (recognized as http://) to enable the organization's website.)
6. Help desk personnel—Answer help-line calls and emails, resolve user problems, and obtain technical support and vendor support when necessary.

D. Computer Operations—This department is responsible for the day-to-day operations of the computer system, including receipt of batch input to the system, conversion of the data to electronic media, scheduling computer activities, running programs, etc.
1. Data control—This position controls the flow of all documents into and out of computer operations; for batch processing, it schedules batches through data entry and editing, monitors processing, and ensures that batch totals are reconciled. Data control should not access the data, equipment, or programs. This position is called "quality assurance" in some organizations.
Data entry clerk (data conversion operator)—For systems still using manual data entry (which is rare), this function keys (enters) handwritten or printed records to convert them into electronic media; the data entry clerk should not be responsible for reconciling batch totals, should not run programs, access system output, or have any involvement in application development and programming. 3. Computer operators—Responsible for operating the computer: loading program and data files, running the programs, and producing output. Computer operators should not enter data into the system or reconcile control totals for the data they process. (That job belongs to data control.) 4. File librarian—Files and data not online are usually stored in a secure environment called the file library; the file librarian is responsible for maintaining control over the files, checking them in and out only as necessary to support scheduled jobs. The file librarian should not have access to any of the operating equipment or data (unless it has been checked into the library). E. The three key functions (i.e., applications development, systems administration and programming, and computer operations) should be strictly segregated. (This is a bit like the "cannibals and missionaries" problem from computer science and artificial intelligence.) In particular: 1. Computer operators and data entry personnel—Should never be allowed to act as programmers. 2. Systems programmers—Should never have access to application program documentation. 3. Data administrators—Should never have access to computer operations ("live" data). 4. Application programmers and systems analysts—Should not have access to computer operations ("live" data). 5. Application programmers and systems analysts—Should not control access to data, programs, or computer resources. (A minimal compatibility-check sketch of these rules appears at the end of this topic.) Personnel Policies and Procedures—The competence, loyalty, and integrity of employees are among an organization's most valuable assets. Appropriate personnel policies are critical in hiring and retaining quality employees. A. Hiring Practices—Applicants should complete detailed employment applications and formal, in-depth employment interviews before hiring. When appropriate, specific education and experience standards should be imposed and verified. All applicants should undergo thorough background checks, including verification of academic degrees, work experience, and professional certifications, as well as searches for criminal records. B. Performance Evaluation—Employees should be evaluated regularly. The evaluation process should provide clear feedback on the employee's overall performance as well as specific strengths and weaknesses. To the extent that there are weaknesses, it is important to provide guidance on how performance can be improved. C. Employee Handbook—The employee handbook, available to all employees, should state policies related to security and controls, unacceptable conduct, organizational rules and ethics, vacations, overtime, outside employment, emergency procedures, and disciplinary actions for misconduct. D. Competence—COSO requires that "Management ... [should] specify the competence levels for particular jobs and to translate those levels into requisite knowledge and skills. These actions help ensure that competent, but not over-qualified employees serve in appropriate roles with appropriate responsibilities." E.
Firing (Termination)—Clearly, procedures should guide employee departures, regardless of whether the departure is voluntary or involuntary; it is especially important to be careful and thorough when dealing with involuntary terminations of IT personnel who have access to sensitive or proprietary data. In involuntary terminations, the employee's username and keycard should be disabled before notifying the employee of the termination to prevent any attempt to destroy company property. Although this sounds heartless, after notification of an involuntary termination, the terminated employee should be accompanied at all times until escorted out of the building. F. Other Considerations—Recruiting and retaining highly qualified employees is an important determinant of organizational success. Ensuring that an organization has training and development plans, including training in security and controls, is essential both to employee retention and to creating a system of internal control.
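The prohibited role combinations listed above lend themselves to a simple compatibility check. Below is a minimal Python sketch (not from the source material; the role names and rule set are illustrative only) that flags role assignments violating the segregation-of-functions rules.

    # Hypothetical sketch: flag employees whose assigned IT roles violate
    # the segregation-of-functions rules described above. Role names and
    # the rule set are illustrative, not an authoritative control matrix.
    INCOMPATIBLE = {
        ("computer_operator", "programmer"),        # operators must never program
        ("data_entry", "programmer"),               # data entry must never program
        ("systems_programmer", "app_documentation"),
        ("data_administrator", "computer_operations"),
        ("application_programmer", "computer_operations"),
        ("systems_analyst", "computer_operations"),
        ("application_programmer", "access_control"),
        ("systems_analyst", "access_control"),
    }

    def violations(assignments: dict) -> list:
        """Return (employee, role_a, role_b) for each prohibited pairing."""
        found = []
        for employee, roles in assignments.items():
            for a, b in INCOMPATIBLE:
                if a in roles and b in roles:
                    found.append((employee, a, b))
        return found

    staff = {
        "pat": {"computer_operator", "programmer"},  # violates rule 1
        "lee": {"systems_analyst"},                  # fine on its own
    }
    print(violations(staff))  # [('pat', 'computer_operator', 'programmer')]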

The FIFO method (Process Costing)

Solving for the EU of production will depend on which method is being used: weighted average or first in, first out (FIFO). A. The FIFO method uses three categories, separating (1) beginning inventory from (2) the new or current-period completed work (called "units started and finished"), while (3) ending inventory (at least for EU) is treated the same as with weighted average. You should notice several things about the format presented just above (illustration omitted). a. Physical units are the same (in total) regardless of the EU method used. b. Exam questions stating the percentage of completion for the ending inventory EU calculation are likely to be communicated in terms of how much dollar-equivalent work was completed in the current period. However, the percentage of completion for beginning inventory is typically stated in terms of how much dollar-equivalent work was completed in the prior period. This is often confusing to candidates. To make this easier to understand, remember that the FIFO method calculates current-period information separately from prior-period information. As such, FIFO wants to know the EU of work done on the beginning inventory in the current period. That is why you are required to use the complement (100% - 10%) in the calculation of beginning inventory EU. c. Regarding the percentage-of-completion multiplier: For the weighted average method, the goods completed amount will always be 100% complete. For the FIFO method, the units started and finished will always be 100% complete. Ending inventory will be the same equivalent units amount for both methods. Physical units will, of course, be the same regardless of the method used to calculate equivalent units. d. The ultimate goal in calculating equivalent units is to segregate the WIP inventory account between (1) work finished and transferred out and (2) ending WIP inventory. e. A T-account can be used to check your work and to help display how these two pieces of WIP are relevant. We know how the physical units are divided from the 50,000 units available, and we use the EU concept to determine (1) the cost of units transferred out and (2) the value of the ending WIP inventory (T-account omitted). Step 2—Determine the cost per equivalent unit. First determine the total costs to account for; the following costs are usually accumulated during the period: a. Beginning WIP costs—The total costs of production (material, labor, and overhead) that were allocated to the production units during previous periods. b. Transferred-in costs—The costs of production (material, labor, and overhead) from previous departments that flow with the production items from department to department. c. Current-period costs—The transfer costs and costs of production (material, labor, and OH) added to the WIP during the current period. d. Total costs to account for—The total of the beginning WIP costs, the transferred-in costs, and the current-period costs. Total costs must be allocated to ending WIP inventory and to FG inventory at the end of the period. FIFO cost flow—Under the FIFO cost flow assumption, the costs associated with prior-period work on beginning WIP inventory are transferred to FG in their entirety. The current-period (equivalent) unit cost is determined by dividing the current-period costs (including transferred-in costs, if any) by the EU added to production during the current period.
The FIFO method determines equivalent units of production (EUP) based on the work done in the current period, which includes the work necessary to complete beginning work in process (BWIP) and the work performed on the units started in the current period. The units started during the period are either completed or they remain as ending work in process (EWIP); thus, the EUP relative to units started consists of the number of units completed plus the work done on the EWIP. The EUP for FIFO can be calculated in various ways, two of which are presented below.

Method 1: Work to complete BWIP + Units started and completed + Work to date on EWIP = Equivalent units (FIFO)
Method 2: Work to complete BWIP + Units started - Work to complete EWIP = Equivalent units (FIFO)

Supporting Calculations (physical units schedule; EUP in the right column)
Units to account for:
  Beginning WIP                                14,000
  Transferred in from Dept. 1                  76,000
  Total units to account for                   90,000
Units accounted for:                        Physical       EUP
  From beginning WIP (90% complete)            14,000     1,400
  From current production (80,000 - 14,000)    66,000    66,000
  Total units completed                        80,000
  Spoiled                                       1,500     1,500
  Ending WIP (60% complete)                     8,500     5,100
  Total units accounted for                    90,000    74,000
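The schedule above can be verified with a short script. This is a minimal Python sketch using the example's figures; note that spoilage is counted at 100% in the EUP column, as the schedule itself shows.

    # FIFO equivalent units of production (EUP), using the schedule above.
    beginning_wip = 14_000        # 90% complete at the start of the period
    bwip_pct_complete = 0.90
    units_completed = 80_000
    spoiled = 1_500               # counted at 100% in this schedule
    ending_wip = 8_500
    ewip_pct_complete = 0.60

    # Work to complete BWIP + units started and finished + spoilage + work on EWIP
    work_to_complete_bwip = beginning_wip * (1 - bwip_pct_complete)  # 1,400
    started_and_finished = units_completed - beginning_wip           # 66,000
    work_on_ewip = ending_wip * ewip_pct_complete                    # 5,100
    eup = work_to_complete_bwip + started_and_finished + spoiled + work_on_ewip
    print(eup)  # 74000.0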

business analytics

The business strategy, goals, and mission should drive business analytics. Business analytics must support and integrate with company strategy.

Ratios

Total assets = Sales / Asset turnover = $500,000 / 2.5 = $200,000
Pretax profit = (Required rate of return × Total assets) + Residual income = (6% × $200,000) + $5,000 = $17,000
Return on investment = Pretax profit / Total assets = $17,000 / $200,000 = 8.5%
Return on sales = Pretax profit / Sales = $17,000 / $500,000 = 3.4%
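Because the relationships chain together, a few lines of arithmetic reproduce the answers. A minimal Python sketch using the figures above:

    sales = 500_000
    asset_turnover = 2.5
    required_rate = 0.06
    residual_income = 5_000

    total_assets = sales / asset_turnover                            # $200,000
    pretax_profit = required_rate * total_assets + residual_income   # $17,000
    roi = pretax_profit / total_assets                               # 0.085 -> 8.5%
    ros = pretax_profit / sales                                      # 0.034 -> 3.4%
    print(total_assets, pretax_profit, roi, ros)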

closing journal entries

Transfer balances in temporary accounts to retained earnings.

REMINDER

When computing NPV, remember to include the present value of the equipment's residual (salvage) value. When depreciation is involved, add it back to net income (depreciation is not a cash outflow); only the related tax effect changes cash. PROFITABILITY INDEX (PI) CALCULATION: PI = NPV / Initial cost. NPV = $90,000 (given); Initial cost = $800,000 (given); PI = $90,000 / $800,000 = 0.1125.
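A minimal sketch of the PI computation as this note defines it. Be aware that many texts instead define PI as the PV of inflows divided by the initial cost, which would give 1.1125 here; both versions are shown.

    npv = 90_000            # given
    initial_cost = 800_000  # given

    pi = npv / initial_cost
    print(pi)  # 0.1125, per this note's definition (NPV / initial cost)

    # Alternative textbook definition (PV of inflows / initial cost):
    pi_alt = (npv + initial_cost) / initial_cost
    print(pi_alt)  # 1.1125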

Using multiple products for the CM

Wren Co. manufactures and sells two products with selling prices and variable costs as follows:

                     A         B
  Selling price   $18.00    $22.00
  Variable costs   12.00     14.00

Wren's total annual fixed costs are $38,400. Wren sells four units of A for every unit of B. If operating income last year was $28,800, what was the number of units Wren sold? Adding operating income of $28,800 to fixed costs of $38,400 gives a required contribution margin (CM) of $67,200. Unit CM for A = $6, while unit CM for B = $8. Since the ratio of units in the sales mix is 4 parts A to 1 part B, the proper equation is 6(4/5)Q + 8(1/5)Q = $67,200; thus, Q = 10,500.
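The same weighted-contribution-margin approach can be written out directly. A minimal Python sketch reproducing the Wren Co. answer:

    # Weighted-average contribution margin using the 4:1 sales mix.
    cm_a = 18.00 - 12.00         # $6 per unit of A
    cm_b = 22.00 - 14.00         # $8 per unit of B
    mix_a, mix_b = 4 / 5, 1 / 5  # four units of A for every unit of B

    fixed_costs = 38_400
    operating_income = 28_800
    required_cm = fixed_costs + operating_income    # $67,200

    weighted_cm = cm_a * mix_a + cm_b * mix_b       # $6.40 per composite unit
    total_units = required_cm / weighted_cm
    print(total_units)                              # 10500.0
    print(total_units * mix_a, total_units * mix_b) # 8,400 of A; 2,100 of B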

Reminder

You're doing a great job! Keep going. Also, make sure not to round numbers before the end: if an intermediate value is 1.80%, carry the unrounded value through your calculations and round only the final answer.

Big Data

a broad term for datasets so large or complex that traditional data processing applications are inadequate.

data mart

a data collection, smaller than the data warehouse, that addresses the needs of a particular department or functional area of the business

Aggregate Demand

Aggregate Demand Curve—At the macroeconomic (economy) level, demand measures the total spending of individuals, businesses, governmental entities, and net foreign spending on goods and services at different price levels. The demand curve that results from plotting aggregate spending (AD) is negatively sloped (graph omitted). Components of Aggregate Demand—Aggregate demand is the total spending by individual consumers (consumption spending), by businesses on investment goods, by governmental entities, and by foreign entities on net exports. Each is considered in the following subsections. Consumer Spending—Spending on consumable goods accounts for about 70% of total spending (aggregate demand) in the United States, although that share fluctuates. Personal income and the level of taxes on personal income are the most important determinants of consumption spending. Personal income less related income taxes determines individual income available for spending, called "personal disposable income." The relationship between consumption spending (CS) and disposable income (DI) is the consumption function. Graphically, the consumption function can be plotted as a positively sloped curve. Consumption/Saving Ratios—Several ratios are used to measure the relationship between consumption spending and disposable income: 1. Average ratios a. Average propensity to consume (APC): Measures the percentage of disposable income spent on consumption goods. b. Average propensity to save (APS): Measures the percentage of disposable income not spent, but rather saved. c. APC + APS = 1 (the two are complements; together they account for all disposable income) 2. Marginal ratios a. Marginal propensity to consume (MPC): Measures the change in consumption as a percentage of a change in disposable income. b. Marginal propensity to save (MPS): Measures the change in savings as a percentage of a change in disposable income. c. MPC + MPS = 1 (likewise complements: each equals 1 minus the other) Investment A. In the macroeconomic context, investment includes spending on: 1. Residential construction; 2. Nonresidential construction; 3. Business durable equipment; and 4. Business inventory. B. The level of spending on these investment goods is influenced by a number of factors, including: 1. Real interest rate (nominal rate less rate of inflation) 2. Demographics (i.e., make-up of the population) 3. Consumer confidence 4. Consumer income and wealth 5. Government actions (tax rates, tax incentives, governmental spending, etc.) 6. Current vacancy rates 7. Level of capacity utilization 8. Technological advances 9. Current and expected sales levels C. Over time, the most significant of these factors is the interest rate. Higher interest rates are associated with lower levels of investment spending, and lower interest rates are associated with higher levels of investment spending. The graphic representation (an investment demand [ID] curve) shows the negative relationship. Government Spending and Fiscal Policy A. Government spending increases aggregate spending (and demand) in the economy. Much of that spending comes about as a result of the reduced disposable income available to consumers due to taxes imposed to finance government spending. While taxes on income reduce aggregate demand, and government spending and transfer payments (e.g., unemployment payments, social security, etc.)
increase demand, there will not be equal "offsetting" for a period because the two events—government taxing and government spending—are not absolutely interdependent, especially in the short run. B. Consequently, government can directly affect aggregate demand by changing tax receipts, government expenditures, or both. Intentional changes by the government in its tax receipts and/or its spending, implemented in order to increase or decrease aggregate demand in the economy, are called discretionary fiscal policy. A chart summarizing possible fiscal policy initiatives to increase or decrease demand in the economy (ceteris paribus) is omitted here. Net Exports/Imports 1. When net exports are positive (exports greater than imports), aggregate demand is increased. 2. When net exports are negative (exports less than imports), aggregate demand is decreased. Relative levels of income and wealth—The higher the income and wealth, the greater the spending, including on imports. In the U.S., imports tend to increase with increases in income, whereas exports depend on the income of foreign buyers. Relative value of currencies—A weaker currency stimulates exports and makes imports more expensive (and vice versa). Relative price levels—The higher the price level, the more costly goods/services are for foreign buyers. Import and export restrictions and tariffs—The greater the restrictions and tariffs, typically, the lower the level of imports and/or exports. Relative inflationary rates—The higher the inflation rate, the higher the cost of inputs, causing outputs to be more costly and less competitive in the world market. Interest Rate Factor—Generally, the higher the price level, the higher the interest rate. As the interest rate increases, interest-sensitive spending (e.g., new home purchases, business investment, etc.) decreases. Wealth-Level Factor—As price levels (and interest rates) increase, the value of financial assets may decrease. As wealth decreases, so also may spending decrease. Foreign Purchasing Power Factor—As the domestic price level increases, domestic goods become relatively more expensive than foreign goods. Therefore, spending on domestic goods decreases and spending on foreign goods increases. Aggregate Demand Curve Shift 1. Personal taxes (e.g., income taxes) a. Increases in personal taxes reduce personal disposable income and, therefore, reduce aggregate demand. b. Decreases in personal taxes increase personal disposable income and, therefore, increase aggregate demand. 2. Consumer confidence a. Increased confidence that the economy will perform favorably going forward results in consumer willingness to spend on consumer goods and services, thereby increasing aggregate demand. b. Decreased confidence that the economy will perform favorably going forward, or uncertainty about the future of the economy, results in consumers not being willing to spend on consumer goods and services, thereby decreasing aggregate demand. 3. Technological advances a. New technology tends to engender increased spending by consumers and investment by business and government, resulting in increased aggregate demand. b. The lack of new technology tends to result in deferring new investment by business and government, resulting in a decrease in aggregate demand. 4. Corporate taxes (e.g., income taxes, franchise taxes, etc.) a.
Increases in corporate taxes reduce corporate funds available for investment and distribution as dividends to shareholders, which results in decreases in both business demand and shareholder (consumer) demand. b. Decreases in corporate taxes increase the corporate funds available to business for investment and funds available for distribution as dividends to shareholders, both of which tend to increase aggregate demand. 5. Interest rates a. Increases in interest rates increase the cost of capital and borrowing, which results in reduced business investment and reduced consumer spending for durable goods (e.g., automobiles, major appliances, etc.), both resulting in decreased aggregate demand. b. Decreases in interest rates decrease the cost of capital and borrowing, which results in increased business investment and increased consumer spending, both resulting in increased aggregate demand. 6. Government spending a. An increase in government spending increases aggregate demand for goods/services. b. A decrease in government spending decreases aggregate demand for goods/services. 7. Exchange rates/net exports a. A weakening of a country's currency relative to the currencies of other countries will cause the goods of that country to be relatively less expensive, which will cause exports to increase and imports to decrease, both of which increase net exports and increase aggregate demand. b. A strengthening of a country's currency relative to the currencies of other countries will cause the goods of that country to be relatively more expensive, which will cause exports to decrease and imports to increase, both of which decrease net exports and decrease aggregate demand. 8. Wealth changes a. Increases in wealth (e.g., a run-up in stock prices) foster increases in aggregate demand. b. Decreases in wealth foster decreases in aggregate demand. B. Notice that government can act so as to effect increases or decreases in many of these factors (i.e., change tax rates, government spending, etc.). C. Multiplier Effect 1. Factors that cause a shift in aggregate demand have a ripple effect through the whole economy. For example, an increase in investment spending by business results in certain increases in personal disposable income, which further spurs demand. This cascading effect on demand is called "the multiplier effect." Simply put, a change in a single factor that causes a change in aggregate demand will have a multiplied effect on aggregate demand. 2. The multiplier effect is caused by, and can be calculated using, the marginal propensity to consume. Recipients of additional income will spend some portion of that new income—their marginal propensity to consume—which will provide income to others, a portion of which they will spend, and so on. 3. The extent of the multiplier effect can be measured as: Multiplier Effect = Initial Change in Spending × (1/(1 − MPC)) 4. Because MPS is the complement of MPC (MPS = 1 − MPC), the element (1 − MPC) in the above equation is the same as MPS. Thus, the equation can be simplified as: Multiplier Effect = Initial Change in Spending × (1/MPS) Marginal propensity to consume measures the change in consumption spending as a percentage of the change in disposable income. Between Year 1 and Year 2, Roy's spending increased from $90,000 to $150,000, an increase of $60,000. During the same period, his disposable income increased from $100,000 to $200,000, an increase of $100,000. Therefore, Roy's marginal propensity to consume for Year 2 was $60,000/$100,000 = .60.
In order to reach full employment, gross domestic product needs to increase by $0.1 trillion (i.e., $1.3 trillion at full employment − $1.2 trillion current = $0.1 trillion shortfall). Because of the multiplier effect, the additional government expenditure needed to increase gross domestic product by that amount is $20 billion. The formula is: Multiplier Effect = Initial Change in Spending × (1/(1 − MPC)), where Initial Change in Spending = X. Substituting known values (note that $0.1T = $100B): $100B = X × [1/(1 − .8)] = X × [1/.2] = X × 5; therefore X = $100B / 5 = $20B. A $20B increase in government expenditures would result in a $100 billion increase in gross domestic product. If an increase in government purchases of goods and services of $20 billion causes equilibrium GDP to rise by $80 billion, and if total taxes and investment are constant, what is the marginal propensity to consume out of disposable income? The multiplier refers to the fact that an increase in spending has a multiplied effect on GDP, estimated using the same formula: $80B = $20B × [1/(1 − MPC)]; therefore [1/(1 − MPC)] = 4. [Proof: $80B = $20B × 4.] If [1/(1 − MPC)] = 4, and (1 − MPC) = MPS, then (1/MPS) = 4, so MPS = 1/4 = .25 and MPC = 1.00 − .25 = .75.
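Both worked examples follow from the same multiplier formula. A minimal Python sketch:

    def multiplier(mpc: float) -> float:
        """Spending multiplier: 1 / (1 - MPC), i.e., 1 / MPS."""
        return 1 / (1 - mpc)

    # Example 1: spending needed to close a $0.1T GDP gap when MPC = 0.8.
    gdp_gap = 100e9                      # $100 billion
    spending_needed = gdp_gap / multiplier(0.8)
    print(spending_needed)               # 2e10 -> $20B

    # Example 2: implied MPC when $20B of purchases raises GDP by $80B.
    implied_multiplier = 80e9 / 20e9     # 4
    mps = 1 / implied_multiplier         # 0.25
    print(1 - mps)                       # MPC = 0.75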

GDP deflator

(Nominal GDP / Real GDP) × 100

Internal Rate of Return

the discount rate that makes the NPV of an investment zero
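Because IRR is defined implicitly (the rate at which NPV equals zero), it is usually found numerically. A minimal Python sketch using bisection on illustrative cash flows (the figures are hypothetical, not from the source):

    def npv(rate: float, cash_flows: list) -> float:
        """NPV of cash flows, where cash_flows[0] is the time-0 amount."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def irr(cash_flows: list, lo: float = 0.0, hi: float = 1.0) -> float:
        """Bisection search for the rate that makes NPV zero.
        Assumes NPV changes sign between lo and hi."""
        for _ in range(100):
            mid = (lo + hi) / 2
            if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    # Hypothetical project: $15,000 out today, $3,000 in for 10 years.
    flows = [-15_000] + [3_000] * 10
    rate = irr(flows)
    print(round(rate, 3))              # ~0.151 (about 15.1%)
    print(round(npv(rate, flows), 6))  # ~0 by construction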

ERM

(enterprise risk management) The comprehensive process of evaluating, measuring, and mitigating the many risks that pervade an organization. The organization should review its ERM practices to better understand why it misestimated the risks related to the new product.

Bill of Materials

A document that shows the quantity of each type of direct material required to make a product.

data warehouse

A logical collection of information—gathered from many different operational databases—that supports business analysis activities and decision-making tasks. A data warehouse is an approach to online analytical processing that combines data into a subject-oriented, integrated collection of data used to support management decision-making processes.

Breakeven analysis

In a graph with the Y-axis being cost and the X-axis being activity level, total variable cost begins at the origin and is an upward-sloping straight line. The slope of this line is variable cost per unit of activity, which is a constant. If variable cost were not assumed to be constant within the relevant range, breakeven analysis would not be possible; unit variable costs are assumed constant.

Absorption and Direct Costing

Remember when doing these types of questions, make sure to take the per-unit cost of the amount produced and then multiply by the actual units sold. For administrative costs, do not take the admin expense out per unit. Make sure the amounts are listed in order from large to small. Absorption Costing Income Statement: 1. Sales 2. COGS: Product costs (variable costs)—direct materials, direct labor, and variable manufacturing overhead 3. COGS: Fixed costs—fixed manufacturing overhead (take the per-unit cost × total actual units sold) 4. Gross Margin 5. Variable selling and admin expense 6. Fixed selling and admin expense 7. Operating Income. Direct Costing Income Statement: 1. Sales 2. COGS: Product costs (variable costs)—direct materials, direct labor, and variable manufacturing overhead 3. Variable selling and admin expense 4. Contribution Margin 5. Fixed manufacturing OH (take the total; do not take the per-unit cost × total actual units sold) 6. Fixed selling and admin expense 7. Operating Income. To determine the inventory valuation, subtract the 35,000 units sold from the 40,000 units produced to determine that the ending inventory has 5,000 units. The absorption value includes all production costs: per unit, they are $12.00 for direct materials, $2.00 for direct labor, $0.50 for variable overhead, and $1.00 for fixed overhead. The total absorption cost per unit is $15.50; 5,000 units × $15.50 = $77,500.00. Using variable costing, the total cost per unit = $12.00 + $2.00 + $0.50 = $14.50 per unit; 5,000 units × $14.50 = $72,500.00. (Take the per-unit amount before the gross margin or contribution margin line.) Absorption Costing (also known as full costing): Assigns all three factors of production (direct material, direct labor, and both fixed and variable manufacturing overhead) to inventory. Direct Costing (also known as variable costing): Assigns only variable manufacturing costs (direct material, direct labor, and only variable manufacturing overhead) to inventory. 1. Absorption costing is required for external reporting purposes. This is currently true for both external financial reporting and reporting to the IRS. 2. Direct costing is frequently used for internal decision-making but cannot be used for external reporting. 1. Variable manufacturing costs a. Direct material—Materials that are feasibly traceable to the final product b. Direct labor—Wages paid to employees involved in the primary conversion of direct materials to finished goods c. Variable factory overhead—Variable manufacturing costs other than direct material and direct labor (e.g., supplies, utilities, repairs, etc.) 2. Fixed manufacturing costs a. Fixed factory overhead—Fixed manufacturing costs (e.g., depreciation on factory buildings and equipment, manufacturing supervisory salaries and wages, property taxes and insurance on the factory, etc.) 3. Variable selling and administrative costs a. Selling costs—Variable costs associated with selling the good or service (e.g., freight out, sales commissions, etc.) b. Administrative costs—Variable costs associated with the administrative functions of an organization (e.g., office supplies, office utilities, etc.) 4. Fixed selling and administrative costs a. Selling costs—Fixed costs associated with selling the good or service (e.g., sales representatives' salaries, depreciation on sales-related equipment, etc.) b. Administrative costs—Fixed costs associated with the administrative functions of an organization (e.g., officers' salaries; depreciation, property taxes, and insurance on the office building; advertising; etc.) B.
The principal difference between the absorption model and the direct costing model rests on which costs are assigned to products: 1. The absorption model assigns all manufacturing costs to products. 2. The direct model assigns only variable manufacturing costs to products. Absorption Costing Income Statement—The absorption costing income statement lists its product costs, including the fixed manufacturing costs, "above the line" and subtracts the product costs from Sales to calculate Gross Margin. The absorption costing income statement lists costs by whether they are manufacturing or not: all manufacturing costs are listed together and are subtracted from Sales to calculate Gross Margin, and all non-manufacturing costs (operating expenses) are then listed together and are subtracted from Gross Margin to get Operating Income. Direct Costing Income Statement—The direct costing income statement lists costs by behavior (variable or fixed). All variable costs are listed together and are subtracted from Sales to calculate Contribution Margin. All fixed costs are then listed together and are subtracted from Contribution Margin to get Operating Income. Note: Although variable selling and administrative costs are listed along with the variable manufacturing costs (direct material, direct labor, variable manufacturing overhead) and are subtracted from sales to arrive at the contribution margin, the variable selling and administrative costs are not product costs and are not considered part of Cost of Goods Sold. Instead, they are always recognized as a period cost and are completely expensed each period. Under direct costing, fixed manufacturing costs are treated as period costs. All selling and administrative costs are treated as period costs regardless of whether the absorption or variable costing method is used. Effect of Product Costing Model on Operating Income—Absorption costing and direct costing assign different costs to inventory. Since direct costing does not include fixed manufacturing costs as part of product cost, the inventory valuation under absorption costing will always be greater than the inventory valuation under direct costing. From an external reporting point of view, direct costing understates assets on the balance sheet. Income Reconciliation—Explaining the difference in income between absorption costing and variable costing. 1. Because absorption costing and direct costing assign different costs to products, there may be a difference in income reported under the two methods. However, absorption costing and direct costing do not always produce different incomes. When the number of units sold equals the number of units produced, absorption costing and direct costing produce identical incomes. (Note: This assumes that fixed cost per unit remains the same from one period to the next.) 2. The difference between the two measures of income is due to the different treatment of fixed manufacturing costs. Direct costing deducts all fixed manufacturing costs as a lump-sum period cost when calculating income. Absorption costing assigns fixed manufacturing costs to products and therefore only deducts fixed manufacturing costs when the units are sold. 3. Depending on whether the units sold are greater than or less than the units produced, the fixed manufacturing overhead deducted under absorption costing may be greater or less than the fixed manufacturing overhead deducted under direct costing (examples omitted). Absorption costing includes both variable and fixed manufacturing costs as product costs.
Direct costing includes only variable manufacturing costs as product cost and expenses fixed manufacturing costs as a period expense. In this case, absorption costing includes $20,000 of fixed manufacturing costs (1,000 × $20) in ending inventory while direct costing expenses the full amount of fixed manufacturing costs. Pretax income is consequently $20,000 higher for absorption costing. Current ratio = current assets / current liabilities. Return on stockholders' equity = net income / average owners' equity. Absorption costing allocates both variable and fixed manufacturing costs to inventory. Variable costing assigns only variable manufacturing cost to inventory and expenses fixed manufacturing overhead as a period cost. Therefore, ending inventory, and thus current assets, are higher under absorption costing by the amount of fixed overhead allocated to ending inventory. The current ratio under absorption costing is, therefore, higher than under variable costing. Income in the current period is the same under both absorption costing and variable costing because the fixed overhead allocation rate has not changed and ending inventory quantities have not changed. However, total expenses recognized for the life of the firm to date are less under absorption costing than under variable costing by the amount of fixed overhead remaining in the 5,000 units on hand at the end of Year 2. Thus, retained earnings are higher for absorption costing, causing the denominator of return on stockholders' equity to be greater, and finally causing the ratio to be smaller for absorption costing. Absorption costing includes fixed manufacturing costs as part of product costs; direct costing expenses fixed manufacturing costs as a period expense. Because of this, inventory valuation under absorption costing is more than inventory valuation under direct costing. When a firm sells more than it produces, it must use some of its existing inventory. Since absorption costing has a higher inventory valuation, the cost of goods sold under absorption costing will be higher (and income lower) than under direct costing.
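The inventory-valuation difference walks straight from the per-unit costs. A minimal Python sketch using the 40,000-produced / 35,000-sold example above:

    units_produced = 40_000
    units_sold = 35_000
    dm, dl, var_oh, fixed_oh = 12.00, 2.00, 0.50, 1.00  # per-unit costs

    ending_units = units_produced - units_sold           # 5,000 units

    absorption_cost_per_unit = dm + dl + var_oh + fixed_oh  # $15.50
    variable_cost_per_unit = dm + dl + var_oh               # $14.50

    print(ending_units * absorption_cost_per_unit)  # $77,500 absorption
    print(ending_units * variable_cost_per_unit)    # $72,500 variable
    # Income difference = fixed OH deferred in inventory: 5,000 x $1 = $5,000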

At what level of worker employment does NoCo reduce its average sales price in order to sell all units it produces?

When determining where the average sales price begins to fall, compute the average price per unit at the first employment level, then compare it with the average price at each subsequent level; the level at which the per-unit price first declines is where the average sales price is falling. For example: worker 1, 1,000 units, sales of $200,000; worker 2, 1,500 units, sales of $250,000. The average sales price diminishes at worker 2, since the average unit price falls from $200 ($200,000/1,000) to about $167 ($250,000/1,500).
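A minimal Python sketch of the same test, flagging the first employment level at which the average sales price per unit falls (figures from the example above):

    # (workers, cumulative units, cumulative sales dollars) -- from the example
    levels = [(1, 1_000, 200_000), (2, 1_500, 250_000)]

    prev_avg = None
    for workers, units, sales in levels:
        avg_price = sales / units
        if prev_avg is not None and avg_price < prev_avg:
            print(f"Average sales price falls at worker {workers}: "
                  f"${prev_avg:.0f} -> ${avg_price:.0f}")
        prev_avg = avg_price
    # Average sales price falls at worker 2: $200 -> $167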

COSO

Committee of Sponsoring Organizations of the Treadway Commission Framework for enterprise internal controls (control-based approach)

expert system

Computerized advisory programs that imitate the reasoning processes of experts in solving difficult problems

Input and Origination Controls

Input and origination controls—Control over data entry and data origination process Processing and file controls—Controls over processing and files, including the master file update process Output controls—Control over the production of reports Input Controls—(Also known as programmed controls, edit checks, or automated controls.) Introduction—Ensure that the transactions entered into the system meet the following control objectives: 1. Valid—All transactions are appropriately authorized; no fictitious transactions are present; no duplicate transactions are included. 2. Complete—All transactions have been captured; there are no missing transactions. 3. Accurate—All data has been correctly transcribed, all account codes are valid; all data fields are present; all data values are appropriate. Here are some important input controls: 1. Missing data check—The simplest type of test available: checks only to see that something has been entered into the field. 2. Field check (data type/data format check)—Verifies that the data entered is of an acceptable type—alphabetic, numeric, a certain number of characters, etc. 3. Limit test—Checks to see that a numeric field does not exceed a specified value; for example, the number of hours worked per week is not greater than 60. There are several variations of limit tests: a. Range tests—Validate upper and lower limits; for example, the price per gallon cannot be less than $4.00 or greater than $10.00. b. Sign tests—Verify that numeric data has the appropriate sign (positive or negative); for example, the quantity purchased cannot be negative. 4. Valid code test (validity test)—Checks to make sure that each account code entered into the system is a valid (existing) code; this control does not ensure that the code is correct, merely that it exists. a. In a database system, this is called referential integrity (e.g., an important control to prevent the creation of fake entities, vendors, customers, employees). 5. Check digit—Designed to ensure that each account code entered into the system is both valid and correct. The check digit is a number created by applying an arithmetic algorithm to the digits of a number, for example, a customer's account code. The algorithm yields a single digit appended to the end of the code. Whenever the account code (including check digit) is entered, the computer recalculates the check digit and compares the calculated check digit to the digit entered. If the digits fail to match, then there is an error in the code, and processing is halted. a. A highly reliable method for ensuring that the correct code has been entered b. A parity check (from the "Processing, File, and Output Controls" lesson) is one form of a check digit 6. Reasonableness check (logic test)—Checks to see that data in two or more fields is consistent. For example, a rate of pay value of "$3,500" and a pay period value of "hourly" may be valid values for the fields when the fields are viewed independently; however, the combination (an hourly pay rate of $3,500) is not valid. 7. Sequence check—Verifies that all items in a numerical sequence (check numbers, invoice numbers, etc.) are present. This check is the most commonly used control for validating processing completeness. 8. Key verification—The rekeying (i.e., retyping) of critical data in the transaction, followed by a comparison of the two keyings. 
For example, in a batch environment, one operator keys in all of the data for the transactions while a second operator rekeys all of the account codes and amounts. The system compares the results and reports any differences. Key verification is generally found in batch systems but can be used in online real-time environments as well. As a second example, consider the process required to change a password: enter the old password, enter the new password, and then re-enter (i.e., key verify) the new password. This is a wasteful procedure that we all hope dies soon. 9. Closed loop verification—Helps ensure that a valid and correct account code has been entered; after the code is entered, the system looks up and displays additional information about the selected code. For example, the operator enters a customer code, and the system displays the customer's name and address. Available only in online real-time systems. 10. Batch control totals—Manually calculated totals of various fields of the documents in a batch. Batch totals are compared to computer-calculated totals and are used to ensure the accuracy and completeness of data entry. Batch control totals are available, of course, only for batch processing systems or applications. a. Financial totals—Totals of a currency field that result in meaningful totals, such as the dollar amounts of checks. (Note that the total of the hourly rates of pay for all employees, e.g., is not a financial total because the summation has no accounting-system meaning.) b. Hash totals—Totals of a field, usually an account code field, for which the total has no logical meaning, such as a total of customer account numbers in a batch of invoices. c. Record counts—Count of the number of documents in a batch or the number of lines on the documents in a batch. 11. Preprinted forms and preformatted screens—Reduce the likelihood of data entry errors by organizing input data logically: when the position and alignment of data fields on a data entry screen matches the organization of the fields on the source document, data entry is faster and there are fewer errors. 12. Default values—Pre-supplied (pre-filled) data values for a field when that value can be reasonably predicted; for example, when entering sales data, the sales order date is usually the current date; fields using default values generate fewer errors than other fields. 13. Automated data capture—Use of automated equipment, such as bar code scanners, to reduce the amount of manual data entry; reducing human involvement reduces the number of errors in the system.
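To make the check-digit control (item 5 above) concrete, here is a minimal Python sketch of one widely used scheme, the Luhn algorithm (used on payment card numbers). The source does not prescribe a particular algorithm, so this is illustrative only.

    def luhn_check_digit(code: str) -> str:
        """Compute the Luhn check digit for a string of digits."""
        digits = [int(d) for d in code]
        # Double every second digit from the right; subtract 9 if > 9.
        for i in range(len(digits) - 1, -1, -2):
            digits[i] = digits[i] * 2 - 9 if digits[i] * 2 > 9 else digits[i] * 2
        return str((10 - sum(digits) % 10) % 10)

    def is_valid(code_with_digit: str) -> bool:
        """Re-derive the check digit and compare, as the text describes."""
        return luhn_check_digit(code_with_digit[:-1]) == code_with_digit[-1]

    account = "7992739871"
    full = account + luhn_check_digit(account)  # '79927398713'
    print(full, is_valid(full))                 # 79927398713 True
    print(is_valid("79927398710"))              # False -- keying error caught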

Issues at National Level

Sociopolitical Issues A. It is often argued that international trade causes or exacerbates certain domestic social and economic problems, including: 1. Unemployment resulting from the direct or indirect use of "cheap" foreign labor 2. Loss of certain basic manufacturing capabilities 3. Reduction of industries essential to national defense 4. Lack of domestic protection for start-up industries B. Political responses to such concerns often result in protectionism in the form of: 1. Import quotas, which restrict the quantity of goods that can be imported 2. Import tariffs, which tax imported goods and thereby increase their cost 3. Embargoes, which are partial or complete bans on trade (imports/exports) or other commercial activity with one or more other countries 4. Foreign exchange controls, which are government controls on the purchase or sale of foreign currencies by residents and/or on the purchase or sale of the domestic currency by nonresidents, including: a. Barring the use of foreign currencies in the country b. Barring the possession of foreign currencies by residents c. Restricting currency exchange to government-run or government-approved exchanges d. Government-imposed fixed exchange rates C. Protectionism—Such forms of protectionism benefit some parties while harming others: 1. Parties benefited a. Domestic producers—Retain market and can charge higher prices b. Federal government—Obtains revenue through tariffs 2. Parties harmed a. Domestic consumers—Pay higher prices and may have less choice of goods b. Foreign producers—Loss of market D. Such forms of protectionism are generally inappropriate because they are based on economic misconceptions or because there are more appropriate fiscal and monetary policy responses. Balance of trade is the difference between the monetary value of exports and imports, which is part of a country's current account in its balance-of-payments accounts. (See Balance of Payments Issues below.) Trade surplus = Exports > Imports; Trade deficit = Exports < Imports. Dumping Issue A. In the context of international economics, "dumping" is the sale of a product in a foreign market at a price that is either lower than the price charged in the domestic market or lower than the firm's production cost. B. Dumping may have an adverse effect on the producers of the good that is dumped in the country that receives the good because the price charged for dumped goods may be less than the cost of production in the importing country. C. Under World Trade Organization (WTO) policy, dumping is not considered illegal competition unless the importing country can demonstrate the negative effects on domestic producers. D. Importing nations often counter dumping by imposing quotas and/or tariffs on the dumped product, which has the effect of limiting the quantity or increasing the cost of the dumped good. The objective of dumping is to increase market share in a foreign market by driving out competition and thereby create a monopoly situation in which the exporter can unilaterally dictate the price and quality of the product. Balance of Payments Issues A. The U.S. balance of payments is a summary accounting of all U.S. transactions with all other nations for a calendar year. The U.S. reports international activity in three main accounts: 1.
Current account—Reports the dollar value of amounts earned from the export of goods and services, amounts spent on the import of goods and services, income from investments, government grants to foreign entities, and the resulting net (export or import) balance. 2. Capital account—Reports the dollar amount of capital transfers and the acquisition and disposal of non-produced, non-financial assets. Thus, it includes inflows from investments and loans by foreign entities, outflows from investments and loans U.S. entities made abroad, and the resulting net balance. Examples include funds transferred in the purchase or sale of fixed assets, natural resources, and intangible assets. 3. Financial account—Reports the dollar amount of U.S.-owned assets abroad, foreign-owned assets in the United States, and the resulting net balance. It includes both government assets and private assets, and both monetary items (e.g., gold, foreign securities) and non-monetary items (e.g., direct foreign investments in property, plant, and equipment). The import of assets from foreign countries would be accomplished by the transfer of capital from the United States to sellers in foreign countries; that transfer would decrease the capital account, which would reduce the balance of payments for the United States.

Conversion costs

direct labor and manufacturing overhead

Basis risk

is the risk of loss from ineffective hedging activities.

Internal disk labels are physically read by

software

Mode

the most frequently occurring score(s) in a distribution

Price discrimination

As the term implies, price discrimination is a pricing strategy that charges customers in different market segments different prices for the same or largely the same product or service. When a market has distinct segments (i.e., buyers that are of fairly distinct types), suppliers are better able to charge different prices to different buyer types (market segments) for the same or essentially the same good or service. For example, it is common for pharmaceutical companies to charge different prices for the same drug to different geographic market segments. As a consequence, U.S. consumers pay almost twice what Europeans pay for the same drugs.

Tests of controls

Audit procedures performed to test the operating effectiveness of controls in preventing, or detecting and correcting, material misstatements at the relevant assertion level.

More ratios

Average collection period = Days in business year / Accounts receivable turnover = 360 (given) / 9.80 (given) = 36.73 days
Net A/R turnover = Net credit sales / Average net accounts receivable = Net credit sales / [(Beginning net A/R + Ending net A/R) / 2] = $8,940 / [($800 + $880) / 2] = $8,940 / $840 = 10.64 times
Days' sales in A/R = Days in business year / Net accounts receivable turnover = 360 (given) / 10.64 (from above) = 33.83 days
Marginal revenue is the increase in revenue that results from the sale of one additional unit of output.
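A minimal Python sketch chaining these receivable ratios (figures from the example above):

    days_in_year = 360  # given

    net_credit_sales = 8_940
    beginning_net_ar = 800
    ending_net_ar = 880

    avg_net_ar = (beginning_net_ar + ending_net_ar) / 2  # 840
    ar_turnover = net_credit_sales / avg_net_ar          # 10.64 times
    days_sales_in_ar = days_in_year / ar_turnover        # 33.83 days
    print(round(ar_turnover, 2), round(days_sales_in_ar, 2))

    # Average collection period with a given turnover of 9.80:
    print(round(days_in_year / 9.80, 2))                 # 36.73 days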

Sunk cost

Sunk cost is the cost of resources incurred in the past that cannot be changed by current or future decisions.

Market Equilibrium

The supply and demand curves intersect at the market equilibrium price and quantity (illustration omitted).

written response: The operations manager of a company noticed that the number of customer returns steadily increased over the last six months. Upon further investigation, it was determined that many of the returns were caused by poor product quality. You have been engaged as a consultant to assist the company in addressing this problem. In a memo to the operations manager, discuss causes and solutions to assist the company in addressing poor product quality.

In looking at the product quality focus at your company, we see the opportunity to improve total quality management to ensure long-term success through customer satisfaction. As part of this, it is important to consider the total costs of quality and where opportunities exist at your company to reach a higher quality of conformance. Quality of conformance refers to the degree to which a product meets its design specifications and customer expectations. Total quality management addresses both perspectives. When the overall quality of conformance is low, more of the total cost of quality is typically related to cost of failure. The cost of product returns falls under external failure costs. This is the most expensive of the costs of quality. The likely cause of poor product quality is a breakdown in total quality management and not investing adequate resources in the right cost-of-quality components. By investing in a stronger total quality management system focused on the total cost of quality, especially investment in prevention costs, you should see an eventual reduction in your external failure costs. Total quality management requires participation across the organization. It requires customer focus, empowering employees, continuous improvement, and management commitment. It also requires a focus on quality measures to be able to assess, strategically plan, and make sound decisions. A company can measure the total cost of quality as the sum of prevention, appraisal, internal failure, and external failure costs. Prevention costs are quality costs designed to ensure the job is done right the first time, for example, quality engineering and training, supervision and support for prevention activities, quality data gathering, analysis, and reporting, and quality improvement projects. Appraisal costs include testing and inspecting to check for defective products, for example, supplies inspection, in-process goods inspection, and final product inspections as well as maintenance testing. Internal failure costs are costs incurred when product issues are discovered but the product has not yet reached the customer. Examples of internal failure costs include spoilage, rework (product, labor, and overhead), retesting and analysis of the cause, and related downtime. External failure costs are costs incurred due to defects when the product has reached the customer, for example, product returns, allowances, recalls, repairs, replacements, and lost sales due to a reputation for products that do not meet customer expectations. Increasing prevention and appraisal costs is usually followed by decreasing failure costs and an increase in quality of conformance. In general, the cost of prevention is less than both the cost of appraisal and the cost of failure. By investing in more prevention activities as well as appraisal activities, you should see a decline in failure costs with a simultaneous improvement in total quality management. Please let me know if you have any additional questions or would like to continue this discussion. Thank you, Future CPA

The marginal cost of producing the ninth unit is

Marginal cost is the additional cost of producing one more unit. The amount may be obtained by subtracting the total cost of 8 units from the total cost of 9 units: $25.75 = ($33.75 × 9) − ($34.75 × 8), where $33.75 and $34.75 are the average costs per unit at 9 units and 8 units, respectively.

master budget

a presentation of an organization's operational and financial budgets that represents the firm's overall plan of action for a specified time period

COBIT (Control Objectives for Information and related Technology)

A framework of control objectives and good practices for the governance and management of enterprise IT. (Keep going, you've got this!)

Kinked demand curve

An oligopolist faces a kinked demand curve because competitors will often match price decreases but are hesitant to match price increases.

Expected value (EV)

Because it is not always possible to make decisions under conditions of total certainty, decision makers must have a method of determining the best estimate or course of action where uncertainty exists. One method is probability analysis. Probabilities are used to calculate the expected value of each action. The expected value of an action is the weighted average of the payoffs for that action, where the weights are the probabilities of the various mutually exclusive events that may occur.
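A minimal Python sketch of an expected-value computation; the payoffs and probabilities below are hypothetical, not from the source.

    # Expected value = weighted average of payoffs, with weights equal to
    # the probabilities of mutually exclusive events (hypothetical figures).
    outcomes = [
        (0.20, 50_000),   # (probability, payoff)
        (0.50, 20_000),
        (0.30, -10_000),
    ]
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1

    ev = sum(p * payoff for p, payoff in outcomes)
    print(ev)  # 17000.0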

net present value modeling

Net present value modeling assesses projects by comparing the present value of the expected cash flows (revenues or savings) of the project with the initial cash investment in the project. The use of present value accounts for the time value of money, that is, the compounding (discounting) of amounts over time.
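A minimal Python sketch of the comparison the lesson describes, with hypothetical figures:

    def present_value(rate: float, cash_flow: float, period: int) -> float:
        """Discount a single future cash flow back to time 0."""
        return cash_flow / (1 + rate) ** period

    # Hypothetical project: $10,000 invested today, $4,000 saved per year for 3 years.
    rate = 0.10
    initial_investment = 10_000
    inflows = [4_000, 4_000, 4_000]

    pv_inflows = sum(present_value(rate, cf, t)
                     for t, cf in enumerate(inflows, start=1))
    npv = pv_inflows - initial_investment
    print(round(npv, 2))  # -52.59 -> slightly negative; reject at a 10% hurdle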

ratios

Operating cycle = Number of days' supply in inventory + Number of days' sales in accounts receivable. Cash conversion cycle = Number of days' supply in inventory + Number of days' sales in accounts receivable − Number of days' purchases in accounts payable. The number of days is usually 365 (or 360) divided by the applicable turnover rate. Remember to read carefully and make sure you understand exactly what the question is asking and how it is worded (e.g., which day count and which turnover a phrase like "based on" refers to).
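A minimal Python sketch with hypothetical turnover rates:

    def days(turnover: float, days_in_year: int = 360) -> float:
        """Convert a turnover rate into a number of days."""
        return days_in_year / turnover

    # Hypothetical turnover rates:
    inventory_days = days(6.0)    # 60 days' supply in inventory
    receivable_days = days(9.0)   # 40 days' sales in receivables
    payable_days = days(12.0)     # 30 days' purchases in payables

    operating_cycle = inventory_days + receivable_days       # 100 days
    cash_conversion_cycle = operating_cycle - payable_days   # 70 days
    print(operating_cycle, cash_conversion_cycle)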

Question example

READ AND UNDERSTAND WHAT THE QUESTION IS ASKING YOU. Example: A hospital is comparing last year's emergency rescue services expenditures to those from 10 years ago. Last year's expenditures were $100,500. Ten years ago, the expenditures were $72,800. The CPI for last year is 168.5, compared with 121.3 ten years ago. After adjusting for inflation, what percentage change occurred in expenditures for emergency rescue services? Answer: The percentage change in expenditures for emergency rescue services is a 0.6% (i.e., .006) decrease. The change in the CPI value over the 10 years was 168.5 − 121.3 = 47.2. The percentage change in the CPI was 47.2/121.3 = .38911. Therefore, adjusted for inflation, expenditures 10 years ago would have an adjusted value of $72,800 × 1.38911 = $101,127. Since expenditures in current dollars were $100,500, there was an inflation-adjusted decrease of 0.6%: $100,500 − $101,127 = $(627), and $627/$101,127 = .0062 (or 0.6%).
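A minimal Python sketch reproducing the hospital example (it also illustrates the earlier reminder about not rounding intermediate values):

    expenditures_now = 100_500
    expenditures_then = 72_800
    cpi_now, cpi_then = 168.5, 121.3

    # Restate the old expenditures in current (inflated) dollars.
    adjusted_then = expenditures_then * (cpi_now / cpi_then)
    pct_change = (expenditures_now - adjusted_then) / adjusted_then
    print(round(adjusted_then))      # 101128 (the note's $101,127 used a rounded ratio)
    print(round(pct_change, 4))      # -0.0062 -> a 0.6% decrease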

accounting rate of return

The Bread Company is planning to purchase a new machine, which it will depreciate on a straight-line basis over a 10-year period. A full year's depreciation will be taken in the year of acquisition. The machine is expected to produce cash flow from operations, net of income taxes, of $3,000 in each of the 10 years. The accounting (book value) rate of return is expected to be 10% on the initial increase in required investment. What will the new machine cost? The accounting rate of return equals accounting net income divided by book value, and the book value of the new machine is its cost. The $3,000 cash flow net of income taxes does not reflect the straight-line depreciation. The solutions approach is to set up a formula in which cost equals ($3,000 minus depreciation, which is 10% of cost) divided by the 10% rate of return; the numerator is the expected increase in accounting income and the denominator is the capitalization rate, 10%. Solving the formula indicates that the cost of the machine is $15,000: Cost = ($3,000 − .10 Cost) / .10; therefore .10 Cost = $3,000 − .10 Cost; .20 Cost = $3,000; Cost = $15,000. The accounting rate of return measures the expected annual incremental accounting income from a project as a percentage of the initial (or average) investment in the project. Since it uses accounting income, it takes into account depreciation expense in computing the annual incremental income.
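The algebra can be verified in a couple of lines. A minimal Python sketch:

    cost = 15_000
    annual_cash_flow = 3_000
    life_years = 10

    depreciation = cost / life_years                     # straight-line: $1,500/yr
    accounting_income = annual_cash_flow - depreciation  # $1,500
    arr = accounting_income / cost                       # on initial investment
    print(arr)  # 0.1 -> the required 10% accounting rate of return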

annual cost of carrying inventory

The annual cost of carrying inventory is the average inventory level times the cost per unit of inventory times the cost of capital. With annual demand of 1,500 units ordered in 500-unit increments, it is calculated as follows: Average inventory level × Unit cost × Cost of capital = (Order size / 2) × $5 × 0.12 = (500 / 2) × $5 × 0.12 = $150.

written response: At a monthly staff planning meeting, the outlook for the national economy was a topic of discussion. During the discussion, one participant noted that gross domestic product (GDP) was expected to increase only slightly for the coming year. Another commented that a slight increase in the expected GDP did not necessarily mean an increase in business activity or output. Still another participant asked about the role of GDP per capita as it relates to planning. The partner-in-charge of the meeting asked that, prior to the next planning meeting, you prepare and distribute a brief memorandum defining and describing (1) GDP, (2) real GDP, and (3) GDP per capita, and how each may be used for analytical purposes. Type your communication below the line in the response area below.

This memorandum is to provide information about the three measures of gross domestic product identified at the last staff planning meeting. Those measures are: (1) nominal gross domestic product, (2) real gross domestic product, and (3) gross domestic product per capita. Each measure will be defined and the role of each for analytical purposes will be described. Nominal gross domestic product (GDP) measures in current prices the total output of final goods and services produced within a country during a period for exchange in the market. It does not include (1) the value of goods which require additional processing, (2) the value of activities which do not go through standard markets, or (3) goods or services produced by domestic entities outside the country. Importantly, nominal GDP does not adjust for changes in prices that occur over time. For analytical purposes, nominal GDP is useful as a measure of the level of current output, but because it is not adjusted for changing prices, it does not provide a useful measure of the level of output over time. For example, if nominal GDP increases only slightly, that increase could be due to either an increase in output or an increase in prices, or a combination of the two factors. Real GDP measures the same total output as nominal GDP, but adjusts for changes in prices using a price index. The result is GDP in terms of prices that existed at a specific prior base period, a measure also referred to as GDP at constant prices. Therefore, any change in real GDP is solely the result of a change in output during the period. Real GDP is particularly useful in analyzing changes in domestic output over time. GDP per capita measures GDP per individual within the country. While it can be calculated for nominal GDP, it is most commonly calculated for real GDP. The calculation would be accomplished by dividing real GDP by the population of the country. Real GDP per capita is a measure of the standard of living in a country and is useful in making comparisons of the standard of living among countries. Please let me know if you need additional information about these measures of economic output. I.M. Candidate

demand

Which one of the following would not cause an increase in demand for a commodity? A reduction in price will not cause an increase in demand for a commodity, but rather will change (increase) the quantity demanded. An increase in demand causes a shift of the demand curve (up and to the right). A change in price causes movement along a specific demand curve.

Required (minimum) rate

RFR = Risk-free rate (3%); B = Beta (1.20); ERR = Expected rate of return for the benchmark (entire asset class) (12%). Using the values provided in the facts, the calculation is: Required (minimum) rate = .03 + [1.20 × (.12 − .03)] = .03 + [1.20 × .09] = .03 + .108 = .138. The 1.20 beta indicates that the firm under consideration has greater volatility (risk) than the entire asset class of which it is a part. As a consequence, the return required of that firm to compensate for the higher risk (13.8%) is greater than the return of the entire asset class (12%).
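This is the capital asset pricing model (CAPM); a minimal Python sketch with the given inputs:

```python
# CAPM: required rate = risk-free rate + beta * (benchmark return - risk-free rate)
risk_free = 0.03      # RFR
beta = 1.20           # B
benchmark_ret = 0.12  # ERR for the entire asset class

required = risk_free + beta * (benchmark_ret - risk_free)
print(round(required, 3))  # 0.138 -> 13.8%, above the 12% benchmark because beta > 1
```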

Data Governance and Data Management

The Business and Its Data A. Because of the data analytics revolution, data management and governance have emerged as key enablers of business success. The goal of these efforts is to enable turning a data lake (i.e., an unfiltered pool of big data) into a data warehouse (i.e., a structured, filtered data repository for solving business problems). Stage 1—Establish a Data Governance Foundation A. Creating a foundation for data governance enables addressing the legal, intellectual property, and customer privacy and security issues that arise with data use and storage. Management of ZA (the example company used in this lesson) expects the following benefits from establishing a foundation for data governance: 1. Define and manage data as an asset. 2. Define data ownership, stewardship, and custodianship roles and responsibilities. 3. Implement data governance. B. Data governance will be designed to answer the following questions: 1. WHAT data does ZA have, need, and use? 2. WHEN do data governance practices apply in the ZA data life cycle? 3. WHO is responsible for ZA's data governance? 4. HOW will data be managed in ZA? C. WHAT—Data Classification and Data Taxonomy 1. Data classification defines the privacy and security properties of data. For example, data may be classified as public, internal, confidential, or sensitive. Complete data classification would also consider applicable laws and regulations (e.g., Health Insurance Portability and Accountability Act [HIPAA] requirements, and the General Data Protection Regulation [GDPR]—the European Union's general privacy and security law). 2. The data taxonomy categorizes the data within the organization's structure and hierarchy. D. WHEN—The Data Life Cycle—Mapping Data Governance Activities 1. The data life cycle describes the steps in managing and preserving data for use and reuse; the model is similar to the system development life cycle in the "System Development and Implementation" lesson. By standardizing their use of the data life cycle model, organizations increase the likelihood that data will be usable and long-lived. E. WHO—The Data Governance Structure and Data Stewardship ZA's growth through acquisitions has created inconsistent business processes and multiple, unconnected datasets. Because of this, ZA created an oversight data governance organizational structure to direct, evaluate, and monitor data governance issues. This organizational structure helps ensure that data assets are complete, accurate, and in compliance with internal policies and external regulations. This governance structure consists of a set of data governance committees. F. As a part of this governance structure, ZA defined three key data roles: 1. Data owner—A senior-level, strategic oversight role a. Responsible for major data decisions and the overall value, risk, quality, and utility of data. 2. Data steward—A tactical role a. Ensures that data assets are used and compliant. Facilitates consensus about data definitions, quality, and use. b. May be an individual or a group. 3. Data custodian—An IT operational role a. Ensures that data-related IT controls are implemented and operating. b. Implements IT capabilities and manages the IT architecture. c. May be an individual or a group. 4. A RACI chart can illustrate the data stewardship roles of the data owner, steward, and custodian across the data life cycle. a. RACI is an acronym that stands for: i. Responsible—Does the work to complete the task. ii.
Accountable—Delegates the work and is the last one to review the task or deliverable before completion. iii. Consulted—Strengthens deliverables through review and consultation by multiple team members. iv. Informed—Kept informed of project progress. G. HOW—Data Governance Policies and Standards 1. Assessing an organization's data-related risks is important to developing effective data governance. (See the Enterprise Risk Management Frameworks module for more about assessing risk.) This assessment includes identifying the data-relevant laws and regulations with which the organization must comply. Stage 2—Establish and Evolve the Data Architecture A. Stage 2 discusses the data standardization that must occur to facilitate the data architecture. Data architecture describes "the structure and interaction of the major types and sources of data, logical data assets, physical data assets and data management resources of the enterprise" (ISACA 2020). 1. A logical data asset model shows the data at the level of business requirements. For example, in an accounts receivable system, a logical data model would show the entities (e.g., customers, products, sales prices, sales orders, sales transactions) and their relationships (e.g., customers can have multiple sales orders; each product has only one sales price). 2. A physical data asset model shows how the data are stored in the organization's accounting system. For example, the sales price field is stored in the Sales Lookup Table in US $ as a real number with eight digits, including two decimal places. C. Data Standardization Requirements 1. Harvested data is often messy or polluted. Cleaning this data requires an ETL (extract, transform, load) process. (See the "Data Analytics" lesson for a description of data cleaning.) 2. This process has two goals: a. To clean and standardize data for use and reuse b. To standardize the data management process to achieve greater efficiency and data quality D. Data Models to Be Standardized 1. A typical process of standardizing data models recognizes three levels of data: a. Conceptual—A high-level, abstract, enterprise-wide view b. Logical—A level that adds details to the conceptual level to more completely describe the business requirements for the data c. Physical—The level that specifies how data will be encoded and stored in a database (e.g., SQL and NoSQL) and considers issues of processing speed, accessibility, and distribution (e.g., cloud versus local storage) E. Establish and Standardize Metadata and Master Data 1. ZA's data governance is complicated by having datasets that were created in the multiple predecessor companies that ZA acquired. Because of this, ZA must engage in data mapping: converting data from multiple previous systems into a standardized data map that will be used for the enterprise-wide data architecture. 2. The data map specifies how the old data sets will be converted to the new, standardized, enterprise-wide data structure. 3. Master data is the core data that uniquely identify entities such as customers, suppliers, employees, products, and services. Master data is stable; it changes infrequently. 4. Metadata is described in the "Big Data" lesson as "data about data." F. Publish and Apply the Data Standards 1. The enterprise-wide data standards are encoded in the data dictionary, which "is a central repository where detailed data definitions can be found as the single source of trust" (ISACA 2020). VI. Stage 3—Define, Execute, and Assure Data Quality and Clean Polluted Data A.
Good Metadata Strategy Leads to Good Data Quality 1. After creating standards for data classification and taxonomy, the organization can create a metadata strategy that ensures high-quality, reusable data. 2. The metadata strategy is often best focused on the organization's data lake or data warehouse, since this is where most of the shared data resides. B. Define Data Quality Criteria—Three General Categories 1. Next, the organization must specify which attributes of data quality matter and why. The ISACA COBIT models (5 and 2019) include three broad categories of information quality: a. Intrinsic—The extent to which data values conform with actual or true values b. Contextual—The extent to which information is relevant and understandable to the task for which it is collected c. Security/Accessibility—Controls over information availability and accessibility C. Execute Data Quality 1. Governing ongoing data quality is a joint project of business units and IT. IT manages the technical environment. Business units establish the rules and are ultimately responsible for data quality. D. Regular Data Quality Assessment 1. Ongoing and periodic assessments of data quality are an application of the principles found in the COSO Internal Control Monitoring Purpose and Terminology and Internal Control Monitoring and Change Control Processes lessons. VII. Stage 4—Realize Data Democratization A. What is data democratization? In the ZA data governance example, much of the process of identifying and standardizing databases and data management processes occurs in committees that include IT and the business units. However, we clean and standardize data in order to make it available to users. Data democratization is the process of creating a single-source, searchable, curated database that is shared across the organization. B. Security and privacy are fundamental to data democratization. Views of the data are managed to limit access to data subsets, as appropriate to a user's role and associated permissions. VIII. Stage 5—Focus on Data Analytics A. The primary purpose of data governance is to enable data analytics, which is discussed in the Data Analytics module. Correct. The primary and foreign keys that are used in the specific database in which the data model is implemented are properties of the physical data model. The physical data asset model shows how the data are stored in the organization's accounting system.
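To make the stewardship roles and the physical data model concrete, here is a hypothetical Python sketch of a single data dictionary entry; every name and value in it is invented for illustration and is not taken from any actual ZA system:

```python
# One hypothetical data dictionary entry: classification, stewardship
# roles, and physical storage details for a single field.
sales_price_entry = {
    "field": "sales_price",
    "classification": "internal",       # public / internal / confidential / sensitive
    "data_owner": "VP, Sales",          # strategic oversight of value, risk, quality
    "data_steward": "Sales Ops group",  # tactical: definitions, quality, use
    "data_custodian": "IT DBA group",   # operational: IT controls and architecture
    "physical": {                       # physical data asset model details
        "table": "SalesLookup",
        "type": "DECIMAL(8,2)",         # eight digits including two decimal places
        "currency": "USD",
    },
}
print(sales_price_entry["physical"]["type"])  # DECIMAL(8,2)
```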

The COBIT Model of IT Governance and Management

The Control Objectives for Information and Related Technology (COBIT) Framework A. Introduction 1. Although there are many available models for IT governance, COBIT is a widely used international standard for identifying best practices in IT security and control. COBIT provides management with an information technology (IT) governance model that helps in delivering value from IT processes and in understanding and managing the risks associated with IT. In addition, COBIT provides a framework that helps align IT with organizational governance. 2. COBIT bridges the gaps between strategic business requirements, accounting control needs, and the delivery of supporting IT. COBIT facilitates IT governance and helps ensure the integrity of information and information systems. 3. COBIT is consistent with, and complements, the control definitions and processes articulated in the COSO and COSO ERM models. The most important differences between the COSO and COSO ERM models and COBIT are their intended audiences and scope. The COSO and COSO ERM models provide a common internal control language for use by management, boards of directors, and internal and external auditors. In contrast, COBIT focuses on IT controls and is intended for use by IT managers, IT professionals, and internal and external auditors. 4. The COBIT framework is organized around the following components: a. Domains and processes—The IT function is divided into four domains within which 34 basic IT processes reside: i. Planning and organization—How can IT best contribute to business objectives? Establish a strategic vision for IT. Develop tactics to plan, communicate, and realize the strategic vision. ii. Acquisition and implementation—How can we acquire, implement, or develop IT solutions that address business objectives and integrate with critical business processes? iii. Delivery and support—How can we best deliver required IT services, including operations, security, and training? iv. Monitoring—How can we best periodically assess IT quality and compliance with control requirements? Monitoring IT processes are identified as particularly relevant for the CPA Exam. The COBIT model identifies four interrelated monitoring processes: 1. M1. Monitor and evaluate IT performance—Establish a monitoring approach, including metrics, a reporting process, and a means to identify and correct deficiencies. 2. M2. Monitor and evaluate internal control—This is required by the Sarbanes-Oxley Act (SOX) Section 404. 3. M3. Ensure regulatory compliance—Identify compliance requirements and evaluate, and report on, the extent of compliance with these requirements. 4. M4. Provide IT governance—Establish an IT governance framework that aligns with the organization's strategy and value delivery program. b. Effective IT performance management requires a monitoring process. This process includes the following: i. Information criteria—To have value to the organization, data must have the following properties or attributes: 1. Effectiveness 2. Efficiency 3. Confidentiality 4. Integrity 5. Availability 6. Compliance 7. Reliability ii. IT resources—Identify the physical resources that comprise the IT system: 1. People 2. Applications 3. Technology 4. Facilities 5. Data c. More than 300 generic COBIT control objectives are associated with the 34 basic IT processes identified in COBIT. The COBIT model, the components mentioned above, and the 34 basic IT processes are summarized in a figure (not reproduced here). d.
Within the figure, items M1 to M4 are the processes related to monitoring, items PO1 to PO11 are the processes related to planning and organization, and so on. www.youtube.com/watch?v=bg_GEN8AZA0

Written response: Management is considering whether to process a current product further and sell it for a higher selling price. There is market demand for both the current product and the proposed further-processed product. An operations manager (Jim) suggests that the amount of joint costs should be considered in making the decision. Explain to Jim in an e-mail why joint costs do not need to be considered in making this decision.

Joint costs are the result of a single manufacturing process that yields multiple products. Two or more products of significant sales value are said to be joint products when they are produced from the same set of raw materials and are not separately identifiable until a split-off point. Considering the amount of joint costs incurred is never relevant to the decision to sell or further process a product. This is because the joint costs are sunk costs. This means that the costs have already been incurred - an event that cannot be changed. The definition of a relevant cost is a future cost that differs between decision alternatives. Joint costs are always incurred in the past and will not change whether the resulting decision is to sell a product as-is or to process the product further. Although joint costs are important, once they are incurred, they will never have a bearing on whether to sell as-is or process further from that point. If you have further questions, please reach out to me to discuss. Thank you, Future CPA
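A minimal Python sketch of the decision rule in this memo, with hypothetical figures; note that the joint cost never enters the computation:

```python
# Sell as-is vs. process further: compare only future amounts that differ
# between the alternatives. All figures are hypothetical.
joint_cost = 40_000        # sunk -- already incurred, irrelevant to the decision
sell_as_is_price = 25_000
processed_price = 34_000
separable_cost = 6_000     # additional (future) cost of further processing

incremental_benefit = (processed_price - sell_as_is_price) - separable_cost
print(incremental_benefit)  # 3000 > 0, so processing further is preferred
# Changing joint_cost to any value leaves this result unchanged.
```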

Fraud Risk Management

Principle 1—Control Environment 1. The organization establishes and communicates a fraud risk management program that demonstrates the expectations of the board of directors and senior management and their commitment to high integrity and ethical values regarding managing fraud risk. 2. Focal points include: a. Map the fraud risk program to the organization's goals and risks. b. Establish fraud risk governance roles and responsibilities throughout the organization. c. Document the program and communicate it throughout the organization. Principle 2—Risk Assessment 1. The organization performs comprehensive fraud risk assessments to identify specific fraud schemes and risks, assess their likelihood and significance, evaluate existing fraud control activities, and implement actions to mitigate residual fraud risks. 2. Focal points for the risk assessment include: a. Managing the risk assessment process: i. Involve appropriate management, including all organizational management levels and functions. ii. Use data analytics to assess risks and evaluate responses. iii. Periodically reassess fraud risk. iv. Document the risk assessment. b. Risk assessments: i. Analyze internal (i.e., types of activities) and external (customers, vendors, environment) risks. ii. Consider the risks of distinct types of fraud (see the four categories of fraud discussed earlier). iii. Consider the risk of management override of controls. iv. Estimate the likelihood and significance of identified risks. v. Assess personnel and departments in relation to the fraud triangle (opportunity, incentives and pressure, attitudes or rationalizations). c. Fraud controls and their effectiveness: i. Identify existing fraud controls and their effectiveness. ii. Determine risk responses. Principle 3—Control Activities 1. The organization selects, develops, and deploys preventive and detective fraud control activities to mitigate the risk of fraud events occurring or not being detected in a timely manner. a. Focal points: i. Promote fraud deterrence through preventive and detective controls. ii. Control activities should consider (a) Organization- and industry-specific factors. (b) Applying controls to differing organizational levels. (c) The risk of management override of controls. (d) Integration with fraud risk assessments. (e) Multiple, synergistic fraud control activities (e.g., a defense-in-depth strategy). (f) Proactive data analytics procedures, such as identification of anomalous transactions. (g) Control through policies and procedures. Principle 4—Information and Communication 1. The organization establishes a communication process to obtain information about potential fraud and deploys a coordinated approach to investigation and corrective action to address fraud. 2. Focal points: a. Establish fraud investigation and response protocols. b. Conduct and document investigations. c. Communicate investigation results. d. Implement corrective actions. e. Evaluate investigation performance. Principle 5—Monitoring Activities 1. The organization selects, develops, and performs ongoing evaluations to ascertain whether the five principles of fraud risk management are present and functioning, and communicates fraud risk management program deficiencies in a timely manner to parties responsible for taking corrective action, including senior management and the board of directors. 2. Focal points: a. Consider: i. Ongoing and separate evaluations. ii. Influences on the scope and frequency of monitoring (e.g., changing fraud risks, personnel changes). iii. Known and emerging fraud cases. b.
Establish appropriate management criteria. c. Evaluate, communicate, and remediate deficiencies identified through monitoring. Managing fraud risk through HR procedures—Many HR procedures help manage fraud risk, including: a. Background, credit, and criminal checks (where allowed by law)—of employees, suppliers, and business partners b. Fraud risk management training—to identify and manage entity- and industry-specific risks (e.g., in financial services) c. Evaluating performance and compensation programs—e.g., do bonus programs incentivize fraud risks by offering large, short-term bonuses for sales or earnings targets? d. Annual employee surveys—including assessments of ethical tone, observed misconduct, and knowledge of how to report concerns or misbehavior e. Exit interviews—including discussion of possible fraud and misconduct in the organization f. Segregation of duties—discussed in the "Fraud Risk Management" lesson g. Transaction-level controls—discussed in the "Logical and Physical Access Controls" module (e.g., data entry tests, authorization approvals) h. Implementation of a whistleblower system—mandated for SEC registrants by the SOX Act of 2002 Data Analytics Tools to Support Fraud Risk Management 1. Data stratification—Sort or categorize data, including payments, journal entries, surveys, or employee data. 2. Risk scoring—Weight, aggregate, and compare fraud risk factors. 3. Data visualization—Detect changes and trends, e.g., a fraud risk assessment heat map, a fraud dashboard. 4. Trend analysis—Analyze data over time and across locations (e.g., ratio analysis over time or across locations). 5. Fluctuation analysis—Detect anomalies (e.g., unusual transactions, missing but expected transactions). 6. Statistical analysis and predictive modeling—Often used with continuous auditing and monitoring systems. 7. Integrating external data sources—e.g., emerging fraud risks, industry trends, regulatory actions, economic indicators (e.g., the Consumer Price Index).
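As one illustration, risk scoring (item 2) can be as simple as a weighted aggregation of fraud-triangle factor ratings. This is a hypothetical Python sketch; the weights, departments, and ratings are all made up:

```python
# Hypothetical fraud risk scoring: weight and aggregate factor ratings
# (1 = low risk, 5 = high risk) per department, then rank.
weights = {"opportunity": 0.5, "incentive": 0.3, "rationalization": 0.2}

ratings = {
    "Purchasing": {"opportunity": 5, "incentive": 4, "rationalization": 3},
    "Payables":   {"opportunity": 4, "incentive": 3, "rationalization": 2},
    "Payroll":    {"opportunity": 2, "incentive": 2, "rationalization": 1},
}

scores = {
    dept: sum(weights[f] * r for f, r in factors.items())
    for dept, factors in ratings.items()
}
for dept, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{dept}: {score:.1f}")  # highest composite risk first
```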

Written Response: You are a finance manager at an investment firm. The CEO has asked you to help train newly hired staff by explaining the difference between leading, lagging, and coincident economic indicators. Prepare a memo to the new employees discussing the different kinds of indicators, and provide examples of each type of indicator.

TO: New employees RE: Leading, lagging, and coincident economic indicators This memo provides information concerning the different kinds of economic indicators and examples of each type. Economic indicators provide information about the relationships between different types of economic activity. These indicators enable analysis of past economic performance and the making of predictions about future economic performance. One of the most common applications of economic indicators is in the study of business cycles, which are the cumulative fluctuations (up and down) in aggregate real gross domestic product that generally last for two or more years. Economic indicators used in the study of the business cycle are grouped into three categories according to their usual timing in relation to the business cycle. Those categories are leading indicators, lagging indicators, and coincident indicators. The most closely followed and reliable indicators in these categories are provided by the Federal Reserve System, the Conference Board, and the Bureau of Labor Statistics. Each category will be described and examples of each category will be provided. Leading economic indicators are measures of economic activity that generally change before a change in the business cycle. These indicators are useful as short-term predictors of aggregate economic activity. Leading indicators include such measures as consumer expectations, initial claims for unemployment, weekly manufacturing hours, stock prices, building permits, new orders for consumer goods and for manufactured capital goods, and the level of the real money supply. Measures of economic activity associated with changes in the business cycle, but that occur after changes in the business cycle, are called lagging or trailing indicators. These lagging indicators are used primarily to confirm elements of business cycle timing and magnitude. Lagging indicators include changes in labor cost per unit of output, the ratio of inventories to sales, the duration of unemployment, the dollar value of commercial loans outstanding, and the ratio of consumer installment credit to personal income. Coincident indicators are measures of economic activity that change at approximately the same time as the economy as a whole and provide information about the current state of the economy. These indicators also may be used to help retrospectively identify the timing of peaks and troughs of the business cycle. Coincident indicators include the number of employees on nonagricultural payrolls, the level of industrial production, the current unemployment rate, and the level of retail sales. I hope this memo provides a basic understanding of the types of business cycle economic indicators and the differences among them.

Which of the following most likely represents a significant deficiency in internal control?

The systems programmer designs systems for computerized applications and maintains output controls. This answer is correct. The systems programmer should not maintain custody of output in a computerized system. At a minimum, the programming, operating, and library functions should be segregated in such computer systems.

internal rate of return

This answer is correct. The formula for determining the internal rate of return (IRR) is: Annual cash inflow (or savings) × PV factor = Investment cost, or, rearranged, PV factor = Investment cost / Annual cash inflow (or savings). Once that PV factor is determined, the related interest (discount) rate for the time period of the project is the IRR. The lower the PV factor, the higher the IRR; conversely, the higher the PV factor, the lower the IRR. So, to decrease the IRR (the question requirement), you would select the change that increases the PV factor, which would involve either increasing the investment cost (numerator) or decreasing the annual cash inflow (denominator). Decreasing tax credits on the asset would increase the cost of the investment; you lose the benefit of the higher tax credit, so the net cost is higher. Since the cost of the investment is the numerator, dividing the same denominator into that higher amount would increase the PV factor and decrease the IRR. The internal rate of return method (IRR—also called the time-adjusted rate of return) evaluates a project by determining the discount rate that equates the present value of the project's future cash inflows with the present value of the project's cash outflows. The rate so determined is the rate of return earned by the project. The IRR uses both present value and cash flows. IRR does have limitations when evaluating mutually exclusive investments.
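A minimal Python sketch of this PV-factor approach for a level annual inflow: it searches, by bisection, for the discount rate whose annuity PV factor equals investment cost divided by annual inflow. All figures are hypothetical:

```python
# IRR for a level annual inflow: find the rate at which the PV annuity
# factor equals investment cost / annual inflow. Figures are made up.
def annuity_pv_factor(rate: float, years: int) -> float:
    return sum(1 / (1 + rate) ** t for t in range(1, years + 1))

investment, annual_inflow, years = 31_700, 10_000, 4
target_factor = investment / annual_inflow  # 3.1700

lo, hi = 1e-6, 1.0
for _ in range(60):                # bisection: the factor falls as the rate rises
    mid = (lo + hi) / 2
    if annuity_pv_factor(mid, years) > target_factor:
        lo = mid                   # factor still too high -> rate must be higher
    else:
        hi = mid
irr = (lo + hi) / 2
print(round(irr, 4))               # ~0.10 -> IRR of about 10%
```

Raising `investment` (or lowering `annual_inflow`) raises `target_factor` and drives the computed rate down, matching the reasoning above.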

Written response: A newly formed corporation, Friendly Factory, has brought you in as a consultant while it works to raise capital. Knowing the importance of the corporation's cost of capital, you are advising management under the following assumptions: Friendly Factory is looking to raise $2,000,000 in capital to acquire a building and machinery in order to conduct business. The first $1,200,000 will be raised through the issuance of stock (12,000 shares at $100). The shareholders' expected return is 6%. The other $800,000 will be raised through debt financing. Friendly Factory will sell 800 bonds for $1,000 each. The expected return is 5%. Tax rate = 35%. Write a memo to management, describing the importance of a company's weighted average cost of capital and analyzing Friendly Factory's weighted average cost of capital, given this information. Also, describe different factors that may lower (or raise) the cost of capital.

This memorandum will describe the importance of the weighted average cost of capital in determining a firm's financing. We will further analyze the weighted average cost of capital for the sources of capital Friendly Factory intends to use. Finally, we will describe how different factors may affect the cost of capital. A firm finances its projects and operations through the use of various forms of capital, including long-term debt, preferred stock, and common stock. Each of these sources of capital has a cost associated with its use. The weighted average of those costs determines the weighted average cost of capital for a firm. Because each source has a cost associated with its use, a firm should seek to attain the mix of debt and equity financing that will result in the lowest weighted average cost of capital and, thereby, contribute to optimizing the firm's common stock price. Since the weighted average cost of capital is also the average return required by investors in the firm, it establishes the minimum average return that a firm must earn on the projects it undertakes. Therefore, the weighted average cost of capital is used as the discount rate in determining whether or not new projects are economically feasible. If a project is expected to earn less than the weighted average cost of capital, it would not provide the rate of return required by investors and, therefore, would not be considered economically feasible. The weighted average cost of capital for a firm is computed by first determining the percentage of total capital represented by each form of capital used. The percentage each form of capital represents is then multiplied by its after-tax cost to get a weighted cost for each form of capital. Those weighted costs are then summed to get the weighted average cost of capital for the firm. Friendly Factory intends to finance a total of $2,000,000 using $1,200,000 of equity funding at a cost of 6% and $800,000 of debt funding at a before-tax cost of 5%. The weighted average cost of capital would be computed as follows: (Equity) $1,200,000 / $2,000,000 = .60 × .06 = 3.6% weighted cost of stock, and (Debt) $800,000 / $2,000,000 = .40 × .05 = 2.0% weighted pre-tax cost of debt. Since interest on debt is tax-deductible, an amount equal to the taxes saved (called a tax shield) would reduce the cost of debt. The amount of taxes saved would be .02 weighted pre-tax cost × .35 tax rate = .007. The weighted after-tax cost of debt would be .02 − .007 = .013, or 1.3%. (That amount also could be determined as .05 × (1 − .35) = .05 × .65 = 3.25% after-tax cost of debt; 3.25% × 40% weight = 1.3%.) Therefore, the weighted average cost of capital for Friendly Factory will be 3.6% for equity plus 1.3% for debt, for a total weighted average of 4.9%. Both general economic factors and firm-specific factors may affect the cost of capital. At the macroeconomic level, an increase in the general level of interest rates or in the level of inflation would tend to increase the before-tax cost of debt. An increase in the general level of taxes would increase the tax shield provided by debt and, other things being equal, would reduce the effective cost of debt. Decreases in those factors would be expected to have the opposite effect. At the firm level, increasing the proportion of debt relative to equity would serve to reduce the weighted average cost of capital. However, at some level of debt, the perceived risk associated with new debt will cause the cost of debt to increase, thus increasing the cost of capital.
In addition, greater volatility of the firm's earnings would increase the perceived risk of investing in the firm and, therefore, increase the cost of both debt and equity funding. I hope this memorandum adequately addresses the cost of capital issue for Friendly Factory. Please let me know if I can provide additional information or be of other help. Sincerely, Future CPA
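The memo's computation, as a short Python sketch using the stated figures:

```python
# WACC with Friendly Factory's figures: 60% equity at 6%, 40% debt at a
# 5% pre-tax cost, and a 35% tax rate.
equity, debt = 1_200_000, 800_000
cost_equity, cost_debt_pretax, tax_rate = 0.06, 0.05, 0.35
total = equity + debt

after_tax_cost_debt = cost_debt_pretax * (1 - tax_rate)  # 0.0325 (3.25%)
wacc = (equity / total) * cost_equity + (debt / total) * after_tax_cost_debt
print(f"{wacc:.3%}")  # 4.900% = 3.6% (equity) + 1.3% (debt)
```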

Total Utility

Total utility increases as quantity increases. The curve C-C correctly depicts a variable (total utility) on the Y axis that increases as the variable (quantity) on the X axis increases. Total utility increases as more of the product is purchased.

In ERM, ______ focuses on the development of strategy and goals while _____ focuses on the implementation of strategy and variation from plans.

risk appetite; tolerance Risk appetite is the amount of risk an organization accepts in pursuit of a strategy and value. Risk appetite is focused on strategy and goals. Tolerance sets the boundaries of acceptable performance; it is related to strategy implementation and variation from plans.

Unemployment

1. Cyclical unemployment—results from a contraction in economic activity (i.e., a decrease in aggregate demand). During a period of economic contraction, the cyclical unemployment rate would be expected to increase. 2. Structural unemployment—results from the ongoing elimination of certain types of jobs due to technological and related changes. Therefore, an increase in the structural unemployment rate is not the type most likely to be associated with a period of economic contraction; an increase in cyclical unemployment is. 3. Natural rate of unemployment—consists of (i.e., is the sum of) the frictional, seasonal, and structural rates of unemployment, but does not include the cyclical rate of unemployment. Therefore, an increase in the natural rate of unemployment (due to frictional, seasonal, and/or structural causes) is not the type most likely to be associated with a period of economic contraction; an increase in cyclical unemployment is.

Activity-Based Costing and Process Management

A. Activity-based costing (ABC) is a method for determining the cost of an item or other cost object. The methodology focuses on understanding and using the work steps, or activities, that are involved in completing the item or service. ABC is based on two principles: (a) Activities consume resources; and (b) These resources are consumed by products, services, or other cost objects (outputs). ABC allocates costs to products on the basis of the resources consumed by each activity involved in the design, production, and distribution of a particular good. ABC is used only for allocating costs that cannot be directly traced to a product or service; as such, ABC doesn't affect direct material costs or direct labor costs. ABC is applicable for allocating costs that are shared by many products, services, or other cost objects; these costs are often called indirect or overhead costs if they are product-related. ABC can also be used to allocate shared period costs; examples of these include customer service, promotion, record keeping, and shipping costs. B. Accountants and business managers often express cost as "cost per something," such as "cost per unit," "cost per ton," or "cost per gallon." Cost allocation uses a ratio with cost dollars in the numerator and the cost driver as the denominator. Ideally, the cost driver is the object or item that causes cost. When reviewing labor-related costs, such as uniform rental, the cost driver is typically direct labor hours, because uniform rental costs would increase and decrease in proportion with direct labor hours. Because direct labor hours data are collected, verified, and retained on a regular basis, those data often are used as the cost driver or denominator for the allocation of most factory overhead costs. We often see estimated overhead dollars divided by estimated direct labor dollars used as the basis for predetermined overhead cost in a "normal" product costing situation. Unfortunately, not all overhead costs change proportionately with direct labor hours. ABC can be used to separate the overhead cost pool into smaller cost pools, each with a more appropriate cost driver. Definitions Activities: Procedures that comprise work. Cost drivers: Measures that are closely correlated with the way an activity accumulates costs; e.g., the cost driver for production line setup costs might be the number of machines that have to be set up; cost drivers are the basis by which costs are assigned to products (direct labor hours, machine hours, occupancy percentages, etc.). Cost pools: A group or collection of costs that have something in common. Value-added activities: Processes that contribute to the product's ultimate value; includes items such as design and packaging in addition to direct conversion of direct materials into finished goods. The definition of value-added activity always includes a reference to the customer. An activity adds value to an organization if the activity adds value for the customer. Customers are willing to pay for activities that add value for them. Nonvalue-added activities: Processes that do not contribute to the product's value; includes items such as moving materials and more obvious activities such as rework; cost reductions can be obtained by reducing or eliminating nonvalue-added activities. Assigning Costs Using an ABC System—The first step in an ABC project is to determine the cost object: the thing or collection of things for which a cost is desired.
Cost objects can be products, product lines, geographic units, customers, or some other dimension that is important to the business. ABC is most commonly used when the cost objects are not homogenous. For example, a restaurant may want to understand "cost per customer." If the restaurant has different types of customers, such as eat-in customers, take-out customers, and drive-through customers, ABC might be a helpful tool. Steps Used to Assign Costs in an ABC System Let's review the steps involved in developing an ABC system: 1. Identify the relevant cost object (usually a single part or product). 2. Identify activities and group homogeneous activities. 3. Assign costs to the activity cost pools. 4. Choose a cost driver for each activity cost pool. (Select the best one available.) 5. Calculate an allocation rate for each activity cost pool. 6. Allocate activity costs to the final cost object. Steps 1 to 4 are difficult and may result in ABC systems being abandoned or short-lived. Steps 5 and 6 are the same steps used in any cost allocation system: Once we have the dollars in the numerator and some allocation basis in the denominator, we simply do the math and calculate a rate. We apply the rate to the final cost objects by multiplying the rate times the number of units of the allocation basis found in the cost object. Activity-Based Costing Characteristics A. ABC begins by identifying activities. Activities form the building blocks of an ABC system because activities consume resources. Activities are commonly grouped into one of four categories, which are often referred to as cost hierarchies. Some organizations use a cost hierarchy to help identify activities. The four categories are: 1. Unit-level activities—Activities that must be performed for every product unit; for example, using a machine to polish a silver tray or boxing up an item for delivery. 2. Batch-level activities—Activities that must be performed for each batch of products produced; examples include setting up the production equipment for the batch and running quality inspections of items in the batch. 3. Product-sustaining-level activities—Activities that are necessary to support the product line as a whole, such as advertising and engineering activities. 4. Facility- (general operations‒) level activities—Tasks or functions that generally involve a physical space, such as a building, a group of buildings, or a large group of plant, property, and equipment. The costs associated with facility-sustaining activities are often fixed in nature and generally have little relationship to the production or storage of product within the facility. Facility-sustaining costs may be allocated to the products that are produced or stored or sold within the facility to provide the organization with a full cost view of those products. Effects of Adoption of Activity-Based Costing A. Because of the way ABC identifies and allocates costs, organizations that adopt ABC tend to have: 1. More accurate measures of cost 2. More cost pools 3. More allocation bases (e.g., multiple causes for costs to occur) B. ABC can be used: 1. With job order and process costing systems 2. With standard costing and variance analysis 3. For service businesses as well as manufacturers C. In general, compared to traditional, volume-based costing, ABC tends to shift costs away from high-volume, simple products to lower-volume, complex products.
Exam Tip The items listed here in "Effects of Adoption of Activity-Based Costing" are likely to be the most heavily tested concepts in activity-based costing. Volume-Based Calculation Example A. YRU Overhead, Inc. currently uses a plant-wide, volume-based approach to manufacturing overhead (OH) allocation. However, it is considering switching to activity-based costing (ABC) for overhead allocation to increase cost accuracy. A comparison of the two approaches is shown below: B. Volume-Based Allocation 1. For the current volume-based approach, YRU uses a predetermined factory overhead rate based on direct labor hours (DLH). For the current year, YRU budgeted overhead of $600,000 based on a volume of 100,000 DLH. Actual overhead amounted to $650,000 with actual DLH totaling 110,000. Given this information, YRU would allocate factory overhead as follows: a. Step 1—The predetermined rate is based on budgeted OH divided by budgeted DLH. The allocation rate would be calculated as $600,000 / 100,000 DLH = $6 per DLH. This happens at the beginning of the year. b. Step 2—During the year, and using the predetermined rate of $6 per DLH, overhead is allocated to Work-In-Process by multiplying the rate by the actual units of the allocation base: $6 × 110,000 = $660,000.
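The two volume-based steps, as a short Python sketch with YRU's figures (the over/underapplied comparison at the end follows directly from the same numbers):

```python
# Step 1 (start of year): predetermined rate = budgeted OH / budgeted DLH.
budgeted_oh, budgeted_dlh = 600_000, 100_000
rate = budgeted_oh / budgeted_dlh      # $6.00 per DLH
print(rate)                            # 6.0

# Step 2 (during the year): apply the rate to actual DLH.
actual_oh, actual_dlh = 650_000, 110_000
applied = rate * actual_dlh            # allocated to Work-in-Process
print(applied)                         # 660000.0

print(applied - actual_oh)             # 10000.0 -> overhead overapplied by $10,000
```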

System Development and Implementation

Sourcing Decisions—In some cases, an organization's IT strategy will indicate that it intends to build (i.e., develop) or buy software within certain domains or units. If such a strategy exists, it will indicate a preference that business applications development be insourced (i.e., an internal development process) or outsourced (i.e., purchased from someone else). The systems development life cycle (SDLC) framework (though not each step) applies to either insourced or outsourced development processes. Developing Business Applications—The importance, and potential negative consequences, of systems development are evident in the many large-scale systems failures that have cost organizations millions of dollars (e.g., the Denver airport baggage system, ERP at Hershey's, the Bank of America Trust Department). Developing a functioning computer system on time and on budget requires communication and coordination among multiple groups of people with very different points of view and priorities. Without a clear plan for defining, developing, testing, and implementing the system, it is perilously easy to end up with a system that fails to meet its objectives and must be scrapped. The systems development life cycle is designed to provide this plan. Purpose of the Systems Development Life Cycle (SDLC) Method—The systems development life cycle provides a structured approach to the systems development process by: A. Identifying the key roles in the development process and defining their responsibilities B. Establishing a set of critical activities to measure progress toward the desired result C. Requiring project review and approval at critical points throughout the development process. Before moving forward to each stage of the SDLC, formal approval for the previous stage should occur and be documented. Roles in the SDLC Method—Each party to the development process must review the system and sign off, as appropriate, at stages of development. This helps to ensure that the system will perform as intended and be accepted by the end users. Principal roles in the SDLC include the following: A. IT Steering Committee—Members of the committee are selected from functional areas across the organization, including the IT Department; the committee's principal duty is to approve and prioritize systems proposals for development. B. Lead Systems Analyst—The manager of the programming team: 1. Usually responsible for all direct contact with the end user 2. Often responsible for developing the overall programming logic and functionality C. Application Programmers—The team of programmers who, under direction of the lead analyst, are responsible for writing and testing the program. D. End Users—The employees who will use the program to accomplish their work tasks using the developed system: 1. Responsible for identifying the problem to be addressed and approving the proposed solution to the problem; often also work closely with programmers during the development process Stages in, and Risks to, the SDLC Method—Riskier systems development projects use newer technologies or have a poorly defined (i.e., sketchy) design structure. In the SDLC method, program development proceeds through an orderly series of steps. At the end of each step, all of the involved parties (typically the lead systems analyst, the end user, and a representative from the IT administration or the IT steering committee) sign a report of activities completed in that step to indicate their review and approval.
The seven steps in the SDLC method are: Stage 1—Planning and Feasibility Study—When an application proposal is submitted for consideration, the proposal is evaluated in terms of three aspects: 1. Technical feasibility—Is it possible to implement a successful solution given the limits currently faced by the IT department? Alternatively, can we hire someone, given our budget, to build the system? 2. Economic feasibility—Even if the application can be developed, should it be developed? Are the potential benefits greater than the anticipated cost? 3. Operational feasibility—Given the status of other systems and people within the organization, how well will the proposed system work? After establishing feasibility, a project plan is developed; the project plan establishes: a. Critical success factors—The things that the project must complete in order to succeed. b. Project scope—A high-level view of what the project will accomplish. c. Project milestones and responsibilities—The major steps in the process, the timing of those steps, and identification of the individuals responsible for each step. Stage 2—Analysis—During this phase, the systems analysts work with end users to understand the business process and document the requirements of the system; the collaboration of IT personnel and end users to define the system is known as joint application development (JAD). 1. Requirements definition—The requirements definition formally identifies the tasks and performance goals that the system must accomplish; this definition serves as the framework for system design and development. a. All parties sign off on the requirements definition to signify their agreement with the project's goals and processes. Stage 3—Design—During the design phase, the technical specifications of the system are established; the design specification has three primary components: 1. Conceptual design—The first step of the design process, which summarizes the goal, structure, data flows, resource requirements, systems documentation, and preliminary design of the system. 2. Technical architecture specification—Identifies the hardware, systems software, and networking technology on which the system will run. 3. Systems model—Uses graphical models (flowcharts, etc.) to describe the interaction of systems processes and components; defines the interface between the user and the system by creating menu and screen formats for the entire system. Stage 4—Development—During this phase, programmers use the systems design specifications to develop the program and data files: 1. The hardware and IT infrastructure identified during the design phase are purchased during the development phase. 2. The development process must be carefully monitored to ensure compatibility among all systems components, because correcting errors becomes much costlier after this phase. Stage 5—Testing—The system is evaluated to determine whether it meets the specifications identified in the requirements definition. 1. Testing procedures must project expected results and compare actual results with expectations: a. Test items should confirm correct handling both of valid data and of data that includes errors. 2. Testing must be performed at multiple levels to ensure correct intra- and inter-system operation: a. Individual processing unit—Provides assurance that each piece of the system works properly. b. System testing—Provides assurance that all of the system modules work together. c.
Inter-system testing—Provides assurance that the system interfaces correctly with related systems. d. User acceptance testing—Provides assurance that the system can accomplish its stated objectives within the business environment, and that users will use the delivered system. Stage 6—Implementation—Before the new system is moved into production, existing data must be converted to the new system format, and users must be trained on the new system; implementation of the new system may occur in one of four ways: 1. Parallel implementation—The new system and the old system are run concurrently until it is clear that the new system is working properly. 2. Direct cutover, "cold turkey," "plunge," or "big bang" implementation—The old system is dropped and the new system put in place all at once. This is risky but fast (except when it fails, in which case it is slower). 3. Phased implementation—Instead of implementing the complete system across the entire organization, the system is divided into modules that are brought online one or two at a time. 4. Pilot implementation—Similar to phased implementation except, rather than dividing the system into modules, the users are divided into smaller groups and are trained on the new system one group at a time. Stage 7—Maintenance—Monitoring the system to ensure that it is working properly and updating the programs and/or procedures to reflect changing needs: 1. User support groups and help desks—Provide forums for maintaining the system at high performance levels and for identifying problems and the need for changes. 2. All updates and additions to the system should be subject to the same structured development process as the original program. Systems Development Failures—A recent survey indicates that companies complete about 37% of large IT projects on time and only 42% on budget. Why do systems projects so often fail? Common reasons include: 1. Lack of senior management knowledge of, and support and involvement in, major IT projects 2. Difficulty in specifying the requirements 3. Emerging technologies (hardware and software) that may not work as the vendor claims 4. Lack of standardized project management and standardized methodologies 5. Resistance to change; lack of proper "change management." Change management is integral to training and user acceptance 6. Scope and project creep. The size of the project is underestimated and grows as users ask "Can it do this?" 7. Lack of user participation and support 8. Inadequate testing and training. Training should be just-in-time (prior to use) and at full-load service levels. 9. Poor project management—underestimation of time, resources, and scope Accountant's Involvement in IS Development A. Accounting and auditing skills are useful in cost/benefit and life cycle cost analyses of IT projects. B. Accountants possess combined knowledge of IT, general business, accounting, and internal control, along with the communication skills to help ensure that new systems meet the needs of users. C. Types of accountants who may participate in system development: system specialist, consultant, staff accountant, internal or independent auditor Alternative System Development Processes A. Smaller, more innovative projects may use more rapid iteration processes, such as: 1. Prototyping—An iterative development process focusing on user requirements and implementing portions of the proposed system. Prototypes are nothing more than "screenshots" that evolve, through iterations, into a functioning system. 2.
Rapid application development (RAD)—An iterative development process using prototypes and automated systems development tools (e.g., PowerBuilder and Visual Basic) to speed and structure the development process. B. Modular development is an alternative model for project organization. In modular development, the system proceeds by developing and installing one subsystem (of an entire company system) at a time. Examples of modules might include order entry, sales, and cash receipts.

Statistics in Business Analytics

1. Mean—The arithmetic average of a variable. A good measure of central tendency for a normally distributed variable. 2. Mode—The most frequent value in a distribution. May not exist, or a distribution may have multiple modes. Often easily seen (and useful) in a histogram. 3. Median—The middle value in a distribution. A good measure of central tendency (or mass) in skewed distributions. Measures of Dispersion 1. Standard deviation (SD, σ)—A standardized measure of dispersion (variation) in a variable. In normally distributed data, about 68% of observations are within 1 standard deviation of the mean and about 95% are within 2 standard deviations of the mean. 2. Outlier—An unusual and often influential observation. Can contribute to nonnormality (i.e., skewness) in a variable. Data Distribution Displays 1. Histogram—A graph of the distribution of a variable, grouped into bins (groups). 2. Box plot—A plot of the distribution of a variable that indicates the median and quartiles of the distribution. Quantile—Dividing a Distribution into Segments 1. Quintile—Dividing a distribution into fifths. 2. Decile—Dividing a distribution into tenths. 3. Quartile—Dividing a distribution into quarters. 4. Interquartile range (IQR)—The middle 50% of the distribution; the 3rd quartile minus the 1st quartile. Frequency Distribution—How a Variable Is Distributed 1. Normal distribution—Symmetrical, bell-shaped distribution in which the mean and median are usually close to one another. 2. Left-skewed distribution—More and/or bigger values on the right (higher) side of the distribution. The median is greater than the mean. 3. Right-skewed distribution—More and/or bigger values on the left (lower) side of the distribution. The median is less than the mean. 4. Negative skewness—A left-skewed distribution. 5. Positive skewness—A right-skewed distribution. Types of regression 1. A time-series regression predicts outcomes that occur over time (e.g., monthly, quarterly, or yearly); for example, predicting monthly sales (y) based on advertising expenditures for the previous month (x). 2. A cross-sectional regression predicts outcomes at one point in time; for example, predicting monthly sales for a retail chain (y) based on the stores' square footage (x). Numbers predicting numbers: Regression uses numeric predictors to predict numeric outcomes.
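A short Python sketch of the descriptive measures above, using only the standard library and a made-up data set with one deliberate outlier:

```python
import statistics as st

x = [2, 3, 3, 4, 5, 5, 5, 6, 7, 40]  # made-up data; 40 is an outlier

print(st.mean(x))    # 8.0 -- pulled upward by the outlier (right skew: mean > median)
print(st.median(x))  # 5.0 -- robust to the outlier
print(st.mode(x))    # 5   -- the most frequent value
print(st.stdev(x))   # sample standard deviation, inflated by the outlier

q1, _, q3 = st.quantiles(x, n=4)  # quartiles (default "exclusive" method)
print(q3 - q1)       # interquartile range (IQR): spread of the middle 50%
```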

Job Costing

1. Net realizable value (NRV) rate for a product = (sales value − separable cost for that product) ÷ [(sales value − separable cost for Product 1) + (sales value − separable cost for Product 2)]. Then multiply each product's NRV rate by the joint cost to allocate it. Under a traditional costing system, setup costs are allocated using a cost driver, in this case, direct manufacturing labor hours (DMLH). The first step is to calculate the setup costs per direct manufacturing labor hour ($60,000 incurred ÷ 25,000 total DMLH) of $2.40. Since two DMLH are needed to produce one unit of product A, the total setup cost per unit of A is ($2.40 × 2 DMLH) $4.80. Under ABC, one batch of product A creates the demand for setup activities that produce value. The setup cost per unit of A under ABC is calculated as the setup cost per batch of A ($1,000) divided by the number of units per batch (100), or $10.00.
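Both calculations in Python; the NRV product figures are hypothetical, while the setup-cost numbers come from the text above:

```python
# Joint-cost allocation by relative net realizable value (NRV).
# Product figures here are hypothetical.
products = {
    "Product 1": {"sales_value": 50_000, "separable_cost": 10_000},
    "Product 2": {"sales_value": 30_000, "separable_cost": 10_000},
}
joint_cost = 30_000

nrv = {p: d["sales_value"] - d["separable_cost"] for p, d in products.items()}
total_nrv = sum(nrv.values())                       # 60,000
allocated = {p: joint_cost * v / total_nrv for p, v in nrv.items()}
print(allocated)  # {'Product 1': 20000.0, 'Product 2': 10000.0}

# Setup cost per unit of product A: traditional vs. ABC (figures from the text).
traditional = (60_000 / 25_000) * 2  # $2.40 per DMLH x 2 DMLH = $4.80
abc = 1_000 / 100                    # $1,000 per batch / 100 units = $10.00
print(traditional, abc)              # 4.8 10.0
```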

Business Continuity Planning

A business continuity plan (BCP) is critical to enabling your business to recover in the event of a natural or human-caused disaster or a disruption of services. Creating a BCP is one element of organizational risk management. Hence, developing a BCP should be part of a broader strategy and approach to addressing significant strategic and business threats and risks. The following six steps present one model of the process for developing a BCP. Step one is to create a business continuity policy and program. Create a framework and structure for the BCP, based on an overall risk management strategy. Also identify the scope of the plan and its key roles, and assign individuals to those roles. The next step is to understand and evaluate organizational risks. Identify key organizational activities and processes, determine the activities, and their costs, needed to prevent their interruption, and ensure their restoration in the event of interruption. Also, identify the maximum tolerable interruption periods by function and organizational activity. Step three is to choose business continuity strategies. Define alternative methods to ensure sustainable delivery of products and services. Key decisions will likely include desired recovery times, distance to recovery facilities, required personnel, supporting technologies, and impact on stakeholders. Step four is to develop and complete a BCP response by documenting and formalizing the plan. Define protocols for identifying and handling crisis incidents, and create, assign roles to, and train the incident response team(s). Next, exercise, maintain, and review the plan. Test the required technology and implement all proposed recovery processes. Update the plan as business processes and risks evolve. Finally, embed the plan in the organization's culture. Design and deliver education, training, and awareness materials to enable effective responses to identified risks. Manage change processes to ensure that the BCP integrates into the organization's culture. Following these steps should enable the creation of a BCP that greatly reduces the threat of key organizational risks disrupting future business success.

operations list

A document that specifies the sequence of steps to follow in making a product, which equipment to use, and how long each step should take.

Program Evaluation and Review Technique (PERT)

A method for analyzing the tasks involved in completing a given project, estimating the time needed to complete each task, and identifying the minimum time needed to complete the total project.
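The "minimum time needed to complete the total project" is the length of the longest (critical) path through the task network. Here is a minimal Python sketch of that idea; the tasks, durations, and dependencies are hypothetical.

# Each task maps to (duration, list of prerequisite tasks).
tasks = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

earliest_finish = {}

def finish(task):
    # Earliest finish = own duration + latest finish among prerequisites.
    if task not in earliest_finish:
        duration, prereqs = tasks[task]
        earliest_finish[task] = duration + max(
            (finish(p) for p in prereqs), default=0)
    return earliest_finish[task]

print(max(finish(t) for t in tasks))  # 12: critical path A -> B -> D (3+5+4)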

flexible budget

A report showing estimates of what revenues and costs should have been, given the actual level of activity for the period.

Introduction and Reasons for International Activity

Absolute and Comparative Advantage A. Absolute Advantage—From an international economic perspective, absolute advantage exists when a country, business, individual, or other entity (hereafter referred to as "entity") can produce a particular good or service more efficiently (with fewer resources) than another entity. When an entity has an absolute advantage, it uses fewer resources to produce a particular good or service than another entity. B. Comparative Advantage—Comparative advantage exists when one entity has the ability to produce a good or service at a lower opportunity cost than the opportunity cost of the good or service for another entity. 1. Opportunity cost—The money value of benefits lost from the next best opportunity as the result of choosing another opportunity. If you choose to do one thing, the opportunity cost is the value of the benefit lost by not doing another thing that would have provided the next best benefit. 2. Comparative advantage in the providing of goods or services derives from differences among entities in, among other things, the availability of economic resources, including natural resources, labor, and technology. 3. Entities should specialize in the goods or services they produce at the least opportunity cost. 4. Entities should trade with other entities for goods and services for which they do not have a comparative advantage. 5. Principle of comparative advantage—The total output of two or more entities will be greatest when each produces the goods or services for which it has the lowest opportunity cost. Porter's Four Factors A. Absolute and comparative advantage are based largely on differences in the availability of traditional economic resources, including natural resources, labor, and technology. B. In 1990, Michael Porter proposed that four broad national attributes, including but not limited to the traditional factors of production, promoted or impeded the creation of competitive advantage. The four attributes are: 1. Factor conditions—The extent to which a country has a relative advantage in factors of production, including infrastructure and skilled labor. Through investment and innovation, a country can enhance its factor conditions. 2. Demand conditions—The nature of the domestic demand for an industry's product or service. A strong domestic demand enables firms to devote more attention to a good or service than can firms in countries without a strong domestic demand. 3. Related and supporting industries—The extent to which supplier industries and related industries are internationally competitive. When related and supporting industries are highly developed, a country will have a comparative advantage over countries with less highly developed related and supporting industries. 4. Firm strategy, structure, and rivalry—The conditions governing how companies are created, organized, and managed, and the nature of domestic rivalry. When a country has companies with well-developed strategies, different organizational structures, and intense domestic rivalry, that country tends to have a competitive advantage. C. Porter also proposed that chance and government policies play a role in the nature of the competitive environment. D. Porter summarized his analysis in the form of a diamond representing the determinants of national competitive advantage (the diamond diagram is not reproduced here). E. According to Porter, the diamond elements, taken together, affect four factors that lead to a national competitive advantage. Those factors are: 1.
The availability of resources and skills 2. The information that firms use to decide which opportunities to pursue with those resources and skills 3. The goals of the individuals within the firms 4. The pressure on firms to innovate and invest. Australia has an absolute advantage in the use of labor for the production of Good A, and Brazil has an absolute advantage in the production of Good B. Absolute advantage exists when an entity (country, business, individual, etc.) can produce a particular good or service more efficiently (with fewer resources) than another entity. When an entity has an absolute advantage, it uses fewer resources to produce a particular good or service than another entity. Australia can produce Good A with half the units of labor (4 units) that Brazil requires (8 units), and Brazil can produce Good B with half the units of labor (1 unit) that Australia requires (2 units). The opportunity cost of 1 tractor is 2.50 automobiles. In the context of comparative advantage, opportunity cost is the output of a good or service given up as a result of choosing to produce another good or service. By producing 1 tractor the country gives up the ability to produce 2.50 automobiles (opportunity cost), calculated as 10/4 = 2.50 automobiles. As proof (or said another way), if the resources that produce 4 tractors could produce 2.50 automobiles each, the total production would be 10 automobiles.
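A minimal sketch of the opportunity-cost arithmetic above, using the tractor/automobile figures from the passage:

tractors_possible = 4        # tractors the resources could produce
automobiles_possible = 10    # automobiles the same resources could produce

# Opportunity cost of one tractor = automobiles given up per tractor.
opportunity_cost_per_tractor = automobiles_possible / tractors_possible
print(opportunity_cost_per_tractor)   # 2.5 automobiles per tractor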

Inverse relationship.

An inverse relationship would imply that higher returns are associated with less risk.

Multifactor authentication.

The system will not rely only on the user's keyboard touch (keystroke) pattern; it will also use other authentication factors, which makes this multifactor authentication (note the word "partially" in the scenario described).

Benchmarking, Best Practices, and the Balanced Scorecard

Benchmarking is a technique of organizational self-assessment via internal and external comparison to sources of excellence in performance. In other words, you try to find someone who is doing it better than you and attempt to emulate their performance. Benchmarking can be done for many dimensions of life. Our concern here is to benchmark business processes. Examples of business processes that may be benchmarked include production, shipping, customer service, accounts payable, payroll, and many, many more. In theory, any process that a business uses is a candidate for benchmarking. A. Benchmarking is a process. These are the typical steps: i. Decide WHAT or WHY—Identify the dimensions of the business to benchmark. Usually, these are "problem" areas or "problem" business processes. Examples of things to benchmark: order fulfillment time, defect rates, employee turnover, employee absenteeism, and returned products. ii. Determine WHO or WHERE—Identify organizations or parts of the organization that are best-in-class performers. Hence, the label "external" or "internal" benchmarking. If external, be aware that not all organizations care to share data of this type. Some organizations will share only if they are promised the results of the survey. This is a major decision to make in the benchmarking process: Is the organization willing to share the results of its benchmarking efforts with all the participants? Firms may not be willing to take this step, as doing so may reveal that the benchmarking firm is weak in certain areas, and/or dissemination of the data to outside participants will increase the benchmarking workload. iii. Determine HOW—Organizations can and do use surveys, informal conversations, or personal visits. Some firms are in the business of writing benchmarking surveys. Hiring a company to develop a benchmarking survey may result in a more professional survey that may be better received than a home-grown survey, but doing so will increase the cost of the benchmarking effort. iv. GET the data. This may take several weeks, depending on the answers to steps 2 and 3. v. COMPARE the data to the organization's own performance. Identify best practices. Look for gaps. vi. DETERMINE how the gaps can be corrected: a. Change the process b. Change the product or service c. Train the people vii. TAKE necessary steps to CLOSE the gaps. Important Points and Features of Benchmarking 1. A company can't be the best at everything. Benchmarking should be done in the key areas that create a unique competitive advantage as determined by the company's distinctive competencies. 2. Because best practices change over time, benchmarking should be an ongoing process within the organization. In this way, benchmarking supports continuous learning and improvement. Priorities for benchmarking can change over time; expect this. 3. Don't try to focus on improving in every benchmark all the time. 4. Benchmarking is one tool that can be used to help create an atmosphere that supports a learning organization. Learning organizations can remain competitive in volatile business environments because of their ability to evaluate and interpret information and their willingness to embrace change. Learning organizations are characterized by flexibility—a willingness to adopt new ideas and new processes—and efficiency in the acquisition and distribution of information. Human capital is especially important in learning organizations, as it is the source of their creativity and vitality.
The balanced scorecard is a best practice. A. What is a balanced scorecard (BSC)? Let's first consider what a BSC is not. Typical measures of business success, as discussed in other lessons, include earnings per share, return on investment, free cash flow, and other financial ratios. These measures are all financial measures. Some people believe that too much emphasis on financial measures is unbalanced, so they invented the BSC. 1. The BSC translates an organization's mission and strategy into a comprehensive set of performance metrics. 2. The BSC does not focus solely on financial metrics. 3. The BSC highlights both financial and nonfinancial metrics that an organization can use to measure strategic progress. B. Four Perspectives of a Typical BSC 1. Financial—Specific measures of financial performance 2. Customer—Performance related to targeted customer and market segments 3. Internal business processes—Performance of the internal operations that create value (i.e., new product development, production, distribution, and after-the-sale customer service) 4. Learning, innovation, and growth—Performance characteristics of the company's personnel and abilities to adapt and respond to change (e.g., employee skills, employee training and certification, employee morale, and employee empowerment) Note: The BSC can and should be tailored by industry and company. Some companies may want to add other perspectives, such as environmental impact, community involvement, vendor relations, or other areas of critical importance to those firms. Exam Note: The most common questions regarding a BSC focus on the four perspectives listed above. Candidates should be able to explain one or two performance measures appropriate for each perspective. C. Lead and Lag 1. The theory behind the BSC includes an understanding of all four of the common perspectives of the scorecard from a timing perspective. Customer, internal process, and learning perspectives are often measured in real time; these measures will not be audited. These measures can be acted on quickly. As such, these dimensions of a BSC are called LEAD measures. 2. Financial measures, on the other hand, tend to be recorded after the fact. These numbers are audited and typically cannot be changed quickly, if at all. The financial measures are labeled as LAG measures. The theory then says that if the LEAD measures indicate positive performance, this will LEAD to positive financial performance. D. Sharing the BSC—Organizations don't typically publish their BSCs. They could, if they wanted to convince investors that they are using innovation management tools to continuously improve their business. Use of a BSC is completely voluntary. E. The process of building a BSC starts with the organization identifying, for each of the four dimensions of the scorecard, its strategic goals, critical success factors, operational tactics, and performance measures. Exam Tip Most balanced scorecard questions on the CPA Exam are expected to ask the candidate to identify performance measures associated with one of the four classifications or, conversely, to identify the classification in which a particular performance measure would be found. Definitions of the manufacturing performance measures listed in the Internal Business Processes section (delivery cycle time, manufacturing cycle time, and manufacturing cycle efficiency) sometimes may appear in CPA Exam questions.

opportunity cost

The cost of the next best alternative use of money, time, or resources when one choice is made rather than another; that is, the discounted dollar value of benefits lost from an opportunity as a result of choosing another opportunity.

Cost-Volume-Profit Analysis

Cost-Volume-Profit (Break-Even) Analysis A. Break-even is defined as the sales level at which sales revenues exactly offset total costs, both fixed and variable. Note that total costs include period costs (selling and administrative costs) as well as product (manufacturing) costs. The break-even point is usually expressed in sales units or in sales dollars. B. Basic Formula—The following formula helps define a break-even point; all other formulas can be derived from it. At break-even: Sales revenue = Total variable costs + Total fixed costs (in unit terms, Units × Price = Units × Variable cost per unit + Fixed costs). Using the Contribution Margin Ratio Approach to Calculate Break-Even in Sales Dollars A. Sometimes no unit sales price or unit variable cost information is available. In these cases, it is not possible to calculate the break-even point in units. It is, however, still possible to calculate the break-even point in sales dollars, but a slightly different approach must be used. B. When no unit information is available, but total sales revenue, total variable costs, and total fixed costs are known, the break-even point in sales dollars can be determined by calculating the contribution margin ratio. The contribution margin ratio represents the percentage of each sales dollar that is available to cover fixed costs. C. For example, if total sales are $100 and variable costs are $40, then the contribution margin is $60. This means that for every $100 of sales, $60 is available to cover fixed costs. D. If we express the contribution margin as a ratio (or percentage) of sales dollars, then we can say that 60% ($60/$100) of each sales dollar is available to cover fixed costs. Multiple Product Analysis—Sometimes break-even analysis will require calculations that involve a product mix rather than just one product. In that case we can use either a weighted-average approach or a composite (sometimes called a basket or package) approach to calculate a break-even point for each of the products. As with single-product calculations, we can calculate a break-even in either units or dollars. Perhaps the most intuitive approach is the composite method, because it uses mostly the same process to solve the problems as single-product scenarios. An Alternative Way of Looking at the Contribution Margin Ratio Although the calculations are the same, many people find that using the common-size income statement format to calculate the contribution margin ratio makes it easier to solve these questions. Exam Tip As mentioned in the introduction to this section, virtually all break-even questions involving calculations can be solved by using one of the two formulas using the contribution margin approach: 1. Break-Even Units = Fixed Costs / Contribution Margin per Unit 2. Break-Even in Sales Dollars = Fixed Costs / Contribution Margin Ratio We expect a significant number of questions on break-even analysis on the exam, so be sure that you know these formulas! Margin of Safety—This indicates the difference between the current sales level and the break-even point. That is, the margin of safety indicates how much revenue can decrease before operating income becomes negative. Similar to break-even or profit, margin of safety can be expressed in either units or dollars. For example, if sales are currently 200,000 units and the break-even point is 150,000 units, the margin of safety would be 50,000 units. Alternatively, where sales are $180,000 and the break-even point is $110,000, the margin of safety would be $70,000.
Targeted Profit—When a targeted pretax profit beyond break-even is specified, simply add this amount to the fixed cost in the numerator. You can think of the contribution margin in the denominator as having to cover all items in the numerator. This is exactly the same formula as break-even, but at the break-even level there is no profit (i.e., only fixed costs are covered by CM). Note: If the profit goal is stated in after-tax dollars, one must first determine the amount of pretax profit required to generate the desired after-tax profit. To determine the pretax profit required, simply divide the desired after-tax profit by 1 minus the tax rate. Sales in Units = (Fixed Costs + Targeted Pretax Profit) / Contribution Margin per Unit Underlying Assumptions—In order to perform break-even analysis, certain assumptions must hold true. First of all, for break-even analysis to be relevant and useful, the analysis must be restricted to a relevant range of activity, so that the model's assumptions are at least approximately satisfied; namely, fixed costs, unit variable costs, and price must behave as constants. In addition: 1. All relationships are linear. 2. When multiple products are sold, the product mix remains constant. (Note: This is not a restrictive assumption of the model, but this condition is widely assumed in practice for problems on the CPA Exam.) 3. There are no changes in inventory levels; that is, the number of units sold equals the number of units produced. a. Total costs can be divided into a fixed component and a component that is variable with respect to the level of output. b. Volume is the only driver of costs and revenues. c. The model applies to operating income (i.e., the CVP model is a before-tax model). Volume-Profit Chart 1. Another variation of the break-even chart graphs profits (revenues less variable costs and fixed costs) instead of separately graphing revenues, fixed costs, variable costs, and total costs. In this graph, the slope of the profit line is equal to the contribution margin: for each unit sold, income increases by the amount of the contribution margin per unit. 2. When no units are sold, the loss is equal to fixed costs. As units are sold, losses decrease by the contribution margin times the number of units sold. At the point where the profit line crosses the x-axis, profits are zero. This is the break-even point. 3. A few additional observations about the volume-profit chart: a. The flatter the line, the smaller the contribution margin per unit. b. When comparing profit lines for multiple years and assuming that the sales price has not changed, variable costs per unit for the steeper line are less than variable costs per unit for the flatter line. Steeper lines indicate larger contribution margins; if the sales price is constant, then variable costs must be relatively smaller. c. Changes in the profit line's y-intercept indicate changes in fixed costs. Note For problems involving taxes and after-tax income, remember: The CVP model is a before-tax or operating income-based model. If you remember this, conversion to after-tax or net income is easy. Merely perform the necessary calculations while using operating income and then convert to after-tax as required. Exam Tip The CPA Exam is likely to test CVP by requiring the candidate to determine the break-even point or income after changing one of the variables involved. We predict that questions will often change one variable and ask you to determine the effect on the break-even point or on income.
The effect on income (i.e., increase or decrease) is almost always opposite that of the effect on the break-even point. The only exception is where the only change made involves an increase or decrease in quantity. In this instance, income will go up or down but the break-even point will remain the same. To calculate the breakeven point, we must first find the fixed cost of the prior year. Fixed costs (FC) / contribution margin (CM) per unit = breakeven point in units. Thus, using prior year data, FC / ($7.50 - $2.25) = 20,000 units. Solving for FC = $105,000. Current year FC = 1.1 × (prior year FC) = $115,500; thus, breakeven units for the current year = $115,500 / ($9 - $3) = 19,250 units. Given sales of $5,000,000 and total variable costs of $1,750,000, the contribution margin (CM) is the difference of $3,250,000. Then the CM is divided by the units: $3,250,000 / 250,000 units = $13 CM per unit. From here, the BE point in units is equal to the total fixed costs divided by the CM per unit: $650,000 / $13 = 50,000 units. Solving this problem requires working backwards through the contribution margin (CM) formatted income statement to determine Total CM. The Total CM is then divided by the CM per unit to determine the number of units sold. Two key points: 1. Total Fixed Costs equals the CM at breakeven, thus: 20,000 breakeven units × ($7.50 sales price - $2.25 VC per unit) = $105,000 Fixed Costs. 2. The 40% Income Tax Rate means that Net Income is equal to 60% of Operating Income, calculated as: Operating Income × 60% = $5,040, or Operating Income = $5,040 / 60% = $8,400. Next, adding Fixed Costs to Operating Income = Total CM. Thus, $105,000 + $8,400 = $113,400 Total CM. The calculation of total units sold = Total CM / CM per unit. Thus, $113,400 / $5.25 = 21,600 total units sold. Finally, adding 1,000 units to the units sold in year 1 = 21,600 + 1,000 = 22,600 units expected to be sold in year 2. At the current level of 10,000 units, a contribution margin per unit of $35 = $85 - $50, and fixed costs of $300,000, the total contribution margin is $350,000 and the operating income is $50,000. If variable costs increase by 20%, the contribution margin per unit decreases to $25 = $85 - $60, or $250,000 in total, resulting in an operating loss of $50,000 ($250,000 - $300,000 fixed costs). Thus, profits would decrease by $100,000. This problem compares the increase in revenue due to the possible increased spending on advertising. The $15,000 for advertising is just another fixed cost. The contribution margin ratio is used to determine 40% of the new revenue of $780,000 = $312,000, resulting in only $12,000 more in contribution margin as compared to a new fixed advertising cost of $15,000. The difference between the $15,000 and the $12,000 is a $3,000 decrease in income. Breakeven sales = fixed cost / contribution margin ratio: $800,000 = $100,000 / CMR, so CMR = .125. The margin of safety is the difference between current sales and breakeven sales. Thus, breakeven sales are $120,000 ($200,000 - $80,000). In other words, the firm has breathing room of $80,000 of sales. Sales could fall by this amount before the firm would dip below breakeven.
Breakeven sales = Fixed cost / contribution margin percentage: $120,000 = Fixed cost / .20, so Fixed cost = $24,000.
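The worked answers above all reduce to a handful of formulas. Here is a minimal Python sketch of them, checked against the figures in the examples; the function names are illustrative.

def breakeven_units(fixed_costs, price, variable_cost_per_unit):
    return fixed_costs / (price - variable_cost_per_unit)

def breakeven_sales_dollars(fixed_costs, cm_ratio):
    return fixed_costs / cm_ratio

def units_for_target_profit(fixed_costs, pretax_profit, price, variable_cost_per_unit):
    # The contribution margin must cover everything in the numerator.
    return (fixed_costs + pretax_profit) / (price - variable_cost_per_unit)

def pretax_profit_needed(after_tax_profit, tax_rate):
    # The CVP model is a before-tax model: convert the after-tax goal first.
    return after_tax_profit / (1 - tax_rate)

fc = 20_000 * (7.50 - 2.25)                      # prior-year fixed costs: $105,000
print(breakeven_units(1.1 * fc, 9.00, 3.00))     # current year: 19,250 units
print(pretax_profit_needed(5_040, 0.40))         # $8,400 pretax profit goal
print(units_for_target_profit(105_000, 8_400, 7.50, 2.25))  # 21,600 units
print(breakeven_sales_dollars(24_000, 0.20))     # $120,000 breakeven sales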

Business Cycles and Indicators

Cyclical Economic Behavior—"Business cycles" is the term used to describe the cumulative fluctuations (up and down) in aggregate real GDP, which generally last for two or more years. These increases and decreases in real GDP tend to recur over time, though with no consistent pattern of length (duration) or magnitude (intensity). These increases and decreases also tend to impact individual industries at somewhat different times and with different intensities. Components of Business Cycle—The following terms are used to refer to components of the business cycle: a. Peak—A point in the economic cycle that marks the end of rising aggregate output and the beginning of a decline in output. b. Trough—A point in the economic cycle that marks the end of a decline in aggregate output and the beginning of an increase in output. c. Economic Expansion or Expansionary Period—Periods during which aggregate output is increasing (from trough to peak); normally of longer duration than recessionary periods. d. Economic Contraction or Recessionary Period—Periods during which aggregate output is decreasing (from peak to trough); normally of shorter duration than expansionary periods. Different sectors and industries in the economy perform best in different stages of the business cycle. 1. Consumer staples and utilities are two sectors that continue to perform well in the contraction and recession phases. Examples would be food, drugs, cosmetics, tobacco, liquor, electricity, gas, and water. These goods tend to be necessities or represent a low fraction of the consumer budget. Staples and utilities have very low income elasticity; that is, there is little change in demand in relation to the change in income. 2. Cyclicals and energy do well in the early expansion stage. Examples of cyclicals are savings and loans, banking, advertising, apparel, retailers, and autos. Financial firms do well because interest rates are low and rising while business and consumer borrowing grows. Other sectors do well here if they have high income elasticity; that is, the demand rises with an increase in income. 3. Basic materials and technology sectors perform well as the expansion continues. These sectors include chemicals, plastics, paper, wood, metals, semiconductors, computer hardware, and communication equipment. 4. In the late expansion and boom stage, the capital goods, financial firms, and transportation sectors do well. These sectors have high income elasticity and tend to do well when durable goods replacement increases. Examples include machinery and equipment manufacturers, airlines, trucking, railroads, and corporate or institutional banking. Recession Defined 1. There is no official quantitative definition of a recession. 2. The National Bureau of Economic Research (NBER) defines recession as "a significant decline in economic activity spread across the country, lasting more than a few months, normally visible in real GDP growth, real personal income, employment (non-farm payrolls), industrial production and wholesale-retail sales." 3. The NBER uses that definition to establish when the U.S. economy is in a recessionary period (recession). 4. Quantitative guidelines used frequently by others (but which are not official) include: a. A period of two or more consecutive quarters in which real GDP declines. b. An economic downturn in which real GDP declines by 10% or less.
Depression Defined 1. There is no official quantitative definition of an economic "depression." 2. The NBER does not separately identify a circumstance or time period as being a depression. 3. Economists in general refer to a depression as an economic downturn (negative GDP growth) that is severe and/or long term. 4. Quantitative guidelines used by economists (but which are not official) include: a. A decline in real GDP exceeding 10%. b. A decline in real GDP lasting two or more years. As previously noted, declines in consumer and business spending may be caused by such factors as: 1. Tax increases; 2. Declining confidence in the economy; and 3. Rising interest rates and/or more difficult borrowing. Leading, Coincident, and Lagging Indicators of Business Cycles A. In an effort to anticipate changes in the business cycle, economists and business groups have attempted to establish relationships between changes in the business cycle and other measures of economic activity that occur before a change in the business cycle. These measures of economic activity (which change before the aggregate business cycle) are called "leading indicators" and include measures of: 1. Consumer expectations 2. Initial claims for unemployment 3. Weekly manufacturing hours 4. Stock prices 5. Building permits 6. New orders for consumer goods 7. New orders for manufactured capital goods 8. Real money supply B. Measures of economic activity associated with changes in the business cycle that occur at approximately the same time as the economy as a whole changes are called "coincident indicators." These measures provide information about the current state of the economy and also may help identify the timing of peaks and troughs of the business cycle after they occur. Coincident indicators include measures of: 1. Level of retail sales 2. Current unemployment rate 3. Level of industrial production 4. Number of nonagricultural employees 5. Personal income C. Measures of economic activity associated with changes in the business cycle, but which occur after changes in the business cycle, are called lagging or trailing indicators. These lagging indicators are used to confirm elements of business cycle timing and magnitude. Lagging indicators include measures of: 1. Changes in labor cost per unit of output 2. Relationship between inventories and sales 3. Duration of unemployment 4. Commercial loans outstanding 5. Relationship between consumer installment credit and personal income

hierarchy of data in a system

Data hierarchy refers to the systematic organization of data, often in a hierarchical form. The hierarchy, from smallest to largest unit, is: characters, fields, records, files, and databases.

Echo check

Data is sent to another device and echoed back to the sender; the sender compares the two sets of data, and if they differ, a transmission error has occurred.

reorder point for an item of inventory

Determining the level of stock (inventory) at which the inventory should be reordered is a function of the minimum level of inventory to be maintained, referred to as the safety stock, and the length of time it takes to receive inventory after it is ordered, referred to as the lead-time or delivery-time stock. Both the safety stock and the lead-time stock are based on the rate of inventory usage. The calculation of the reorder point would be: Reorder point = safety stock + delivery-time stock The cost of inventory does not enter into the determination of the reorder point (but it does enter into the optimum quantity to reorder).
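A minimal sketch of the reorder-point calculation; the usage rate, lead time, and safety stock below are hypothetical.

daily_usage = 50         # units used per day
lead_time_days = 6       # days from placing an order to receiving it
safety_stock = 100       # minimum inventory cushion to be maintained

delivery_time_stock = daily_usage * lead_time_days   # 300 units used during lead time
reorder_point = safety_stock + delivery_time_stock
print(reorder_point)     # reorder when inventory falls to 400 units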

Elasticity

Elasticity: Measures the percentage change in a market factor (e.g., demand) that results from a given percentage change in another market factor (e.g., price). Elasticity of Demand—Elasticity of demand (ED) measures the percentage change in quantity of a commodity demanded as a result of a given percentage change in the price of the commodity. Therefore, it is computed as: ED = % change in quantity demanded / % change in price This formula is related to (though not the same as) the slope of the demand curve when demand is shown graphically, because it is based on percentage changes rather than absolute changes. Expanded, the formula is: ED = (Change in quantity demanded/Quantity demanded) / (Change in price/Price) Three different values can be used as the denominators in the calculation: 1. The prechange quantity demanded and price—Elasticity is measured at a point on the demand curve—the original quantity and price. 2. The average of the old and new quantity and price—Elasticity is measured as the average over a segment of the demand curve; called the "midpoint" or "arc" method. 3. The new quantity and price—Elasticity is measured at a point on the demand curve—the new quantity and price. 1. Using the Prechange Values: Assume that as a result of a change in price from $1.50 to $2.00, demand decreased from 1,500 units to 1,200 units. Using the old (prechange) quantity and price, the calculation would be: % change in quantity: 1,500 − 1,200 = 300; 300/1,500 = .20 % change in price: $2.00 − $1.50 = $.50; $.50/$1.50 = .333 ED = .20/.333 = .60 (Alternate Calculation: ED = 300/1,500 × $1.50/$.50 = $450/$750 = .60) 2. Using the Average Values (midpoint or arc method): Assume that as a result of a change in price from $1.50 to $2.00, demand decreased from 1,500 units to 1,200 units. Using the average of the old and new quantities and prices, the calculation would be: Average quantity = (1,500 + 1,200)/2 = 1,350 Average price = ($1.50 + $2.00)/2 = $1.75 % change in quantity: 1,500 − 1,200 = 300; 300/1,350 = .222 % change in price: $2.00 − $1.50 = $.50; $.50/$1.75 = .285 ED = .222/.285 = .778 Availability of substitutes—The more substitutes there are for a good/service, the more elastic the demand for that good/service will be. When there are substitutes available, consumers can switch to an alternative good/service, resulting in a more elastic demand for the good/service for which there was a price change. When there are virtually no substitutes for a good/service, demand will be inelastic. There are few substitutes for gasoline; therefore, it is price inelastic. Extent of necessity—The more necessary a good/service, the more inelastic the demand for that good/service will be. A good/service that is highly necessary will be more insensitive to price changes than a good/service that is a luxury. Critical healthcare is necessary; therefore, it is highly price inelastic. Share of disposable income—The larger the share of disposable income devoted to a good/service, the more elastic the demand for that good/service will be. When a good/service consumes a large part of disposable income, consumers tend to be more sensitive to price changes, making demand more elastic. Postchange time horizon—The longer the time following a price change for a good/service, the more elastic demand for the good/service tends to be. Over time, consumers are able to gain more information about substitute goods/services, more alternatives may become available, and constraints on consumer switching (e.g., contracts, prepayments, etc.) will expire.
As a consequence, consumers may adjust their buying behavior, thus increasing the elasticity for the good/service for which there was a price change. 1. If demand for a good or service is inelastic, then the firm can increase its selling price with less negative financial impact. 2. On the other hand, if demand for a firm's good or service is highly elastic, it cannot increase its selling price without significant negative financial impact. Therefore, it should consider alternative ways of addressing expected increases in its cost of inputs. Elasticity of Supply—Elasticity of supply (ES) measures the percentage change in the quantity of a commodity supplied as a result of a given percentage change in the price of the commodity; therefore, it is computed as: ES = % change in quantity supplied / % change in price This formula is related to (though not the same as) the slope of the supply curve when supply is shown graphically. Expanded, the formula is: ES = (Change in quantity supplied / Prechange quantity supplied) / (Change in price / Prechange price) Elasticity of Other Market Factors—In addition to measurement of elasticity of demand (and related total revenue) and elasticity of supply, other measures of elasticity include: Cross Elasticity of Demand—Measures the percentage change in quantity of a commodity demanded as a result of a given percentage change in the price of another commodity. The formula for cross elasticity of demand (XED) is: XED = % change in quantity demanded of Y / % change in price of X Example: the effect on the quantity demanded of Coca-Cola as a result of a given change in the price of Pepsi Cola. Income Elasticity of Demand—Measures the percentage change in quantity of a commodity demanded as a result of a given percentage change in income. 1. The formula for the income elasticity of demand (IED) is: IED = % change in quantity demanded / % change in consumer income Interpreting an elasticity coefficient: greater than 1 = elastic; equal to 1 = unitary; less than 1 = inelastic.
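The calculation bases described above differ only in the denominators used for the percentage changes. Here is a minimal Python sketch using the passage's figures (price $1.50 to $2.00, quantity 1,500 to 1,200):

def point_elasticity(q0, q1, p0, p1):
    # Percentage changes measured against the prechange values.
    return ((q0 - q1) / q0) / ((p1 - p0) / p0)

def midpoint_elasticity(q0, q1, p0, p1):
    # Percentage changes measured against the averages (midpoint/arc method).
    avg_q, avg_p = (q0 + q1) / 2, (p0 + p1) / 2
    return ((q0 - q1) / avg_q) / ((p1 - p0) / avg_p)

print(point_elasticity(1500, 1200, 1.50, 2.00))     # 0.60  (inelastic)
print(midpoint_elasticity(1500, 1200, 1.50, 2.00))  # ~0.78 (inelastic)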

Encryption

Encryption is the process of transforming information using an algorithm to make it unreadable to anyone except those possessing special knowledge, usually referred to as a key (source: Wikipedia). Encryption technology uses a mathematical algorithm to translate cleartext (plaintext), text that can be read and understood, into ciphertext (text that has been mathematically scrambled so that its meaning cannot be determined). A key is then required to translate the ciphertext back into plaintext. An effective implementation of encryption can guard against risks to privacy, i.e., protection of data against unauthorized access, and support authentication, i.e., user identification. Hence, well-designed and well-implemented encryption would be useful in lessening the likelihood of the theft of credit card numbers and personal information from her website. Symmetric encryption, also called single-key encryption or private-key encryption, uses a single algorithm and key to encrypt and decrypt the text. The sender uses the encryption algorithm to create the ciphertext and sends the encrypted text to the recipient; the sender must also let the recipient know which algorithm and key were used to encrypt the text; the recipient then uses the same algorithm and key (essentially running the process in reverse) to decrypt the text. Asymmetric encryption, also called public/private-key encryption or public-key encryption, uses two paired keys to encrypt and decrypt the text. If the public key is used to encrypt the text, the private key must be used to decrypt the text; conversely, if the private key is used to encrypt the text, the public key must be used to decrypt the text. To acquire a public/private key pair, the user applies to a certificate authority (CA); the CA registers the public key on its server and sends the private key to the user; when someone wants to communicate securely with the user, they access the public key from the CA server, encrypt the message, and send it to the user; the user then uses the private key to decrypt the message. Although the ciphertext created with symmetric encryption can be very secure, the symmetric encryption methodology itself is inherently insecure because the sender must always find a way to let the recipient know which encryption algorithm and key to use. Asymmetric encryption is more complicated and cumbersome, but more secure. With asymmetric encryption the transmission is more secure because only the private key can decrypt the message and only the user has access to the private key. Hence, well-designed asymmetric encryption offers a higher level of security but would also demand more effort from Hogsbath's customers. In addition, as computing moves toward ubiquitous or mobile computing (e.g., m-commerce), asymmetric encryption can create compatibility problems since the certificate authority system may not yet be adapted to the latest technology platforms. Examples, as of this writing, of recently offered technologies that may not support asymmetric encryption are the iPad and the iPhone 4. To summarize, your online customers may desire the level and type of assurance that is provided by encryption. Specifically, encryption can be useful in reducing consumer concerns about credit card number and identity theft in online transactions. A number of alternatives exist for implementing encryption technology into online transactions.
The best alternative for your business would need to be assessed based upon the level of encryption you desire and the corresponding costs associated with that level of encryption. Sincerely, Accountant
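For concreteness, here is a minimal sketch of symmetric (single-key) encryption using the widely used third-party Python "cryptography" package (its Fernet class implements symmetric encryption); the plaintext is hypothetical. Note how the same key object both encrypts and decrypts, which is exactly why the key must be shared securely.

from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the single shared secret key
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"customer card data (hypothetical)")
plaintext = cipher.decrypt(ciphertext)   # works only with the same key
print(plaintext)

An asymmetric setup would instead generate a public/private key pair (e.g., RSA), publish the public key through a certificate authority, and keep the private key with the user.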

Risk assessment

The Enterprise Risk Management (ERM) framework provides a systematic, comprehensive framework for understanding, identifying, and analyzing organizational risks. The risk assessment processes identify and analyze risks to the achievement of business objectives. After identification, an organization's risk assessment process will measure and prioritize risks (by assessing their likelihood and impact) in relation to organizational objectives. Because every enterprise faces risks from both internal and external sources, senior management and the board of directors must create a system that assesses relevant risks, aligns the organization's risk appetite and strategy to the chosen risks, defines and clarifies the responses to these risks, and reduces the costs and losses of these risks. Risk management processes should also identify organization-wide and cross-enterprise risks, identify and act on opportunities created by potential events, and help management to better determine its needs for and allocation of capital and human assets. Risk can usefully be decomposed into two parts: the likelihood of a loss and the amount of the loss, should one occur. The expected value of a loss is the product of these components (i.e., the likelihood multiplied by the amount of the loss). To summarize, the COSO ERM framework provides a very important and useful way of thinking about, and managing, organizational risks. I look forward to discussing these issues. Sincerely, Future CPA

Flexible budgeting

A flexible budget is a budget that adjusts (or flexes) for changes in the volume of activity. Flexible budgets are most frequently used for manufacturing and sales activities. Unlike the master budget (which is a static budget), the flexible budget adjusts revenues and some costs when actual sales volume differs from planned sales volume. This makes it easier to analyze actual performance because actual revenues and costs can be compared to expected revenues and costs at the actual level of sales activity. Differences between the flexible budget and the master budget are known as sales activity variances or volume variances. Flexible budgeting is especially beneficial for businesses with seasonal expenses and irregular earnings. Some disadvantages are that flexible budgets are more complicated, they can be manipulated, and they are less disciplined.
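A minimal sketch of flexing a budget to the actual activity level; the price, cost, and volume figures are hypothetical.

price_per_unit = 20.00
variable_cost_per_unit = 12.00
fixed_costs = 50_000

def budgeted_operating_income(units):
    # Revenues and variable costs flex with volume; fixed costs do not.
    return (price_per_unit - variable_cost_per_unit) * units - fixed_costs

print(budgeted_operating_income(10_000))  # static (master) budget: 30,000
print(budgeted_operating_income(9_000))   # flexed to actual volume: 22,000
# The 8,000 difference is the sales activity (volume) variance.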

Introduction to COSO, Internal Control, and the COSO Cube

General Control Objectives—Several accounting pronouncements have identified the following as general objectives of internal control: 1. Safeguard assets of the firm. 2. Promote efficiency of the firm's operations. 3. Measure compliance with management's prescribed policies and procedures. 4. Ensure accuracy and reliability of accounting records and information: a. Identify and record all valid transactions. b. Provide timely information in appropriate detail to permit proper classification and financial reporting. c. Accurately measure the financial value of transactions. d. Accurately record transactions in the time period in which they occurred. What Is Internal Control? The Five Components—The first dimension identifies five fundamental components of an internal control system: 1. Control environment (also called the internal environment)—Management's philosophy toward controls, organizational structure, system of authority and responsibility, personnel practices, policies, and procedures. This component is the core or foundation of any system of internal control. 2. Risk assessment—The process of identifying, analyzing, and managing the risks involved in achieving the organization's objectives. 3. Information and communication—The information and communication systems that enable an organization's people to identify, process, and exchange the information needed to manage and control operations. 4. Monitoring—To ensure the ongoing reliability of information, it is necessary to monitor and test the system and its data. 5. Control activities—The policies and procedures that ensure that actions are taken to address the risks related to the achievement of management's objectives. The COSO model depicts these components in a pyramid structure. Why Do We Have Internal Control? The Three Objectives—The second dimension of the cube (horizontal space in the first diagram in this lesson) identifies the three fundamental objectives of a system of internal control. These are: 1. Operations—The effective and efficient use of an organization's resources in pursuit of its core mission. 2. Reporting—Preparing and disseminating timely and reliable information, including financial and nonfinancial information, and internal and external reports. 3. Compliance—Complying with applicable laws and regulations. 4. Caveats and cautions about organizational objectives—Objectives are unique to entities and their operating environments. Objectives and controls may overlap and support one another. For example, strong controls over cash are likely to increase the reliability of financial reports about cash.

UTILITY

Work with marginal utility per dollar (utils divided by price). When total utility is maximized, the marginal utility (MU) of the last dollar spent on each and every item acquired must be the same. Thus, total utility is maximized when: MU of beers/price of beers = MU of pizza/price of pizza. Using the values given: 100 utils/$2.00 = MU of pizza/$10.00. For beers, 100/$2 = 50 utils per dollar. The MU per dollar of pizza also must be 50 utils. Therefore, MU of pizza = 50 utils per dollar × $10 = 500 utils.
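A minimal sketch of the calculation above:

mu_beer = 100            # utils from the last beer
price_beer = 2.00
price_pizza = 10.00

mu_per_dollar = mu_beer / price_beer       # 50 utils per dollar on beer
mu_pizza = mu_per_dollar * price_pizza     # pizza must also yield 50 utils/dollar
print(mu_pizza)                            # 500 utils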

ERM framework roles

Governance and Culture—Governance is the allocation of roles, authorities, and responsibilities among stakeholders, including attracting, retaining, and developing capable individuals; attracting, retaining, and developing capable individuals is the subject of COSO ERM Principle 5, which belongs to this component. Performance—This component is concerned with the risk identification and assessment that helps an organization achieve its strategy and business objectives. Strategy and Objective-Setting—This component concerns analyzing the business context, defining risk appetite, evaluating business strategies, and formulating business objectives. Information, Communication, and Reporting—Communication is the continual, iterative process of obtaining and sharing information to facilitate and enhance ERM. This function includes reporting on the organization's risk, culture, and performance; note that analysis of the business context occurs in the Strategy and Objective-Setting component, not in the Information, Communication, and Reporting component.

Organizational Continuity Planning and Disaster Recovery

I. Organizational (Business) Continuity Planning—The disaster recovery plan relates to organizational processes and structures that will enable an organization to recover from a disaster. Business (or organizational) continuity management (sometimes abbreviated BCM) is the process of planning for such occurrences and embedding this plan in an organization's culture. Hence, BCM is one element of organizational risk management. It consists of identifying events that may threaten an organization's ability to deliver products and services, and creating a structure that ensures smooth and continuous operations in the event the identified risks occur. One six-step model of this process (from the Business Continuity Institute) is: A. Create a BCM Policy and Program—Create a framework and structure around which the BCM is created. This includes defining the scope of the BCM plan, identifying roles in this plan, and assigning roles to individuals. B. Understand and Evaluate Organizational Risks—Identifying the importance of activities and processes is critical to determining the costs needed to prevent interruption and to ensure restoration in the event of interruption. A business impact analysis (BIA) will identify the maximum tolerable interruption periods by function and organizational activity. C. Determine Business Continuity Strategies—Having defined the critical activities and tolerable interruption periods, define alternative methods to ensure sustainable delivery of products and services. Key decisions related to the strategy include desired recovery times, distance to recovery facilities, required personnel, supporting technologies, and impact on stakeholders. D. Develop and Implement a BCM Response—Document and formalize the BCM plan. Define protocols for defining and handling crisis incidents. Create, assign roles to, and train the incident response team(s). E. Exercise, Maintain, and Review the Plan—Exercising the plan involves testing the required technology and implementing all aspects of the recovery process. Maintenance and review require updating the plan as business processes and risks evolve. F. Embed the BCM in the Organization's Culture—Design and deliver education, training, and awareness materials that enable effective responses to identified risks. Manage change processes to ensure that the BCM becomes a part of the organization's culture. G. BCP risks can be prioritized by the importance of the function to the organization's mission; risk prioritization would be part of the second phase of BCM. Disaster Recovery Plans (DRPs) A. DRPs enable organizations to recover from disasters and continue operations. They are integral to an organization's system of internal control. DRP processes include maintaining program and data files and enabling transaction processing facilities. In addition to backup data files, DRPs must identify mission-critical tasks and ensure that processing for these tasks can continue with virtually no interruptions, at an affordable cost. 1. Examples of natural disasters include fires, floods, earthquakes, tornadoes, ice storms, and windstorms. Examples of human-induced disasters include terrorist attacks, software failures (e.g., American Airlines' recent flight control system failure), power plant failures and explosions (e.g., Chernobyl), chemical spills, gas leaks, and fires. B. Two Important Goals of Disaster Recovery Planning 1.
The recovery point objective (RPO) defines the acceptable amount of data lost in an incident. Typically, it is stated in hours, and defines the regularity of backups. For example, one organization might set an RPO of one minute, meaning that backups would occur every minute, and up to one minute of data might need to be re-entered into the system. Another organization, or the same organization in relation to a less mission-critical system, might set an RPO of six hours. 2. The recovery time objective (RTO) defines the acceptable downtime for a system, or, less commonly, of an organization. It specifies the longest acceptable time for a system to be inoperable. C. Disaster recovery plans are classified by the types of backup facilities, the time required to resume processing, and the organizational relations of the site: 1. Cold site ("empty shell")—An off-site location that has all the electrical connections and other physical requirements for data processing, but does not have the actual equipment or files. Cold sites often require one to three days to be made operational. A cold site is the least expensive type of alternative processing facility available to the organization. If on a mobile unit (e.g., a truck bed), it is called a mobile cold site. 2. Warm site—A location where the business can relocate to after the disaster that is already stocked with computer hardware similar to that of the original site, but does not contain backed-up copies of data and information. If on a mobile unit, it is called a mobile warm site. 3. Hot site a. An off-site location completely equipped to quickly resume data processing. b. All equipment plus backup copies of essential data files and programs are often at the site. c. Enables resumed operations with minimal disruption, typically within a few hours. d. More expensive than warm and cold sites. 4. Mirrored site—Fully redundant, fully staffed, and fully equipped site with real-time data replication of mission-critical systems. Such sites are expensive and used for mission-critical systems (e.g., credit card processing at VISA and MasterCard). 5. Reciprocal agreement—An agreement between two or more organizations (with compatible computer facilities) to aid each other with data processing needs in the event of a disaster. Also called a "mutual aid pact." May be cold, warm, or hot. 6. Internal site—Large organizations (e.g., Walmart) with multiple data processing centers often rely upon their own sites for backup in the event of a disaster.

annual interest rate of forgoing the cash discount

If a firm purchases raw materials from its supplier on a 2/10, net 40, cash discount basis, the equivalent annual interest rate (using a 360-day year) of forgoing the cash discount and making payment on the 40th day is: The annual interest rate of forgoing the cash discount is calculated as: [Discount %/(1.00 - Discount %)] × [360/(40 - 10)] For the facts given, the calculation would be: [.02/(1.00 - .02)] × [360/30] = .02041 × 12 = .2449 (or 24.49%)
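A minimal sketch of this calculation:

def cost_of_forgoing_discount(discount, discount_days, net_days, days_in_year=360):
    # [d / (1 - d)] x [days in year / (net period - discount period)]
    return (discount / (1 - discount)) * (days_in_year / (net_days - discount_days))

print(cost_of_forgoing_discount(0.02, 10, 40))   # ~0.2449, about 24.49% per year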

Introduction to Economic Concepts

In most cases in economics, the independent variable is shown on the vertical axis (called the Y-axis) and the dependent variable is shown on the horizontal axis (called the X-axis). Microeconomics—Studies the economic activities of distinct decision-making entities, including individuals, households, and business firms. Major areas of interest include demand and supply, prices and outputs, and the effects of external forces on the economic activities of these individual decision makers. Macroeconomics—Studies the economic activities and outcomes of a group of entities taken together, typically of an entire nation or major sectors of a national economy. Major areas of interest include aggregate output, aggregate demand and supply, price and employment levels, national income, governmental policies and regulation, and international implications. International economics—Studies economic activities that occur between nations and outcomes that result from these activities. Major areas of concern include reasons for international economic activity, socioeconomic issues, balance of payments accounts, currency exchange rates, international transfer pricing, and globalization, which is a consequence of widespread international economic activity. The basic equation for a straight-line plot on a graph can be expressed as: U = y + q(x) Where: U = the unknown value of the variable being determined and/or plotted. y = the value of the plotted line where it crosses the Y-axis; called the "intercept" (or "Y-intercept"). In economic graphs, this is commonly the value where X = 0. q = the amount by which the value of U changes as each unit of the variable x changes; this expresses the slope of the line being plotted. x = the value (number of units) of the variable x. Note: Any letters can be used to represent the two variables being plotted, and the expression can be rearranged. For example, the same formula could be, and sometimes is, written as: Y = mx + b Where: Y = unknown value of Y. m = slope of the plotted line. x = value of the variable x. b = Y-intercept. Example: TC = FC + VC(Units) Where: TC = total cost. FC = fixed cost (incurred independent of the level of production; the Y-intercept). VC = variable cost per unit produced; the change in total cost as each additional unit is produced (also, the slope of the total cost line). Units = number of units produced. If the values of FC and VC are known, TC can be computed and plotted for any level of production (units). Command Economic System—A system in which the government largely determines the production, distribution, and consumption of goods and services. Communism and socialism are prime examples of command economic systems. Market (Free-Enterprise) Economic System—A system in which individuals, businesses, and other distinct entities determine production, distribution, and consumption in an open (free) market. Capitalism is the prime example of a market economic system. The relationship between variables may be positive, negative, or neutral: 1. Positive—The dependent variable moves in the same direction as the independent variable. 2. Negative—The dependent variable moves in the opposite direction from the independent variable. 3. Neutral—One variable does not change as the other variable changes. (This indicates that the variables are not interdependent.) 4.
When the independent variable is time, the vertical axis shows the behavior of the dependent variable over time, and the graph is called a "time series graph." The vertical axis of an economic graph is not referred to as the X-axis, but rather as the Y-axis. The horizontal axis is referred to as the X-axis. Example: if there is an intercept and a price coefficient, include them in the formula: Quantity demanded = a + b(x) = intercept + (price coefficient × price), where x is the price and the price coefficient (b) is typically negative.
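A minimal sketch of the two straight-line relationships above; all coefficient values are hypothetical.

def total_cost(units, fixed_cost=1_000, variable_cost_per_unit=5):
    return fixed_cost + variable_cost_per_unit * units    # TC = FC + VC(Units)

def quantity_demanded(price, intercept=500, price_coefficient=-20):
    return intercept + price_coefficient * price          # Qd = a + b(price)

print(total_cost(100))         # 1,500
print(quantity_demanded(10))   # 300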

Pricing Strategies

Low cost and product differentiation strategy

As you requested, this memorandum discusses some alternative strategies that may be implemented by Urton Corp. Historically, Urton has implemented a product differentiation strategy. This strategy involves providing products that have superior physical characteristics, perceived differences, or support-service differences, which allow the products to command higher prices in the market. When effective, this strategy allows the company to compete effectively with companies that sell lower-priced products. For a product differentiation strategy to be successful, the company must continue to invest in the differentiating factor. Since Urton is no longer competing effectively using a product differentiation strategy, management should consider whether additional investment in product innovation, support services, or brand identity might allow the company to revive the strategy. On the other hand, if management believes that pursuing a differentiation strategy is no longer feasible, consideration should be given to a cost leadership strategy. Pursuing a cost leadership strategy would involve cutting costs and improving efficiency to allow the company to offer products at lower prices. To be competitive, it is essential that the company select a strategy and begin to align management's decisions with that strategy. If you need any additional information, please contact me.

Macro-Environmental Analysis

Macro-Environmental Analysis—Within the context of an economic system and economic market structure, an entity must carry out an assessment of the characteristics of the macro-environment in which it operates (or may operate). Such an analysis is essential to understanding the nature of the operating environment and making entity-wide decisions related to that operating environment. PEST analysis (or a variation of PEST) provides a framework for carrying out such an external macro-environmental analysis. PEST Analysis—PEST analysis is an assessment of the Political, Economic, Social, and Technological elements of a macro-environment. Its purpose is to provide an understanding of those elements of an environment, typically a country or region, in which a firm operates or is considering operating. A. Analysis Factors—PEST considers each of the following kinds of factors to develop a "picture" of an operating environment: 1. Political factors—Concerned with the nature of the political environment, and the ways and the extent to which a government intervenes in its economy, including consideration of such things as: a. Political stability b. Labor laws c. Environmental laws d. Tax policy e. Trade restrictions, tariffs, and import quotas 2. Economic factors—Concerned with the economic characteristics of the operating environment, including such things as: a. Economic growth rate b. Interest rates c. Inflation rate d. Currency exchange rates 3. Social factors—Concerned with the culture and values of the operating environment, including such considerations as: a. Population growth rate b. Age distribution c. Educational attainment and career attitudes d. Emphasis on health and safety 4. Technology factors—Concerned with the nature and level of technology in the operating environment, including such considerations as: a. Level of research and development activity b. State of automation capability c. Level of technological "savvy" d. Rate of technological change Variations of the basic PEST model consider other macro-environmental factors: 1. PESTEL adds two additional elements: a. E = Environmental factors, which include such things as: i. Weather ii. Climate and climate change iii. Water and air quality b. L = Legal factors, which include such things as: i. Discrimination law ii. Consumer law iii. Employment law iv. Antitrust law v. Health and safety law 2. STEER, another variation, identifies the same kinds of factors as other macro-environmental models: Socio-cultural, Technological, Economic, Ecological, and Regulatory factors. D. Importance of Macro-Environmental Factors 1. The importance of the factors assessed in PEST analysis (or a variation thereof) will be unique to each analysis. 2. PEST, or a comparable analysis, is particularly important in considering the establishment of operations in a new foreign location. 3. The outcome of a PEST-type analysis can provide inputs for SWOT analysis (considered in the "Entity/Environment Relationship Analysis" lesson).

EXAM NOTE

Make sure you enter the numbers and accounts in the TBS in order from largest to smallest. For example, given: 1. Miscellaneous income $5,000 2. Supplies $1,000 3. Salary $3,000. In order: 1. Miscellaneous income ($5,000) 2. Salary ($3,000) 3. Supplies ($1,000)

Summary of Market Structure

Perfect competition—While a perfectly competitive segment of the U.S. economy may not exist in today's sociopolitical environment, the framework of a perfectly competitive market provides a useful model for understanding fundamental economic concepts and for evaluating other market structures. Monopoly—Exists where there is a single provider of a good or service for which there are no close substitutes. Monopolistic firms do exist in the U.S. economy. Historically, public utilities have been permitted to operate as monopolies with the justification that market demand can be fully satisfied at a lower cost by one firm than by two or more firms. To limit the economic benefits of such monopolies, governments generally impose regulations, which affect pricing, output, and/or profits. Monopolies also can exist as a result of exclusive ownership of raw materials or patent rights. In most cases, however, exclusive-ownership monopolies are of short duration as a result of the development of close substitutes, the expiration of rights, or government regulation. Monopolistic competition—Common in the U.S. economy, especially in general retailing, where there are many firms selling similar (but not identical) goods and services. Because their products are similar, monopolistically competitive firms engage in extensive non-price competition, including advertising, promotion, and customer service initiatives, all of which are common in the contemporary U.S. economy. Oligopoly—Exists in markets where there are few providers of a good or service. Such markets exist for a number of industries in the U.S. The markets for many metals (steel, aluminum, copper, etc.) are oligopolistic. So also are the markets for such diverse products as automobiles, cigarettes, and oil. Firms in oligopolistic markets tend to avoid price competition for fear of creating a price war, but they do rely heavily on non-price competition. A natural monopoly results from conditions in which there are increasing returns to scale, such that a single firm can produce at a lower cost than two or more firms. Typically, fixed costs are extremely high, making it inefficient for a second firm to enter the market.

Prime cost

Prime cost is the sum of direct materials and direct manufacturing labor. Direct manufacturing labor is $60,000. Direct materials used must be computed. The solutions approach is to enter the information given into the materials T-account and solve for the unknown:

Direct Materials Control
3/1/11 bal.    36,000 | Materials used    ?
Purchases      84,000 |
3/30/12 bal.   30,000 |

Materials used = $36,000 + $84,000 − $30,000 = $90,000. Thus, prime cost incurred was $150,000 ($90,000 + $60,000).
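The same T-account algebra can be sketched in a few lines of Python (figures from the example above):

beginning_materials = 36_000   # 3/1/11 balance
purchases = 84_000
ending_materials = 30_000      # ending balance
materials_used = beginning_materials + purchases - ending_materials  # 90,000
direct_labor = 60_000
prime_cost = materials_used + direct_labor                           # 150,000
print(materials_used, prime_cost)  # 90000 150000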

required rate of return

Required rate = Risk-free rate + Beta(Expected market rate − Risk-free rate)
Required rate = .02 + 1.4(.09 − .02)
Required rate = .02 + 1.4(.07)
Required rate = .02 + .098
Required rate = .118, or 11.8%
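The same CAPM computation in Python (figures from above):

risk_free = 0.02
beta = 1.4
expected_market_rate = 0.09
# CAPM: required rate = risk-free rate + beta x market risk premium.
required_rate = risk_free + beta * (expected_market_rate - risk_free)
print(round(required_rate, 3))  # 0.118, i.e., 11.8%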

Weighted average (Process Costing)

Solving for the EU of production depends on which method is being used: weighted average or first in, first out (FIFO). A. Weighted average uses only two categories: goods completed and ending inventory. The format differs in that weighted average combines prior-period work (i.e., the work already in beginning inventory [BI]) with current-period work (including units started and finished this period) to determine "goods completed." You should notice several things about the format presented just above. [Image of the EU calculation format omitted.]

a. Physical units are the same (in total) regardless of the EU method used.
b. Exam questions stating the percentage of completion for the ending inventory EU calculation are likely to communicate it in terms of how much dollar-equivalent work was completed in the current period. However, the percentage of completion for beginning inventory is typically stated in terms of how much dollar-equivalent work was completed in the prior period. This is often confusing to candidates. To make it easier to understand, remember that the FIFO method calculates current-period information separately from prior-period information. As such, FIFO wants to know the EU of work done on the beginning inventory this current period. That is why you are required to use the complement (100% - 10%) in the calculation of beginning inventory EU.
c. Regarding the percentage-of-completion multiplier: For the weighted average method, the goods completed amount will always be 100% complete. For the FIFO method, the units started and finished will always be 100% complete. Ending inventory will be the same equivalent-units amount for both methods. Physical units will, of course, be the same regardless of the method used to calculate equivalent units.
d. The ultimate goal in calculating equivalent units is to segregate the WIP inventory account between (1) work finished and transferred out and (2) ending WIP inventory.
e. A T-account can be used to check your work and to help display how these two pieces of WIP are relevant. [T-account image omitted.] We know how the physical units are divided from the 50,000 units available, and we use the EU concept to determine (1) the cost of units transferred out and (2) the value of the ending WIP inventory.

Step 2—Determine the cost per equivalent unit. First determine the total costs to account for; the following costs are usually accumulated during the period:
a. Beginning WIP costs—The total costs of production (material, labor, and overhead) that were allocated to the production units during previous periods.
b. Transferred-in costs—The costs of production (material, labor, and overhead) from previous departments that flow with the production items from department to department.
c. Current-period costs—The transferred-in costs and costs of production (material, labor, and OH) added to WIP during the current period.
d. Total costs to account for—The total of the beginning WIP costs, the transferred-in costs, and the current-period costs. Total costs must be allocated to ending WIP inventory and to FG inventory at the end of the period.

Weighted average cost flow—Under the weighted-average cost flow assumption, the beginning WIP inventory costs are added to the current-period costs (including transferred-in costs, if any) before dividing by the EU figure. This process averages the two cost pools together.
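A minimal sketch of the weighted-average EU computation in Python, using hypothetical figures (40,000 units completed, 10,000 units in ending WIP that are 60% complete, $23,000 of beginning WIP cost, $69,000 of current-period cost; none of these amounts come from the lesson):

completed = 40_000
ending_wip_units = 10_000
pct_complete = 0.60
# Weighted average: goods completed (always 100%) plus ending WIP x % complete.
eu_weighted_avg = completed + ending_wip_units * pct_complete   # 46,000 EU

# Weighted average pools beginning WIP cost with current-period cost
# before dividing by the EU figure.
beginning_wip_cost = 23_000
current_period_cost = 69_000
cost_per_eu = (beginning_wip_cost + current_period_cost) / eu_weighted_avg
print(eu_weighted_avg, cost_per_eu)  # 46000.0 2.0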

replacing an old machine with a new machine

The carrying amount of the old machine is a sunk cost; the cost of that machine has already been incurred and is not economically relevant to Buff's decision whether or not to replace the old machine with a new one. Buff's decision should not be affected by the historical (sunk) cost of the old machine. The disposal value of the new machine (appropriately discounted) is relevant to Buff's decision whether or not to replace the old machine with the new machine. Specifically, the disposal value of the new machine (at the end of its useful life) is an element that enters into the determination of the net cost of the new machine, and it is an element that will not occur if the new machine is not acquired.

Quality Management

The concept of quality as used in total quality management (TQM) often differs from the traditional concept of quality. "Quality" is most commonly used to refer to "grade." For example, a platinum bracelet is usually considered to be of higher quality than a silver bracelet, and a Mercedes is usually considered to be a higher-quality vehicle than a Ford. Quality of Conformance: Refers to the degree to which a product meets its design specifications and/or customer expectations. TQM relies on quality of conformance, as evidenced by the fact that the TQM philosophy includes measuring results frequently. That is, if results are measured frequently, conformance to expectations can be evaluated, so quality of conformance is more certain. On the other hand, conformance to a product design that the customer does not want is destined to fail. Therefore, quality of design is also important. Thus, quality addresses two kinds of failure in TQM: 1. Failure to execute the product design as specified. 2. Failure to design the product appropriately; quality of design is defined as meeting or exceeding the needs and wants of customers. Exam Tip: There is likely to be at least one question about TQM in Operations Management, and it is likely to ask the candidate to match a cost of quality (e.g., cost of using better-quality materials, cost of scrap, cost of sales returns, etc.) with its appropriate category (i.e., prevention costs, appraisal costs, internal failure costs, or external failure costs). Cost of Quality A. The costs incurred by an organization to ensure that its products and/or services have a high quality of conformance are known as costs of quality. B. Cost of quality is based on the philosophy that failures have an underlying cause, prevention is cheaper than failures, and cost of quality performance can be measured. Cost of quality consists of four components—(1) prevention cost, (2) appraisal cost, (3) internal failure cost, and (4) external failure cost. Each of these categories is explained below. 1. Prevention cost—The cost of any quality activity designed to help do the job right the first time. 2. Appraisal cost—The cost of quality control, including testing and inspection. It involves any activity designed to appraise, test, or check for defective products. 3. Internal failure cost—The costs incurred when substandard products are produced but discovered before shipment to the customer. 4. External failure cost—The costs incurred for products that do not meet the requirements of the customer and have reached the customer. Total Cost of Quality and Quality Cost Behavior A. An organization's total cost of quality is the sum of its prevention, appraisal, internal failure, and external failure costs. There is an inverse trade-off between the cost of failure (internal or external) and the costs of prevention and appraisal in determining the total quality of conformance: 1. When the overall quality of conformance is low, more of the total cost of quality is typically related to cost of failure.
For example, a manufacturer substitutes a lower-quality power cord connection on one of its products with the result that, after a short period of use, the power cords break, rendering the product unusable. This problem causes the quality of conformance for the product to be lower and the cost of external failure to be higher. 2. Increases in the cost of prevention and the cost of appraisal are usually accompanied by decreases in the cost of failure and increases in the quality of conformance. Continuing with the power cord example, if the manufacturer increased the amount of testing completed before the product was shipped, more defective products would be discovered. This would increase the cost of internal failure but decrease the cost of external failure. Since the cost of an external failure is normally greater per unit than the cost of an internal failure (i.e., it is less expensive to identify a faulty product before it has left the factory than to replace or refund a faulty product in the hands of a consumer or distributor), the overall cost of failure decreases. Six-Sigma Quality—What Is Six-Sigma? Six Sigma: A statistical measure expressing how close a product comes to its quality goal. One-sigma means 68% of products are acceptable; three-sigma means 99.7% of products are acceptable. Six-sigma is 99.99966% perfect: 3.4 defects per million parts. A. Six-Sigma Black Belts must attend a minimum of four months of training in statistical and other quality improvement methods. Six-Sigma Black Belts are experts in the Six-Sigma methodology. They learn and demonstrate proficiency in the DMAIC methodology and statistical process control (SPC) techniques within that methodology. DMAIC is the structured methodology for process improvement within the Six-Sigma framework. It stands for Define, Measure, Analyze, Improve, and Control. Quality Tools and Methods A. Total Quality Control (TQC)—The application of quality principles to all company activities. Also known as total quality management (TQM). B. Continuous Improvement and Kaizen—Continuous improvement (CI) seeks continual improvement of machinery, materials, labor, and production methods through various means, including suggestions and ideas from employees and customers. Kaizen: The Japanese art of continuous improvement. It is a philosophy of continuous improvement of working practices that underlies total quality management and just-in-time business techniques. PDCA (Plan-Do-Check-Act), also called the Deming Wheel: Focuses on the sequential and continual nature of the CI process. Cause-and-effect (fishbone or Ishikawa) diagrams: Identify the potential causes of defects. Four categories of potential causes of failure are: human factors, methods and design factors, machine-related factors, and materials and components factors. Cause-and-effect diagrams are used to systematically list the different causes that can be attributed to a problem (or an effect). Such diagrams can aid in identifying the reasons why a process goes out of control. Pareto chart: A bar graph that ranks causes of process variations by the degree of impact on quality. The Pareto chart is a specialized version of a histogram that ranks the categories in the chart from most frequent to least frequent. A related concept, the Pareto Principle, states that 80% of the problems come from 20% of the causes; that is, not all of the causes of a particular phenomenon occur with the same frequency or with the same impact.
Control charts: Statistical plots derived from measuring factory processes; they help detect "process drift," or deviation, before it generates defects. Control charts also help spot inherent variations in manufacturing processes that designers must account for to achieve "robust design." Robust design: A discipline for making designs "production-proof" by building in tolerances for manufacturing variables that are known to be unavoidable. Poka-yoke (mistake-proofing): Poka-yoke involves making the workplace mistake-proof. For example, a machine fitted with guide rails permits a part to be worked on in just one way.

bond value

The expected issue price of each bond is $114.68, or approximately $115.00. This value is determined as the present value of all future cash flows from the bond: the present value of the annual interest payments plus the present value of the face value of the bond to be received at maturity, both discounted at the market rate of interest. The calculation is:

PV of annual interest payments = ($100 × .08) × PV annuity (n = 10; 6%) = $8.00 × 7.360 = $58.88
PV of face (maturity) value = $100 × PV (n = 10; 6%) = $100 × 0.558 = $55.80
Total PV = $114.68, or $115 rounded
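The same bond pricing done with first-principles discounting in Python; the small difference from $114.68 arises because the table factors above are rounded to three decimals:

face = 100
coupon = 0.08 * face   # $8.00 annual interest
market_rate = 0.06
n = 10
# PV factors computed directly rather than read from a table.
pv_annuity_factor = sum(1 / (1 + market_rate) ** t for t in range(1, n + 1))
pv_single_factor = 1 / (1 + market_rate) ** n
price = coupon * pv_annuity_factor + face * pv_single_factor
print(round(price, 2))  # 114.72, about $115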

ERM Components, Principles, and Terms

The five components of the ERM framework are: 1. Governance and Culture—These are the cornerstones for the other ERM components. Governance is the allocation of roles, authorities, and responsibilities among stakeholders, the board, and management. An organization's culture is its core values, including how the organization understands and manages risk. 2. Strategy and Objective-Setting—ERM must integrate with strategic planning and objective setting. For example, an organization's risk appetite is partly a function of its strategy. Business objectives are the practical implementation of a chosen risk appetite and strategy. 3. Performance—The "Introduction to COSO Enterprise Risk Management: Strategy and Risk" lesson gives examples of performance measures. Risk identification and assessment is concerned with developing an organization's ability to achieve its strategy and business objectives, as measured by performance. 4. Review and Revision—Periodic and continuous review and revision of ERM processes enables an organization to increase the value of its ERM function. 5. Information, Communication, and Reporting—Communication is the continual, iterative process of obtaining and sharing information to facilitate and enhance ERM. This function includes reporting on the organization's risk, culture, and performance. Business context—The trends, events, relationships and other factors that may influence, clarify, or change an entity's current and future strategy and business objectives. Culture—An entity's core values, including its attitudes, behaviors, and understanding about risk. Governance—The allocation of roles, authorities, and responsibilities among stakeholders, the board, and management. Some aspects of governance fall outside ERM (e.g., board member recruiting and evaluation; developing the entity's mission, vision, and core values). Practices—The methods and approaches deployed within an entity relating to managing risk. Risk—The possibility that events will occur and affect the achievement of objectives. Risk appetite—The types and amount of risk, on a broad level, an organization is willing to accept in pursuit of value. Risk capacity—The maximum amount of risk that an entity can absorb in the pursuit of strategy and business objectives. Risk ceiling—The maximum level of risk established by an entity. Risk floor—The minimum level of risk established by an entity. Risk profile—A composite view of the risk assumed at a level of the entity, or aspect of the business that positions management to consider the types, severity, and interdependencies of risks, and how they may affect performance relative to the strategy and business objectives. Risk range—The acceptable level of risk (highest to lowest) established by the organization. Similar to tolerance, but tolerance is a measure of performance while risk range is a statement about (or measure of) risk. Severity—The impact of events or the time it would take to recover. Target risk—The desired level of risk set by an entity. Tolerance—The boundaries of acceptable variation in performance related to achieving business objectives. Like risk range but risk range is a statement (or measure) of risk while tolerance is a measure of performance. Uncertainty—The state of not knowing how or if potential events may manifest.

Direct relationship

There is a direct (positive) relationship between risk and return. Higher returns are associated with higher degrees of risk.

Net Realizable Value example

This answer is correct because net realizable value (NRV) is the predicted selling price in the ordinary course of business less reasonably predictable costs of completion and disposal. The joint cost of $54,000 is reduced by the NRV of the by-product ($4,000) to get the allocable joint cost ($50,000). The computation is:

Product   Sales value at split-off   Weighting                   Joint cost allocated
Kul       $40,000                    $40,000/$75,000 × $50,000   $26,667
Wu        $35,000                    $35,000/$75,000 × $50,000   $23,333
Total     $75,000                                                $50,000

Therefore, $26,667 of the joint cost should be allocated to product Kul.

Net Present Value: The net investment in working capital would be recognized as a cash outflow of $12,000, and the recovery of the working capital at the end of the project would be recognized at its present value of $6,809. That present value is computed as $12,000 × PV of $1 for 5 years at 12%, or $12,000 × 0.5674 = $6,809.
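The joint-cost allocation can be reproduced in Python (figures from the example above):

joint_cost = 54_000 - 4_000                    # less by-product NRV -> 50,000
sales_values = {"Kul": 40_000, "Wu": 35_000}   # sales values at split-off
total_sv = sum(sales_values.values())          # 75,000
allocated = {p: round(sv / total_sv * joint_cost)
             for p, sv in sales_values.items()}
print(allocated)  # {'Kul': 26667, 'Wu': 23333}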

FIFO vs LIFO

This memorandum explains the advantages and disadvantages of the FIFO and LIFO inventory methods, including the conditions under which LIFO might produce advantages over FIFO. In addition, this memorandum describes the reasons that I believe ABC, Inc. should change from the FIFO to the LIFO method of inventory. When the FIFO method is used, the cost of the earliest inventory acquired is the first inventory cost recognized as cost of goods sold each period. This method of recognizing inventory cost has the effect of matching the cost of the earliest acquired inventory with current revenue. During a period of rising prices, this has the disadvantage of matching the lower, earlier cost of inventory with a higher current sales price, resulting in a higher gross profit than would occur under the LIFO method. And, if the FIFO method is used for tax purposes, other things being equal, the higher gross profit would result in higher taxable income and higher income tax expense. The primary advantage of using the FIFO method is that, because the earliest cost incurred is expensed, the remaining cost is the most recent cost, resulting in the inventory value shown on the balance sheet more closely approximating current cost than would occur under the LIFO method. The LIFO method has the advantage of closely matching the most recent cost of inventory with the current revenue from the sale of inventory. As a consequence, in a period of rising prices, reported gross profit, taxable income, and income tax expense will be lower than under the FIFO method. Because the most recent inventory cost is expensed, the LIFO method has the disadvantage of reporting the remaining inventory asset at its earliest cost, which, during a period of rising prices, would be less than the current cost of that inventory. In addition, it should be noted that, if the LIFO method is used for tax purposes, IRS regulations require that it also be used for financial reporting purposes. Since ABC's inventory tends to follow a last in, first out physical flow, and in view of the tax advantage associated with the LIFO inventory method, I believe that LIFO is the better of the two methods for ABC, Inc. Therefore, I recommend we switch from using FIFO to using LIFO for inventory costing purposes. Such a change would result in our cost of goods sold being more closely aligned with the physical flow of our inventory. In addition, since prices tend to increase rather than decrease, the use of the LIFO method likely will result in lower taxable income and lower tax expense for reporting purposes. Finally, the use of LIFO rather than FIFO will result in both a higher cash flow, resulting from a lower tax payment, and a higher inventory turnover ratio, resulting from both a higher cost of goods sold and a lower reported inventory value. Please let me know if I can provide additional information about the two inventory costing methods and the basis for my recommendation that ABC, Inc. switch from FIFO to LIFO.
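A small sketch in Python of why rising prices make LIFO gross profit lower, using hypothetical inventory layers (not figures from the memo):

def cogs(layers, units, lifo=False):
    # layers: list of (units, unit_cost) tuples, oldest layer first.
    remaining, total = units, 0.0
    for qty, cost in (reversed(layers) if lifo else layers):
        take = min(qty, remaining)
        total += take * cost
        remaining -= take
        if remaining == 0:
            break
    return total

layers = [(100, 10.00), (100, 12.00)]  # unit costs rising over time
sales = 100 * 15.00
print(sales - cogs(layers, 100))              # FIFO gross profit: 500.0
print(sales - cogs(layers, 100, lifo=True))   # LIFO gross profit: 300.0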

Net present value (TBS)

Use the present value tables. Cash outflow: (240,000). Cash inflows: 80, 80, 80, 90, and 60 for years 1 through 5. Because the first three inflows are equal, discount them as a group: 80 × the PV of an ordinary annuity factor for 3 years. Then discount each remaining inflow as a single sum: 90 × the single-sum PV factor for year 4, and 60 × the single-sum PV factor for year 5. Finally: NPV = Present value of cash inflows − Outflow.
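A sketch of the same NPV mechanics in Python, assuming a hypothetical 10% discount rate (the actual TBS supplies the table factors) and treating the inflows of 80, 80, 80, 90, and 60 as thousands:

rate = 0.10                       # assumed; not given in the problem
outflow = 240_000
inflows = [80_000, 80_000, 80_000, 90_000, 60_000]   # years 1-5
# Discount each inflow back to time 0, then net against the outflow.
pv_inflows = sum(cf / (1 + rate) ** t
                 for t, cf in enumerate(inflows, start=1))
npv = pv_inflows - outflow
print(round(pv_inflows), round(npv))  # 297675 57675 at the assumed rate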

price-earnings ratio

Using the price-earnings ratio, the estimated value of your client's firm is $54,000. The value of a closely held (nonpublic) firm can be determined by multiplying its current earnings per share by the price-earnings (P/E) ratio of comparable publicly traded firms for which information is available. From the facts given, the P/E ratio for the comparable firms can be determined as follows:

P/E ratio = price $6.00 / earnings $0.50 = 12.0

By multiplying the client firm's earnings per share (EPS) by the P/E ratio for the comparable group of firms, we can determine a value for a share of the client firm's stock. That calculation is:

Per-share value = EPS $0.45 × comparable-firm P/E ratio 12.0 = $5.40

With 10,000 shares outstanding, the client firm's value would be:

10,000 shares × $5.40 per share = $54,000 total value

Using the capital asset pricing model (CAPM), the required rate of return that the firm must earn in order to be considered an acceptable investment is 13.8%, rounded to 14% for grading purposes. The formula for the capital asset pricing model (CAPM) is:

Required (minimum) rate = RFR + [B × (ERR − RFR)]

where RFR is the risk-free rate, B is beta, and ERR is the expected rate of return on the market.
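The P/E valuation in Python (figures from above):

comparable_price, comparable_eps = 6.00, 0.50
pe_ratio = comparable_price / comparable_eps       # 12.0
client_eps, shares_outstanding = 0.45, 10_000
per_share_value = client_eps * pe_ratio            # $5.40 per share
firm_value = per_share_value * shares_outstanding  # $54,000 total value
print(pe_ratio, round(per_share_value, 2), round(firm_value))  # 12.0 5.4 54000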

Factory overhead account

Divide estimated factory overhead by estimated direct labor hours to get the predetermined overhead rate (POR) per labor hour. Multiply the direct labor hours allowed (or worked) by the POR to get applied overhead. Then compare applied overhead with actual overhead: if actual exceeds applied, overhead is underapplied; if applied exceeds actual, overhead is overapplied. This answer is correct because the predetermined overhead rate is calculated as follows:

POR = $510,000 estimated factory overhead / 100,000 direct manuf. labor hours = $5.10
Applied overhead = 105,000 standard DMLH × $5.10 POR = $535,500
Actual overhead = $540,000
Underapplied overhead = $4,500

Therefore, actual overhead exceeds applied overhead by $4,500 ($540,000 − $535,500), so overhead is underapplied.
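The same under-/overapplied overhead computation in Python (figures from the problem):

estimated_oh = 510_000
estimated_hours = 100_000
por = estimated_oh / estimated_hours       # $5.10 POR per DMLH
applied_oh = round(105_000 * por)          # 535,500 applied
actual_oh = 540_000
underapplied = actual_oh - applied_oh      # positive -> underapplied
print(por, applied_oh, underapplied)       # 5.1 535500 4500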

Systematic error

An error that tends consistently to be either too high or too low (a bias in one direction), as opposed to a random error.

Return on assets

Net income / average total assets. This answer is correct: return on assets equals net income divided by average total assets. In this case, ROA = $80,000 / [($1,230,000 + $1,000,000) / 2] = $80,000 / $1,115,000 ≈ 7.2%.
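In Python (same figures):

net_income = 80_000
average_assets = (1_230_000 + 1_000_000) / 2   # 1,115,000
roa = net_income / average_assets
print(round(roa, 3))  # 0.072, i.e., 7.2%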

source code comparison program

Software that compares the current version of a program's source code with an authorized (control) copy; any differences should have been properly authorized and correctly incorporated.

Audit Committee Responsibilities (SOX)

• Section 301 requires that the audit committee be directly responsible for appointing, compensating, and overseeing the work of the external auditors.
- The audit committee must have a financial expert.
- This is a requirement under SOX for publicly traded companies.

Example: The Enron-era scandals convinced Congress that something was amiss in U.S. corporate governance. Therefore, many of the reforms contained in the Sarbanes-Oxley Act of 2002 (SOX) were aimed at improving that governance, despite the fact that this is an area of the law traditionally relegated to the states. One of the most significant of these changes is that Congress mandated a major alteration in the operation and responsibilities of the audit committees of public companies. SOX requires public companies to create audit committees composed entirely of independent directors who are not officers of the company and do not have other significant ties to the firm. In addition, the audit committee should contain at least one "financial expert" who has the experience and knowledge necessary to evaluate the financial statements. SOX makes four changes that render these audit committees powerful and influential. Almost all of the Enron-era scandals involved huge accounting frauds. There was evidence that CEOs and CFOs of major companies frequently pressured audit firms into accepting inappropriate treatments of various transactions and structures. Arguably, they had leverage to do this because officers hired, fired, and compensated the outside auditors. Therefore, SOX first requires that auditors be selected, evaluated, and terminated by the independent directors composing the audit committee. Second, because conflicts of interest arise when supposedly independent auditors also provide consulting services to their audit clients, SOX prohibits auditors from performing most consulting services for public company audit clients. Those services that are permitted, such as tax services, must be disclosed to, and preapproved by, the audit committee. Third, because several Enron-era scandals involved situations where boards of directors were not informed of disputes that company officers had with external auditors regarding the appropriateness of certain accounting treatments, SOX requires that the outside auditors report to the audit committee regarding (a) all critical accounting policies and practices to be used; (b) all alternative treatments discussed with management and their ramifications; and (c) other material communications between the auditor and management, such as a schedule of unadjusted differences. Fourth, remembering the role that Sherron Watkins played in disclosing the Enron frauds, Congress instructed in SOX that audit committees are to create procedures for receiving, retaining, and treating complaints about accounting procedures and internal controls, and for protecting the confidentiality of whistleblowers. In summary, Congress used SOX to refashion the audit committees of public companies and gave these committees substantial authority to guard the integrity of the financial statements issued by these companies. Thank you, Future CPA

