Information Technology


General Ledger, Reporting, and Financing Cycle

Core Activities
A. Receive Inputs
1. The other cycles generate transactions that flow to the general ledger.
2. Financing and investing—Receive information about financing and investing activities.
3. Budgeting—The budget department provides budget numbers, primarily for managerial accounting reports.
B. Create and Post Adjusting Entries
C. Reporting
1. Prepare financial statements (e.g., balance sheet, income statement, statement of cash flows).
2. Produce managerial reports.
Inherent Risks
A. Lease accounting, loan covenants, and related-party transactions can be problematic in relation to financing activities.
1. Enron created hundreds of dubious special-purpose entities to enable supposed joint ventures with other companies. Their primary purpose was to loot the organization and to hide money from creditors, investors, and auditors.
2. Do we have obligations (e.g., loans) to major shareholders or officers that are unrecorded?
B. Financial Statement Fraud
1. Unusual or unjustified manual adjusting entries may be an indicator of management attempting to "manage" earnings.
Relevant Stakeholders
A. Creditors, Shareholders, Regulators, External Auditors—For financial reporting
Important Forms (in Electronic Systems), Documents (in Paper Systems), and Files
A. See the table later in this lesson.
Accounting and Business Risks and Controls
A. Controls over This Cycle
B. Journal Entry Risks
C. Control and Subsidiary Account Risks
D. Financial Reporting
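The earnings-management red flag above (unusual manual adjusting entries) can be sketched as a simple automated screen. Everything here is a hypothetical illustration: the entry fields, the round-amount threshold, and the period-end rule are assumptions, not part of any system described in this lesson.

```python
from datetime import date

def flag_suspicious_entries(entries, round_to=10_000):
    """Return IDs of manual entries that look unusual: large round
    amounts posted in the last few days of a period (illustrative rule)."""
    flagged = []
    for e in entries:
        is_round = e["amount"] > 0 and e["amount"] % round_to == 0
        is_period_end = e["date"].day >= 28
        if e["manual"] and is_round and is_period_end:
            flagged.append(e["id"])
    return flagged

# Made-up journal entries for the example.
entries = [
    {"id": "JE-101", "date": date(2023, 3, 30), "amount": 500_000, "manual": True},
    {"id": "JE-102", "date": date(2023, 3, 15), "amount": 12_345, "manual": True},
    {"id": "JE-103", "date": date(2023, 3, 31), "amount": 250_000, "manual": False},
]
print(flag_suspicious_entries(entries))  # ['JE-101']
```

A real screen would combine more signals (preparer, account, approval status), but even this toy version shows why manual, round, period-end entries deserve review.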

Expenditure Cycle

Core Activities
A. Request and Authorize Purchase
1. Request goods and services.
2. Authorize purchase.
B. Acquire Goods
1. Purchase goods/services.
C. Take Custody and Pay for Goods
1. Receive goods and services.
2. Disburse cash.
D. Return Needed? If so, return the goods and document the return.
Inherent Risks
A. An organization buys something. Should it be capitalized or expensed?
1. In some cases, management has some discretion, under the standards, regarding when purchased items should be capitalized or expensed.
2. Abuses of this discretion: An important part of the WorldCom fraud was capitalizing expenses (i.e., inappropriately recording expenses as assets).
B. Do we have unrecorded liabilities?
1. For example, do we have obligations (e.g., loans) to major shareholders or officers that are unrecorded?
Relevant Stakeholders
A. Suppliers—Supply products or services.
1. Establishing that suppliers exist and that their bills are legitimate and accurate are important controls.
B. Inbound Logistics Providers—Deliver needed goods or services (e.g., FedEx, Air Express, rail companies).
Important Forms (in Electronic Systems), Documents (in Paper Systems), and Files
A. See the table at the end of this lesson.
Accounting and Business Risks and Controls
A. Request and Authorize Purchase
B. Acquire Goods
C. Take Custody
D. Pay for Goods
E. Return of Goods Needed?—If so, return goods and document the return.
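A classic control over the acquire/take custody/pay steps above is the three-way match of purchase order, receiving report, and vendor invoice, which helps establish that supplier bills are legitimate and accurate. The sketch below is illustrative: the field names and tolerances are assumptions, not a prescribed format.

```python
def three_way_match(po, receipt, invoice, qty_tol=0, price_tol=0.01):
    """Approve an invoice for payment only if quantities agree across
    PO, receiving report, and invoice, and prices agree between PO and
    invoice, within the given tolerances."""
    qty_ok = (abs(po["qty"] - receipt["qty"]) <= qty_tol
              and abs(po["qty"] - invoice["qty"]) <= qty_tol)
    price_ok = abs(po["unit_price"] - invoice["unit_price"]) <= price_tol
    return qty_ok and price_ok

po = {"qty": 100, "unit_price": 5.00}
receipt = {"qty": 100}
invoice = {"qty": 100, "unit_price": 5.00}
print(three_way_match(po, receipt, invoice))  # True: pay the invoice

short_receipt = {"qty": 90}  # fewer goods received than ordered/billed
print(three_way_match(po, short_receipt, invoice))  # False: investigate
```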

IT Policies

Important IT Policies
A. Values and Service Culture—What is expected of IT function personnel in their interactions with clients and others
B. Contractors, Employees, and Sourcing—Why, when, and how an entity selects IT human resources from among employees or outside contractors (i.e., an IT sourcing and outsourcing policy)
C. Electronic Communications Use—Policy related to employee use of the Internet, intranet, email, blogs, chat rooms, and telephones
D. Use and Connection Policy—States the entity's position on the use of personal devices and applications in the workplace and on connections to the entity's systems. May also specify (here or in a separate policy) allowable devices and their permitted uses on the entity's systems.
E. Procurement—Policy on the procurement processes for obtaining IT services
F. Quality—Statement of IT performance standards
G. Regulatory Compliance—Statement of regulatory requirements of IT systems (e.g., in banking or investment systems) or related to data privacy
H. Security—Policy related to guarding against physical or electronic threats to IT. May include disaster recovery preparation policies.
I. Service Management and Operational Service Problem Solving—Policies for ensuring the quality of live IT services
Policy Monitoring
A. Policies should be monitored for compliance and success.
B. Policy monitoring may involve the internal audit staff.
C. Monitoring may be continuous or periodic, depending on policy importance and the risks of noncompliance.
D. Analysis of IT help calls and operational reports can provide evidence of policy noncompliance.

Managing Cyber Risk: Part I—Applying COSO Principles to Cyber Risk

Principle 6—The organization specifies objectives with sufficient clarity to enable the identification and assessment of risks relating to objectives.
Principles 7 and 8—Risk Identification and Fraud
Principle 7—The organization identifies risks to the achievement of its objectives across the entity and analyzes risks in order to determine how the risks should be managed.
Principle 8—The organization considers the potential for fraud in assessing risks to the achievement of objectives.
Principle 9—The organization identifies and assesses changes that could significantly impact the system of internal control.
Control Activities to Address Cyber Risks
Principle 10—The organization selects and develops control activities that contribute to the mitigation of risks to the achievement of objectives to acceptable levels.
Principle 11—The organization selects and develops general control activities over technology to support the achievement of objectives.
Principle 12—The organization deploys control activities through policies that establish what is expected and procedures that put policies into action.
A Defense-in-Depth Approach—While cyber risks cannot be avoided, they can be managed through careful design and implementation of appropriate controls. Because cyber breaches are inevitable, control structures should be deployed in a layered approach that prevents intruders from freely roaming the information systems after the initial layers of defense are compromised.
Communicating about Cyber Risks and Controls
Principle 13—The organization obtains or generates and uses relevant, quality information to support the functioning of internal control.
Principle 14—The organization internally communicates information, including objectives and responsibilities for internal control, necessary to support the functioning of internal control.
Principle 15—The organization communicates with external parties regarding matters affecting the functioning of internal control.

The Accounting System Cycle: Introduction

The Accounting Cycle as a Set of Accounting Procedures
A. This is the simplest way to characterize the accounting cycle, and the view students usually learn in their first accounting class. In this view, the accounting cycle is the competent, timely execution of the following procedures:
1. Analyze transactions and business documents.
2. Journalize transactions.
3. Post journal entries to accounts.
4. Determine account balances and prepare a trial balance.
5. Journalize and post adjusting entries.
6. Prepare financial statements and reports.
7. Journalize and post closing entries.
8. Balance the accounts and prepare a post-closing trial balance.
9. Repeat.
B. Another, even simpler way to characterize the activities of an accounting system is as a set of data processing steps:
1. Data capture (input) (steps 1 and 2 above and the journalizing parts of steps 5 and 7)
2. Data storage and data processing (steps 3-5, 7, and 8 above)
3. Information output/reporting (step 6)
The Accounting Cycle as a Set of Transaction Cycles
1. Revenue cycle—Interactions with customers (give goods; get cash)
2. Expenditure cycle—Interactions with suppliers (give cash; get goods)
3. Production cycle—Give labor and raw materials; get finished product.
4. Human resources/payroll cycle—Hire, use, and develop labor; give cash and benefits.
5. General ledger, reporting, and financing cycle—Give cash; get cash; report financial outcomes.
Risks that Are Common Across Cycles—Some risks are general to accounting systems. These include:
A. Loss, Alteration, or Unauthorized Disclosure of Data
1. This risk and related controls are discussed in several other BEC lessons (e.g., file backups, file labels, access controls, modification of default settings on ERP systems to increase security, encryption, and secure transmissions).
B. The accounting system is not functioning as required by law, regulation, or organizational policy.
1. This risk and related controls are discussed in the "Internal Control Monitoring Purpose and Terminology" lesson.
Control Goals that Are Common Across Cycles—These goals are also discussed in lessons in the Auditing and Attestation (AUD) section of CPAExcel®.
A. All transactions are properly authorized.
B. All recorded transactions are valid.
C. All valid and authorized transactions are recorded.
D. All transactions are recorded accurately.
E. Assets are safeguarded from loss or theft.
F. Business activities are performed efficiently and effectively.
G. The organization complies with all applicable laws and regulations.
H. All financial disclosures are full and fair.
I. Accurate data is available when needed.
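Steps 2 through 4 of the procedural view above (journalize, post, prepare a trial balance) can be sketched in a few lines. The accounts and amounts below are made up for the example; the sign convention (debits positive, credits negative) is an illustrative simplification.

```python
from collections import defaultdict

# Step 2: journalize. Each entry is (debit account, credit account, amount).
journal = [
    ("Cash", "Sales Revenue", 1_000),        # cash sale
    ("Inventory", "Accounts Payable", 400),  # purchase on credit
]

# Step 3: post journal entries to ledger accounts.
ledger = defaultdict(int)  # positive = debit balance, negative = credit
for debit_acct, credit_acct, amount in journal:
    ledger[debit_acct] += amount   # post the debit side
    ledger[credit_acct] -= amount  # post the credit side

# Step 4: a trial balance checks that total debits equal total credits.
assert sum(ledger.values()) == 0
for account, balance in sorted(ledger.items()):
    print(account, balance)
```

The same journalize-then-post loop handles adjusting and closing entries (steps 5 and 7); only the source of the entries changes.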

Statistics in Business Analytics

Measures of Central Tendency
1. Mean—The arithmetic average of a variable. A good measure of central tendency for a normally distributed variable.
2. Mode—The most frequent value in a distribution. May not exist, and a distribution may have multiple modes. Often easily seen (and useful) in a histogram.
3. Median—The middle value in a distribution. A good measure of central tendency (or mass) in skewed distributions.
Measures of Dispersion
1. Standard deviation (SD, σ)—A standardized measure of dispersion (variation) in a variable. In normally distributed data, about 68% of observations are within 1 standard deviation of the mean and about 95% are within 2 standard deviations of the mean.
2. Outlier—An unusual and often influential observation. Can contribute to nonnormality (i.e., skewness) in a variable.
Data Distribution Displays
1. Histogram—A graph of the distribution of a variable, grouped into bins (groups).
2. Box plot—A plot of the distribution of a variable that indicates the median and quartiles of the distribution.
Quantiles—Dividing a Distribution into Segments
1. Quintile—Dividing a distribution into fifths.
2. Decile—Dividing a distribution into tenths.
3. Quartile—Dividing a distribution into quarters.
4. Interquartile range (IQR)—The middle 50% of the distribution: the 3rd quartile minus the 1st quartile.
Frequency Distribution—How a Variable Is Distributed
1. Normal distribution—Symmetrical, bell-shaped distribution in which the mean and median are usually close to one another.
2. Left-skewed distribution—More and/or bigger values on the right (higher) side of the distribution. The median is greater than the mean.
3. Right-skewed distribution—More and/or bigger values on the left (lower) side of the distribution. The median is less than the mean.
4. Negative skewness—A left-skewed distribution.
5. Positive skewness—A right-skewed distribution.
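These measures can be computed with Python's standard statistics module. The sample below is made up and deliberately right-skewed, so it also demonstrates the median-versus-mean relationship described above.

```python
import statistics

# Right-skewed sample: one large value (20) pulls the mean above the median.
data = [1, 2, 2, 3, 3, 3, 4, 5, 20]

mean = statistics.mean(data)      # arithmetic average
median = statistics.median(data)  # middle value
mode = statistics.mode(data)      # most frequent value
sd = statistics.stdev(data)       # sample standard deviation

print(mean > median)  # True: median < mean indicates right (positive) skew

# Quartiles and the interquartile range (the middle 50% of the distribution).
q1, q2, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1
print(mode, median, iqr)
```

Note that on this sample the mode, the median, and Q2 all equal 3, while the mean is about 4.8; the outlier drives the gap.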

Emerging Technologies in AIS

COSO's Principle #11 (out of 17): "The organization selects and develops general control activities over technology to support the achievement of objectives."
Emerging Payment Processing Systems
A. What Are Payment Systems?—Credit cards, Apple Pay, Samsung Pay, Walmart Pay, PayPal, Venmo, Amazon "One-Click" payments
Internet of Things (IoT)
A. The Internet of Things (IoT) is the widespread connection of electronic devices to the Internet. Recent news reports suggest that hackers are increasingly targeting (i.e., hacking) IoT applications.
1. The result is a real-time data stream that enables the monitoring and control of any electronic device. Gartner, an IT research and consulting company, estimates that by 2020 there will be over 26 billion devices connected to the Internet.
2. The IoT is directed at physical processes (e.g., consumer applications, manufacturing, mining, energy production, and logistics and distribution).
a. Receive a real-time data feed from any electronic device.
Automated IT Security: Authentication
A. The Goal for User Authentication—Fully integrated, multifactor, automated security systems
1. The adoption of the IoT will lead to increasing use of automated security systems.
2. Authentication in these systems will use multiple identifiers. Identifiers may include:
a. Biometrics (e.g., fingerprints, iris scans, body scans, facial recognition)
b. Advanced analytics that identify system use patterns (e.g., typical login times, pressure and force on keyboard and mouse)
c. Objects (e.g., cell phones, key cards)
d. Knowledge (challenge questions)
e. Contextual patterns of use that combine the above identifiers using artificial intelligence (AI)
Head-Mounted Displays (HMDs)
A. What Is an HMD?
1. HMDs are display devices that are worn on the head (like glasses) or as part of a helmet. They include one or two small optical displays.
2. HMDs will be fully or partially "immersive." That is, they will provide augmented (partially immersive) or virtual (fully immersive) reality experiences that immerse the user in a digitized experience (e.g., Google Glass).
3. Risks—Digital distraction and information overload
Gamification
A. What is it?
1. Gamification is the application of video gaming principles in simulations, or the use of badges and points as motivators, to engage users in learning content that is essential for their jobs.
B. What is its value?
1. Gamification "makes learning fun again." It uses psychological principles based in graphics, design, images, motivation, and narrative (stories) to simulate actual scenarios that are relevant to users' jobs.
2. Evidence suggests that users learn and retain more from gamification than from traditional learning approaches.

Information Systems Hardware

A. Central Processing Unit (CPU)—The CPU is the control center of the computer system. The CPU has three principal components:
1. Control unit—Interprets program instructions.
2. Arithmetic logic unit (ALU)—Performs arithmetic calculations.
3. Primary storage (main memory)—Stores programs and data while they are in use. It is divided into two main parts:
a. Random access memory (RAM)—Stores data temporarily while it is in process.
b. Read-only memory (ROM)—A semi-permanent data store for instructions that are closely linked to hardware (e.g., "firmware"). Includes portions of the operating system. Hard to change.
B. Secondary Storage Devices—Provide permanent storage for programs and data. Depending on their configuration, these devices can be either online (the data on the device is available for immediate access by the CPU) or offline (the device is stored in an area where the data is not accessible to the CPU).
1. Magnetic disks—Random access devices: data can be stored on and retrieved from the disk in any order. A common form of secondary storage. Also called "hard disks" or "hard disk drives" (HDDs).
2. Magnetic tape—A sequential access device: data is stored in order of the primary record key (i.e., document number, customer number, inventory number, etc.) and is retrieved sequentially. Although once used for transaction processing, tape is now mostly used in data archives.
3. Optical disks—Use laser technology to "burn" data on the disk (although some rewritable disks use magnetic technology to record data). In general, read-only and write-once optical disks are more stable storage media than magnetic disks. Optical disks, like magnetic disks, are random access devices. There are several different types of optical disks.
4. RAID (redundant array of independent [previously, inexpensive] disks)—A technology for storing redundant data on multiple magnetic disks. Its purpose is to reduce the likelihood of data loss.
5. Solid state drives (SSDs)—Use microchips to store data and require no moving parts for read/write operations. SSDs are faster and more expensive per gigabyte than CDs, DVDs, and HDDs. SSDs are increasingly used instead of HDDs in microcomputers, but cost and limited capacity have constrained their adoption. SSDs that are pluggable are often called "thumb drives," "flash drives," or "USB drives" (because they use a USB interface to plug into other devices).
6. Cloud-based storage—Also called "Storage as a Service" (STaaS). This type of storage is hosted offsite, typically by third parties, and is accessed via the Internet. See the CPAExcel® lesson "Introduction to Enterprise-Wide and Cloud-Based Systems" on this topic.
C. Peripherals—Devices that transfer data to or from the CPU but do not take part in processing data. Peripherals are commonly known as input and output devices (I/O devices).
1. Input devices—Instruct the CPU and supply data to be processed. Examples include keyboards, mice, trackballs, touch-screen technology, and point-of-sale (POS) scanners.
2. Output devices—Transfer data from the processing unit to other formats, for example: printers, plotters, monitors, flat panel displays, and cathode ray tube (CRT) displays.
D. Common Types of Computers—Computers are often categorized according to their processing capacity and use.
1. Supercomputers—Computers at the leading edge of processing capacity. Their definition is constantly changing, as the supercomputer of today often becomes the personal computer of tomorrow. Generally used for calculation-intensive scientific applications, for example, weather forecasting and climate research.
2. Mainframe computers—Powerful computers used by commercial organizations to support mission-critical tasks such as sales and order processing, inventory management, and e-commerce applications. Unlike supercomputers, which tend to support processor-intensive activities (i.e., a small number of highly complex calculations), mainframe computers tend to be input/output (I/O) intensive (i.e., a very large number of simple transactions). They frequently support thousands of simultaneous users.
3. Servers (high-end and mid-range)—Computers that are specifically constructed to "serve" thousands of users on a client/server computer network. May have some of the control features of mainframe computers but run slower and cost less.
4. Personal computers (PCs) or workstations—Designed for individual users; typically include word processing and spreadsheet software and network connectivity. Sometimes also called "fat clients."
5. Thin client computers—Computers with minimal capabilities (e.g., slow processing speed, small amounts of storage) that are used to access resources on a network system.
6. Laptop computers—Light, portable computers that carry a greater risk of theft than a PC or workstation.
7. Tablet computers—Lighter and less powerful than laptops but bigger and more powerful than mobile devices.
8. Mobile computing devices—Include smartphones, handheld devices, and, increasingly, wearable computing technologies (e.g., Fitbit). Ubiquitous computing devices that, like thin client computers, depend on network connections for their value and power.
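The RAID entry above relies on storing redundant data to survive a disk failure. A minimal sketch of the idea, simplified from parity-based levels such as RAID 5: XOR-ing the data blocks produces a parity block, and XOR-ing the survivors with the parity reconstructs a lost block. The "disks" here are just short byte strings for illustration.

```python
def parity(blocks):
    """XOR the given equal-length blocks together byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

disk1 = b"ACCT"
disk2 = b"DATA"
parity_disk = parity([disk1, disk2])  # stored on a third disk

# Suppose disk2 fails: XOR the surviving disk with parity to rebuild it.
recovered = parity([disk1, parity_disk])
print(recovered == disk2)  # True
```

Real RAID arrays stripe data and rotate parity across disks, but the recovery math is this same XOR property.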

E-Commerce Applications

A. Customer Relationship Management (CRM)—Technologies used to manage relationships with clients. Biographic and transaction information about existing and potential customers is collected and stored in a database; the CRM provides tools to analyze the information and develop personalized marketing plans for individual customers.
1. Primary objective of a CRM system—To retain current customers and gain new customers.
2. A CRM system is essentially a database of all data related to current and potential customers. Used properly, with the right processes, a CRM system can further sales, marketing, and customer service interactions. Common features of CRM software include:
a. Sales force automation—Tracking contacts and follow-ups with customers or potential customers automatically, to eliminate duplicate efforts.
b. Marketing automation—Triggering marketing efforts when, for example, new contacts or prospects are entered into the database, such as sending promotional material via email. Also facilitates marketing campaigns targeted to specific customer interests (e.g., Kroger promoting grocery products only to interested customers).
c. Customer service automation—Handling common customer interactions in an automated manner. For example, Internet service providers often have automated, prerecorded troubleshooting for common Internet or modem issues.
d. In addition to automated features, a CRM system provides additional value through the rich data that can be assessed, such as sales history and projections, marketing campaign success, trends, and performance indicators.
B. Electronic Data Interchange (EDI)—EDI is the computer-to-computer exchange of business data (e.g., purchase orders, confirmations, invoices, etc.) in structured formats that allow direct processing of the data by the receiving system. EDI reduces handling costs and speeds transaction processing compared to traditional paper-based processing.
1. EDI costs include the following:
a. Costs of change—Costs associated with locating new business partners who support EDI processing; legal costs associated with modifying and negotiating trading contracts with new and existing business partners and with the communications provider; and costs of changing internal policies and procedures to support the new processing model (process reengineering) and of employee training.
b. Hardware costs—Additional hardware, such as communications equipment and improved servers, is often required.
c. Costs of translation software
d. Costs of data transmission
e. Costs of security, audit, and control procedures
C. Electronic Funds Transfer (EFT)—A technology for transferring money from one bank account directly to another without the use of paper money or checks. EFT substantially reduces the time and expense required to process checks and credit transactions.
1. Typical examples of EFT services include the following:
a. Retail payments—Such as credit cards, often initiated from POS terminals
b. Direct deposit—Of payroll payments directly into the employee's bank account
c. Automated teller machine (ATM) transactions
d. Nonconsumer check collection—Through the Federal Reserve wire transfer system
2. EFT service—Typically provided by a third-party vendor that acts as the intermediary between the company and the banking system. Transactions are processed by the bank through the Automated Clearing House (ACH) network, the secure transfer system that connects all U.S. financial institutions.
3. EFT security—Provided through various types of data encryption as transaction information is transferred from the client to the payment server, from the merchant to the payment server, and between the client and merchant.
4. Token-based payment systems—Such as electronic cash, smart cards (cash cards), and online payment systems (e.g., PayPal). These behave similarly to EFT but are governed by a different set of rules. Token-based payment systems can offer anonymity, since the cards do not have to be directly connected to a named user.
5. Electronic wallets—These are not payment systems but simply programs that allow users to manage their existing credit cards, usernames, passwords, and address information in an easy-to-use, centralized location.
D. Supply Chain Management (SCM)—The process of planning, implementing, and controlling the operations of the supply chain: transforming raw materials into a finished product and delivering that product to the consumer. Supply chain management incorporates all activities from the purchase and storage of raw materials, through the production process, into finished goods, through to the point of consumption.
E. Value-Added Network (VAN)—A hosted service offering that acts as an intermediary between business partners sharing standards-based or proprietary data via shared business processes. The offered service is referred to as "value-added network services."
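EDI's key idea, structured formats that the receiving system can process directly, can be illustrated with a toy segment. The layout below (asterisk-delimited elements, tilde terminator) is loosely inspired by EDI conventions but is invented for this example; real standards such as ANSI X12 define their own segments and separators.

```python
def build_po_segment(po_number, vendor_id, qty, sku):
    """Serialize a purchase order as a delimited, machine-readable segment."""
    return f"PO*{po_number}*{vendor_id}*{qty}*{sku}~"

def parse_po_segment(segment):
    """Parse the segment directly into fields: no manual re-keying needed."""
    parts = segment.rstrip("~").split("*")
    return {"type": parts[0], "po_number": parts[1],
            "vendor_id": parts[2], "qty": int(parts[3]), "sku": parts[4]}

msg = build_po_segment("10045", "V-778", 250, "WIDGET-9")
print(msg)  # PO*10045*V-778*250*WIDGET-9~

order = parse_po_segment(msg)
print(order["qty"])  # 250
```

Because both trading partners agree on the structure in advance, the receiving system can post the order straight into its processing pipeline, which is the source of EDI's cost and speed advantages over paper.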

Logical Access Controls

A. Logical Access Control Software—Most large organizations use logical access control software to manage access control. Functions of such systems include managing user profiles, assigning identifications and authentication procedures, logging system and user activities, establishing tables that set access privileges to match users to system resources and applications, and reporting on logs and events. Hence, many of the functions and procedures described in this lesson are managed through logical access control software.
B. Common Levels of Logical Access Control
1. Read or copy
2. Write, create, update, or delete
3. Execute (run) commands or procedures
4. A combination of the above privileges
User Authentication—The first step in controlling logical access to data is to establish user identification. This is an area of growth and change in IT (see the "Emerging Technologies in AIS" lesson). It is normally accomplished by creating a username for each authorized user and associating the username with a unique identifier.
A. Security tokens—Include devices that provide "one-time" passwords that must be input by the user, as well as "smart cards" that contain additional user identification information and must be read by an input device.
B. Smart cards and identification badges—Have identification information embedded in a magnetic strip on the card and require the use of additional hardware (a card reader) to read the data into the system. Depending on the system, the user may only need to swipe the card to log onto the system or may need to enter additional information.
C. Biometric controls
a. A physical or behavioral characteristic permits access.
b. Possible physical biometric identifiers include palms, hand geometry, iris or retina scans, fingerprints, and facial recognition.
c. Behavior-based biometric identifiers include signature and voice recognition.
d. Advantages—Costs for many systems have dropped dramatically, and many recognition devices are small.
e. Disadvantages—Lack of uniqueness; unreliability.
Authorization Controls—The authorization matrix or access control list (ACL) specifies each user's access rights to programs, data entry screens, and reports. The authorization matrix contains a row for each user and a column for each resource available on the system.
Firewalls—The purpose of a firewall is to allow legitimate users to use system resources while blocking hackers and others from accessing them. A firewall consists of hardware, software, or both, and helps detect security problems and enforce security policies on a networked system. A firewall is like a locked door for a computer system: it inspects, and when necessary filters, data flows. There are multiple types, and levels, of firewalls:
A. Network firewalls
1. Filter data packets based on header information (source and destination IP addresses and communication port)
2. Block noncompliant transmissions based on rules in an access control list
3. Very fast (examine headers only)
4. Forward approved packets to the application firewall
B. Application firewalls
1. Inspect data packet contents
2. Can perform deep packet inspection (detailed packet examination)
3. Control file and data availability to specific applications
C. Personal firewalls—Enable end users to block unwanted network traffic
Additional Logical Access Control Considerations
A. Remote Access Security—With the spread of ubiquitous computing, managing remote access is a critical security consideration in organizations. See the "Mobile Device, End-User, and Small Business Computing" lesson for discussion of policies and procedures related to remote access security.
B. Intrusion Detection Systems (IDSs)—Monitor network systems for unusual traffic.
C. Honeypots and Honeynets—Servers that lure hackers to a decoy system. Their purpose is to identify intruders and provide information to help block them from live systems.
D. IDSs, honeypots, and honeynets complement, and are usually integrated with, firewalls.
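The authorization matrix described above, one row per user and one column per resource, maps naturally onto a nested table. A minimal sketch follows; the users, resources, and access rights are hypothetical, chosen only to show how a lookup against the matrix works.

```python
# Rows = users, columns = resources, cells = the set of granted rights.
ACL = {
    "clerk":      {"sales_entry": {"read", "write"}, "gl_reports": set()},
    "controller": {"sales_entry": {"read"}, "gl_reports": {"read", "execute"}},
}

def has_access(user, resource, right):
    """Grant access only if the matrix explicitly lists the right.
    Unknown users or resources default to no access (fail closed)."""
    return right in ACL.get(user, {}).get(resource, set())

print(has_access("clerk", "sales_entry", "write"))  # True
print(has_access("clerk", "gl_reports", "read"))    # False
print(has_access("intruder", "sales_entry", "read"))  # False
```

The fail-closed default is the important design choice: anything not explicitly granted is denied, which matches how access control lists are meant to work.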

Production Cycle

Core Activities
A. Forecast Sales/Plan Production/Product Design/Authorize Production
1. Manufacturing resource planning (MRP) and just-in-time (JIT) manufacturing systems help forecast materials needs and plan production.
2. The master production schedule (MPS) specifies how much of each product to produce and when to produce it.
3. A production order authorizes manufacturing.
B. Move Raw Materials into Production
1. A materials requisition authorizes moving materials from the storeroom to production.
C. Production Process
1. Receive raw materials into production.
2. Production processes vary greatly by product and level of technology:
a. Computer-integrated manufacturing (CIM)—The use of IT to fully or partially automate production can dramatically change the production process.
b. Many production processes are now partially or fully integrated with robotics (i.e., the use of robots to execute production tasks).
3. Manage cost accounting processes.
a. See the "Manufacturing Costs" lesson to review cost accounting for manufacturing costs.
D. Complete Production Process and Deliver Goods
Inherent Risks
A. Inventory Manipulations
1. Many, many frauds include inflated or falsified inventory values. For example, the Phar-Mor Drugs fraud was built on overstating inventory values and generating falsified profits as a result. Phar-Mor sustained the fraud because auditors told them, in advance, at which stores they would inspect inventory. Phar-Mor then made sure that the audited stores (and only the audited stores) had sufficient inventory.
B. Inventory Mark-Downs
1. Has obsolete inventory been written down to its true (reduced) value? Do our monitoring processes include periodic reviews to assess the value of inventory?
Relevant Stakeholders
A. Production Workers—Make the stuff that we sell.
1. Please see the "HR and Payroll Cycle" lesson.
B. Raw Materials Suppliers
1. As in the expenditure cycle, establishing that suppliers exist and that their bills are legitimate and accurate are important concerns.
Important Forms (in Electronic Systems), Documents (in Paper Systems), and Files
A. See the table at the end of this lesson.
Accounting and Business Risks and Controls—Inventory
A. Manage inventory levels to avoid stock-outs (lost sales) and excess inventory (carrying costs). Controls include:
1. Increasing use of just-in-time methods (see the "Inventory Management" lesson)
2. Accurate inventory control and sales forecasting systems; online, real-time inventory systems; periodic physical counts of inventory; and regular review of sales forecasts to make adjustments
3. Use of a perpetual inventory system; use of techniques such as just-in-time, economic order quantity, and reorder points as methods of managing inventory, with heavy reliance on technology to determine when and how much to order
B. Inventory Theft or Destruction—Controls include:
1. Physically securing inventory and restricting access; may include security staff and electronic surveillance
2. Documentation of inventory transfers
3. Releasing inventory for shipping only with approved sales orders
4. Accountability—Employees handling inventory should sign the documents or enter their codes online to ensure accountability.
5. Wireless communication and RFID tags provide real-time tracking.
6. Periodic counts of inventory, increasingly by RFID tags (a near-continuous control)
7. Insurance (self-insurance or third-party)
Accounting and Business Risks and Controls—Fixed Assets
A. Major asset acquisitions are approved, as appropriate, and supplied by approved vendors.
B. Fixed assets exist, are properly valued, and haven't been stolen.
C. Written policies for capitalization versus expensing decisions exist and are followed and monitored.
D. Under- or Over-Investment in Fixed Assets
E. Fixed Asset Theft or Destruction
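The economic order quantity and reorder point techniques mentioned above can be computed directly. The formulas are the standard ones (EOQ = sqrt(2DS/H); reorder point = daily demand × lead time + safety stock); the demand and cost figures below are illustrative.

```python
import math

def eoq(annual_demand, order_cost, carrying_cost_per_unit):
    """Economic order quantity: the order size that minimizes the sum
    of annual ordering and carrying costs, sqrt(2DS / H)."""
    return math.sqrt(2 * annual_demand * order_cost / carrying_cost_per_unit)

def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    """Inventory level at which a new order should be placed."""
    return daily_demand * lead_time_days + safety_stock

# Illustrative figures: 10,000 units/year, $50 per order, $2/unit carrying cost.
q = eoq(annual_demand=10_000, order_cost=50, carrying_cost_per_unit=2)
print(round(q))  # 707 units per order

# 40 units/day demand, 5-day lead time, 60 units of safety stock.
print(reorder_point(40, 5, safety_stock=60))  # 260 units
```

In a perpetual inventory system these values would be recomputed as forecasts change, which is the "heavy reliance on technology" the notes describe.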

Revenue Cycle

Core Activities of the Revenue Cycle A. Sales 1. Receive customer orders (see the "Emerging Technologies in AIS" lesson related to emerging payment systems). 2. Approve customer credit/sales authorization. B. Physical (or Virtual) Custody of Products or Services 1. Fill the order and prepare for shipping (if physical merchandise). 2. Ship or deliver the product. C. Accounts Receivable 1. Bill (if needed). 2. Manage receivables (e.g., returns and allowances, determine collectibility of accounts). D. Cash 1. Collection and receipt of payments. 2. Reconciliations. Inherent Risks of the Revenue Cycle A. Revenue Recognition—Is that revenue real? 1. To be recognized in the financial statements, revenue must be realized or realizable and earned. 2. Many revenue cycle frauds concern booking faked revenue or booking revenue prematurely. In a typical fraud, a company will prematurely record revenue (e.g., the MiniScribe Corp. fraud). 3. Another revenue cycle fraud is the creation of fictitious customers who "buy" things (hence the goods are "delivered") but never pay for them. B. Are Those Receivables Collectible? 1. Estimating the collectibility of receivables is subjective. Where there is subjectivity, management may be inclined to err on the side that is in its self-interest. 2. An important revenue cycle fraud is "sales" to fictitious customers where the receivables are then written off. C. Customer Returns and Allowances 1. Estimating returns and allowances, particularly for new products or technologies, requires that management make subjective estimates. Relevant Stakeholders A. Customers—Who, obviously, buy products or services 1. Establishing that customers exist and are granted credit appropriately are important controls in the revenue cycle. B. Outbound Logistics Providers—Ship goods to customers. Important Revenue Cycle Forms (in Electronic Systems), Documents (in Paper Systems), and Files A. See the table at the end of this lesson. B. Vocabulary Word—Lading 1. 
Archaic noun—The action or process of loading a ship or other vessel with cargo 2. Used in a sentence—"After much sweat and toil, and Bill dropping a crate on his foot, the lading of the cargo ship was complete, and it set sail for New Jersey." 3. Why you should learn this word a. Imagine sweaty, heavyset men loading a cargo of freshly cut green bananas onto a ship in St. Lucia, in the Caribbean, among swaying palm trees. Picture this scene, and you'll also remember that the foreman of these men must have a bill of lading (see the table at the end of this lesson for the definition). Cash and Cash Receipt Issues—Theft of Cash. This is not good. Controls include: 1. Segregation of duties a. For example, two people opening emails/mail together b. Remittance data is sent to the accounts receivable clerk while cash and checks are sent to the cashier. c. Prompt documentation and restrictive endorsement of remittances d. Reconcile total credits to accounts receivable with total debits to cash. e. Send a copy of the remittance list to an independent party (internal auditing) to compare with validated deposit slips and bank statements. f. Send monthly statements to customers. g. Cash registers automatically record cash received. h. Make a daily bank deposit of all remittances. i. Perform independent bank reconciliations. A picking ticket identifies the items to be pulled for a sales order.
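The reconciliation controls listed above (comparing the remittance list, the accounts receivable postings, and the validated bank deposit) can be sketched as a simple three-way check. The record layout, customer names, and amounts below are hypothetical.

```python
# Three independently prepared totals that should agree:
# the mailroom's remittance list, the AR clerk's credits to customer
# accounts, and the cashier's validated bank deposit.
remittance_list = [("Acme Co.", 1200.00), ("Braun LLC", 850.50), ("Cole Inc.", 300.00)]
ar_credits = {"Acme Co.": 1200.00, "Braun LLC": 850.50, "Cole Inc.": 300.00}
bank_deposit_total = 2350.50

remittance_total = sum(amount for _, amount in remittance_list)
ar_total = sum(ar_credits.values())

# A difference among the three totals flags an error or possible theft
# (e.g., lapping), because each total is prepared by a different employee.
assert remittance_total == ar_total == bank_deposit_total, "Totals do not reconcile"
print(f"Reconciled: ${remittance_total:,.2f}")
```

The control value comes from segregation of duties: no single employee prepares more than one of the three totals, so concealing a theft requires collusion.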

Encryption and Secure Exchanges

Privacy and Security Issues in Networked Systems A. Encryption—This is the process of converting a plaintext message into a secure-coded form (ciphertext). 1. Decryption reverses encryption (to read a message). Encryption technology uses a mathematical algorithm to translate cleartext (plaintext)—text that can be read and understood—into ciphertext (text that has been mathematically scrambled so that its meaning cannot be determined without decryption). 2. Encryption can provide privacy (protection of data against unauthorized access) and authentication (user identification). It can protect stored (i.e., data at rest) or transmitted (i.e., data in motion) data and verify data authenticity. 3. Encryption is an essential but imperfect control. The security of encryption methods rests on key length and secrecy. Generally, key security declines with use. B. "Key" Elements of Encryption 1. The encryption algorithm is the function or formula that encrypts and decrypts (by reversal) the data. 2. The encryption key is the parameter or input into the encryption algorithm that makes the encryption unique. The reader must have the key to decrypt the ciphertext. 3. Key length is a determinant of strength. Longer keys are harder to crack. C. Symmetric Encryption 1. Fast, simple, and easy, but less secure than asymmetric encryption 2. More often used for data stores (i.e., data at rest), since only one party then needs the key 3. Also called single-key encryption; symmetric encryption uses a single key (with a single algorithm) to both encrypt and decrypt. 4. The sender uses the key and algorithm to create the ciphertext and sends the encrypted text to the recipient. 5. The sender must securely share the key with the recipient. 6. The recipient uses the same key to reverse the algorithm and decrypt. D. Asymmetric Encryption 1. Safer but more complicated than symmetric encryption 2. More often used with data in motion 3. Also called public/private-key encryption 4. 
Uses a mathematically related pair of keys (one public, one private) to encrypt and decrypt 5. If the public key is used to encrypt, the private key must be used to decrypt; conversely, if the private key is used to encrypt, the public key must be used to decrypt. 6. To acquire a public/private key pair, the user applies to a certificate authority (CA): a. The CA registers the public key on its server and sends the private key to the user. b. When someone wants to communicate securely with the user, he or she accesses the public key from the CA server, encrypts the message, and sends it to the user. c. The user then uses the private key to decrypt the message. d. The transmission is secure because only the private key can decrypt the message and only the user has access to the private key. E. Quantum Encryption—Quantum mechanics from physics is emerging as a technology that may revolutionize computing encryption. It uses the physical properties of light (photons) to generate seemingly uncrackable codes. III. Facilitating Secure Exchanges—E-commerce should occur only with high certainty regarding the identity of the trading partners and the reliability of the transaction data. Electronic identification methodologies and secure transmission technology are designed to provide such an environment. A. Digital Signatures 1. An electronic means of identifying a person or entity 2. Uses public/private key pair technology to provide authentication of the sender and verification of the content of the message. 3. The authentication process is based on the private key. 4. Vulnerable to man-in-the-middle attacks in which the sender's private and public keys are faked. A digital signature is like the sealing of an envelope with the King's personal wax seal in the days of Kings. A thief may steal, or a forger may duplicate, the King's seal. 
Therefore, the message from the King, or the e-mail—in the case of digital signatures—may not be from the person who the receiver thinks it is from, because a thief stole the seal (or the private key). B. Digital Certificates 1. For transactions requiring a high degree of assurance, a digital certificate provides legally recognized electronic identification of the sender and verifies the integrity of the message content. a. Based on a public key infrastructure (PKI), which specifies protocols for managing and distributing cryptographic keys b. In this system, a user requests a certificate from the certificate authority. The certificate authority then completes a background check to verify identity before issuing the certificate. c. More secure than digital signatures 2. A certificate authority or certification authority (CA) manages and issues digital certificates and public keys. The digital certificate certifies the ownership of a public key by the named subject (user) of the certificate. This allows others (relying parties) to rely upon signatures or assertions made by the private key that corresponds to the certified public key. C. Secure Internet Transmission Protocols 1. Sensitive data sent via the Internet is usually secured by one of two encryption protocols: a. Secure Sockets Layer (SSL)—and its successor, Transport Layer Security (TLS)—uses a combination of encryption schemes based on a PKI. b. Secure Hypertext Transfer Protocol (S-HTTP) directs messages to secure ports using SSL-like cryptography. 2. Secure Electronic Transactions (SET) a. Developed by VISA and MasterCard. A protocol often used for consumer purchases made via the Internet. Uses multiple encryption schemes based on a PKI. b. Used by the merchant (i.e., the intermediary between the bank and customer) to securely transmit payment information and authenticate trading partner identity. 3. 
Virtual Private Network (VPN)—A secure way to connect to a private local area network (LAN) from a remote location, usually through an Internet connection. Uses authentication to identify users and encryption to prevent unauthorized users from intercepting data. Should be part of an organization's remote access security plan. When a digital certificate is requested, an independent background check is completed to confirm the identity of the requesting entity. Thus, a digital certificate provides a higher level of reliability than a digital signature.
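The symmetric (single-key) scheme described above can be illustrated with a toy XOR cipher. This is an illustration of the single-key property only; it is not secure and is no substitute for a real algorithm such as AES. The key and message are hypothetical.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the key (repeated as needed).
    Applying the same key twice returns the original bytes, so one key both
    encrypts and decrypts -- the defining property of symmetric encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"  # both parties must already share this key securely
plaintext = b"Wire $500 to account 1234"
ciphertext = xor_cipher(plaintext, key)   # sender encrypts
recovered = xor_cipher(ciphertext, key)   # recipient decrypts with the same key

assert recovered == plaintext
print(ciphertext.hex())
```

Note what the toy makes concrete: the hard problem in symmetric encryption is not the math but the key exchange (step 5 above), which is exactly the problem that asymmetric encryption and certificate authorities address.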

Physical Access Controls

IT facility controls include computer hardware (CPUs, disk and tape drives, printers, communications devices, etc.), software (program files), and data files, as well as the computing infrastructure (network communication media and devices) and the rooms and buildings in which the software, hardware, and files reside. Control over IT facilities involves safeguarding equipment, programs, and data from physical damage. A. Possible threats to physical security include fired or disgruntled employees, employees or individuals with substance addictions (e.g., seeking assets to sell in order to buy drugs), and employees or individuals with severe financial or emotional problems. Remember the Fraud Triangle indicators of risk that are discussed in the Auditing and Attestation section of CPAExcel®. Physical Location Controls—Computer operations should be located in facilities that are safe from fire, flood, climate variations (heat, humidity), and unauthorized access. Generally, computer operations are located away from windows (in a building's interior), not on a top floor or in a basement, and not on a first floor. A. The computer room should be climate controlled to control and monitor heat and humidity, which can damage equipment and data files. Generally, there should be no opening windows in IT facilities. In addition, the heat generated by large banks of servers must be considered and controlled; it is sometimes redirected to heat buildings. B. The system for detecting environmental threats should include alarm control panels, water and smoke detectors, and manual fire alarms. C. Fire suppression systems appropriate for electrical fires should be installed. Handheld fire extinguishers should also be available. Although they were once popular, Halon systems are now banned due to environmental concerns. D. Adequacy of power and backup power systems should be periodically evaluated. 
Potential electrical system risks include failure (blackout), reduced voltage (brownout), sags, spikes and surges, and electromagnetic interference (EMI). E. All physical protection and recovery systems require maintenance and monitoring. Regular inspections by fire departments are both a best practice and, in many cases, required by local, regional, or state laws. III. Physical Access A. Access to the Computer Operations Areas 1. Should be restricted to those directly involved in operations. In larger organizations, authorized individuals should be issued, and required to wear, identification badges. 2. Physical security should include mechanical or electronic door locks, keypad devices, access card readers, security personnel, and surveillance devices. 3. Social engineering—Methods used by attackers to fool employees into giving the attackers access to information resources. One form of social engineering concerns physical system access: a. In piggybacking, an unauthorized user slips into a restricted area by following an authorized user through the entry. B. Data Stored on Magnetic Disks, Tapes, and USB Drives—Should be protected by a combination of logical and physical controls, including: 1. External labels—Used by computer operators, or by smart end users, to visually (physically) identify the disks or USB drives. 2. Internal labels (sometimes called "header" and "trailer" records, in reference to their origins in magnetic tape)—Read by the processing program to determine the identity of the data file. 3. File protection rings or locks—Physically prevent the media from being overwritten. 4. Setting file attributes—Logically restricts the ability of the user to read, write, update, and/or delete records in a file. C. Programs and Data Files—Should be physically secured and under the control of a file librarian in order to protect the programs and data from unauthorized modification. 
See also the "Program Library, Documentation, and Record Management" lesson. IT facility controls are general controls. That is, they are controls over the IT department as a whole. For example, restricting access to the IT department prevents unauthorized individuals from gaining physical access to the system.
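The "setting file attributes" control above can be sketched on a POSIX system with standard-library calls. This is a minimal illustration; the file itself is a temporary scratch file created just for the demonstration.

```python
import os
import stat
import tempfile

# Create a scratch data file, then restrict it to read-only for its owner --
# a logical control that prevents users from updating or deleting its records.
fd, path = tempfile.mkstemp(suffix=".dat")
os.close(fd)

os.chmod(path, stat.S_IRUSR)  # owner may read; no write or execute for anyone

mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == stat.S_IRUSR, f"unexpected mode: {oct(mode)}"
print(f"{path} locked to read-only (mode {oct(mode)})")

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # restore write access so cleanup works
os.remove(path)
```

In practice, such attribute settings are one layer of defense and are paired with the physical controls above (locked media libraries, librarian custody) because an attacker with physical possession of the media can bypass logical attributes.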

Big Data

What Is Big Data? A. The creation, analysis, storage, and dissemination of extremely large data sets. Such data sets have recently become feasible due to advances in computer storage technologies (e.g., the cloud), advanced data analytics, and massive computing power. B. Gartner (an IT research and consulting firm) definition: "high volume, velocity, and/or variety information assets that demand new, innovative forms of processing for enhanced decision making, business insights or process optimization." 1. Ubiquitous computing (i.e., smartphones and wearables, e.g., the Fitbit), the Internet of Things, and advances in biometrics (i.e., automated human recognition) are all sources of big data. C. Governance—Organizations must establish a clear governance structure for big data projects. 1. They must consider the responsibilities, scope, and limits of their big data projects. 2. Big data projects require a clear purpose, scope, and plan. 3. Organizations must consider the qualitative characteristics of information in formulating big data plans; see the information criteria section of the "The COBIT Model of IT Governance and Management" lesson for a listing of these characteristics. F. Big Data Sources 1. Organizational "dark data"—Data collected from business activities that may be reused in analytics or business relationships, or directly monetized (sold). Part of the reason dark data may go unused is that it lacks metadata (i.e., "data about data") explaining what the dark data is. For example, many companies generate lots of (unused) data about their networks (e.g., who is using the networks, for how long, and for what reason), but many companies fail to use this data to understand how to better serve their customers and employees. 
When one identifies the data as "data about customer needs and uses" instead of "automated data generated by our network systems," this "dark" data is more likely to be reused. 2. Operational and social media data II. Value and Risks of Big Data A. Value—Uses include marketing and sales, operational and financial performance, risk and compliance management, product and service innovation, improving customer experiences and loyalty, and data monetization (sales). B. Risks 1. Privacy 2. Legal issues and prohibited uses (e.g., of medical data under HIPAA) 3. Technology and structure—Where and how will we store and protect it? C. Example Projects 1. Xerox used big data to reduce attrition at call centers by about 20%. 2. The Weather Company and IBM partnered to better manage weather forecasts to improve business performance. The project integrates data from over 100,000 sensors and aircraft and millions of smartphones, buildings, and moving vehicles. 3. Kroger integrates customer data to enable it to improve its product offerings and target offerings to specific sets of customers. As a result, Kroger's loyalty card program is the #1-rated program in the industry.

Simple Linear Regressions in Business Analytics

What is regression? i. The word "regression" is not descriptive and results from a historical misnaming of this procedure; a better description of regression is "fitting a line through data points." ii. Types of regression 1. A time-series regression predicts outcomes that occur over time (e.g., monthly, quarterly, or yearly); for example, predicting monthly sales (y) based on advertising expenditures for the previous month (x). 2. A cross-sectional regression predicts outcomes at one point in time; for example, predicting monthly sales for a retail chain (y) based on the stores' square footage (x). iii. Numbers predicting numbers: Regression uses numeric predictors to predict numeric outcomes. 1. There are variations on regression for categorical predictor variables (e.g., which client purchased the most products?) and categorical outcomes (e.g., is the account receivable balance late or not?). You don't need to know these for the CPA exam. iv. Correlation and causality—Regression can be used to establish that a relationship exists between a predictor variable (x) and an outcome variable (y). 1. The existence of a relationship between x and y doesn't indicate that x causes y. 2. For example, a regression of the Internet Explorer browser market share (x) and murders in the United States (y) (2006-2011 data) indicates a strong relationship (source: https://www.buzzfeednews.com/article/kjh2110/the-10-most-bizarre-correlations). Obviously, it would be ridiculous to claim that changes in Internet Explorer's market share caused changes in the U.S. murder rate. Advantages—Regression is a formal, rigorous statistical method. It provides precise, quantitative predictions. Disadvantages—Regression requires data, which may not exist. In addition, regression describes the strength of a relationship between two or more variables but doesn't provide insight into the causal relationship of those variables. Review of Basic Regression Equation i. 
The basic regression equation is y = A + Bx, where y = the dependent (or outcome) variable, A = the y-intercept (i.e., where the regression line crosses the y-axis), B = the slope of the line (i.e., the amount of change in y for every unit change in x), and x = the predictor variable. ii. Is the regression model a good fit? 1. The coefficient of determination—that is, r2 (pronounced "R squared")—is the degree to which the independent variable predicts the dependent variable. a. The coefficient of determination is calculated by squaring the correlation coefficient, where 0 ≤ r2 ≤ 1. 2. As r2 approaches 1, x perfectly predicts y. As r2 approaches 0, there is no relationship between x and y. Although it depends on the context, an r2 greater than 0.1 is often considered significant, while an r2 greater than 0.5 is often considered large. 3. p values also indicate the strength of a regression relationship. When the p value is small (e.g., < 0.05), the relationship between x and y is significant. When the p value is large (e.g., > 0.1), the relationship between x and y is not significant. 4. t values are closely related to p values but move in the opposite direction. Hence, a large t value (and a small p value) indicates a relationship between x and y, while a small t value (and a large p value) indicates a weak or no relationship between x and y. 1. Regression—A statistical method for predicting a numeric outcome (y) using a numeric predictor (x). 2. Time-series regression—A regression that predicts a variable over time (e.g., monthly, quarterly, or yearly). 3. Cross-sectional regression—A regression that predicts a variable at one point in time, e.g., sales (y) for a company for one year based on store square footage (x). 4. r2—Coefficient of determination and the square of the correlation coefficient. A measure of how well x predicts y in a regression. 1 = a perfect relationship; 0 = no relationship. 5. p—A measure of the statistical significance of a regression. 
p < .05 indicates a significant relationship; p > .1 indicates little or no relationship. Example (interpreting a plotted regression line, not shown here): if the regression line crosses the y-axis at 0, then the value of A (the intercept) is 0; a positive value of B (i.e., the slope) means the line is increasing; and the p value will be small when the line fits the observations well.
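The regression equation and r2 above can be computed directly from the least-squares formulas. This is a minimal sketch; the advertising and sales figures are hypothetical.

```python
from statistics import mean

def simple_linear_regression(x, y):
    """Least-squares fit of y = A + B*x; returns (A, B, r_squared)."""
    x_bar, y_bar = mean(x), mean(y)
    sxx = sum((xi - x_bar) ** 2 for xi in x)
    sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
    syy = sum((yi - y_bar) ** 2 for yi in y)
    B = sxy / sxx                       # slope: change in y per unit change in x
    A = y_bar - B * x_bar               # y-intercept: where the line crosses the y-axis
    r_squared = sxy ** 2 / (sxx * syy)  # coefficient of determination (correlation squared)
    return A, B, r_squared

# Hypothetical data: monthly sales (y, $000) vs. prior-month advertising (x, $000)
x = [1, 2, 3, 4, 5]
y = [52, 55, 59, 60, 64]
A, B, r2 = simple_linear_regression(x, y)
print(f"y = {A:.1f} + {B:.1f}x, r2 = {r2:.3f}")  # -> y = 49.3 + 2.9x, r2 = 0.978
```

An r2 near 1, as here, means x explains almost all of the variation in y; as the lesson notes, that still says nothing about whether advertising causes sales.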

Data Analytics

I. What Is Business Analytics? What Is Business Intelligence? A. Business analytics is "the science and art of discovering and analyzing patterns, identifying anomalies, and extracting other useful information in data" for application to a business issue or problem. (AICPA 2018) 1. Business analytics relies on advanced statistical and mathematical tools and models, including visualization, regression, cluster analysis, network analysis, and machine learning techniques. 2. Examples of business analytics include audit data analytics (ADA), risk analysis and fraud detection, and the analysis of key performance indicators (KPIs). 3. Business analytics extends traditional analytical procedures to analyze new (often much larger) data sets and to apply advanced technologies and models to accounting and auditing problems. B. Business Intelligence 1. According to Gartner (2018), an IT consulting company, "Business Intelligence (BI) ... includes the applications, infrastructure and tools, and best practices, that enable access to and analysis of information to improve and optimize decisions and performance." 2. The term "business intelligence" is perhaps most closely linked to a specific software product, Microsoft Power BI, which enables users to structure data for the creation of dashboards, worksheets, and stories. Tableau is a very popular competing product. Both are excellent products. II. Business Analytics Skills, Processes, and Strategies A. Valued Data Analytics Skills and Tools—These include: 1. An "analytics" mind-set—that is, the ability to think critically and use information to exercise informed professional judgments. a. Ask good questions, transform data, apply analytic methods, and communicate with stakeholders. 2. Principles of data cleaning, structuring, and information display. 3. Knowledge of data visualization and business analytics software. 4. 
Ability to use legacy (i.e., mature and declining-use) tools, such as Excel and Access, to transition data for analysis using BI tools. B. Data Preparation and Cleaning—The ETL Process 1. Preparing data for analysis is often summarized as the extract, transform, and load (ETL) process. This consists of: a. Extract—Get the data from a source. This could be simple, such as opening an Excel file, or complicated, such as writing Python code to "scrape" (pull) data from a website. b. Transform—Apply rules, functions (e.g., sort), and cleansing operations to a data set. For example, this might include removing duplicate records and fixing errors in names and addresses in a pension system database. Excellent software exists for such tasks (e.g., Alteryx). c. Load—Move the data to the target system. This can be as simple as uploading a flat file or as complicated as writing code to upload an exabyte-scale (i.e., extremely large) data set to Hadoop (a software platform for extremely large data sets and analyses). 2. Increasingly, the ETL process is automated. Automation often increases the accuracy of the process and the data. With many big data streams, automated ETL is the only feasible solution. C. Data Reliability 1. Business analytics must be based on reliable (i.e., accurate, true, and fair) data. Influences on data reliability include the nature and source of the data and the process by which it is produced: a. Nature of the data—Quantitative data are likely to be more precise but less descriptive than data that are stated in words. b. Source of the data—General ledger data from the client's ERP system is likely to be more reliable than data on insider trading of stock futures. c. Process used to produce the data—Data from an audited accounting system is likely to be more reliable than data from an unaudited accounting system. Four Categories of Business Analytics A. 
Business analytics can be categorized by four questions: what is happening, why is it happening, what is likely to happen, and how should we act in response? 1. What is happening? This is called descriptive analytics. 2. Why did it happen? This is called diagnostic analytics. 3. What is likely to happen? This is called predictive analytics. 4. How should we act? This is called prescriptive analytics.

Data Visualization

What Is Data Visualization? A. Visualization is the art and science of designing information for visual perception (e.g., in graphs, charts, and icons). B. Human perception favors sight: we perceive visual information better than sounds (on which bats and dolphins rely) or smells (on which dogs rely). In addition, well-designed visual displays communicate numbers better than tables of numbers or words describing the numbers. Basic Principles of Data Visualization A. Color is an essential element of designing visualizations. 1. The color wheel (see Exhibit 4) offers a way to combine colors to create effects and emotions. For example, one may choose colors that are opposite one another, in a triad in the wheel, or in opposite quadrants of the wheel. 2. Color psychology is important to consider. Darker colors convey seriousness, while lighter colors convey friendliness. 3. Use of a contrasting color can highlight something (e.g., a single data value). 4. Alternatively, use of shades (i.e., gradients) of a color (e.g., colors from within the blue segment of the wheel) can highlight differences without offering a strong contrast. 5. Generally, one should use four or fewer colors in visualizations. Too many colors result in a loss of meaning. Basic Visual Analytics—Which graph to use when? 1. Bar charts compare data across categories. 2. Line charts show changes in a variable, often over time. 3. Stacked bar charts can illustrate parts of a whole and changes over time. 4. Pie charts, used alone, are generally a poor choice for an information display. 5. Scatter plots show the relationship between two variables. 6. Dashboards—A dashboard is a collection of views of data that allows a user to understand, and often monitor, a process. a. 
Typical business dashboards are designed to monitor sales and revenue, accounts receivable balances, cash, human resource productivity and absenteeism, and loan and interest rates. Example: San Combe Products Case (Dan Stone)—Company and Research Question. San Combe Products is a global producer and distributor of electronic components, software, and online products in Europe and South America. San Combe is reviewing the relationship between sales and profits as part of an internal company analysis. The purpose of this analysis is to determine whether (1) there are unexpected changes in sales or profits companywide, over time, and by region and (2) there is a stable relationship between sales and profits companywide, over time, and by region. You have 11 quarters of sales and profit data in an Excel file (from 2015 to 2017); you don't have final-quarter data for 2017. You have omitted pre-2015 data because the company went through a major merger in late 2014 that changed the nature and scope of its businesses. Using nonzero axes in bar charts visually distorts the size of differences.
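The warning above about nonzero axes can be demonstrated with a minimal text bar chart drawn from a zero baseline, so that bar length stays proportional to value. The regional sales figures are hypothetical.

```python
def bar_chart(data, width=40):
    """Render horizontal text bars scaled from a zero baseline, so each bar's
    length is proportional to its value and differences are not exaggerated."""
    top = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(value / top * width)
        lines.append(f"{label:<10}{bar} {value}")
    return "\n".join(lines)

sales = {"Europe": 800, "S.America": 400, "Online": 200}  # hypothetical regional sales
print(bar_chart(sales))
```

Because the bars start at zero, the 800 bar is exactly twice the 400 bar; a chart whose axis started at, say, 300 would make Europe look many times larger than S.America rather than twice as large.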

Computer Networks and Data Communication

A computer network consists of two or more computing devices connected by a communications channel to exchange data. The communications channel uses specific devices, and a set of protocols (i.e., standards), to exchange data and share resources among devices. Uses—What Do Networks Do? Common uses and applications of networks include: A. File services—Enable users to share files B. Email services—Link to email systems C. Print services—Enable connections to printers D. Remote access capabilities—Enable connections to remote devices (e.g., a special-purpose computer such as a supercomputer) E. Directory services—Identify what is available on a network F. Network management—Functions to control and maintain the network, including information about network traffic, access attempts, and errors G. Internet connection services, including: 1. Dynamic host configuration protocol (DHCP)—A protocol used to obtain and release IP addresses. This protocol enables computers to connect to an IP-based network such as the Internet. 2. Domain name system (DNS)—The system that translates human-readable domain names into the IP addresses of network nodes Components of a Network A. Nodes—Any device connected to the network is a node: 1. Client—A node, usually a microcomputer, used by end users; a client uses network resources but does not usually supply resources to the network. May be "fat" or "thin" (see the lesson on "Information Systems Hardware"). 2. Server—A node dedicated to providing services or resources to the rest of the network (e.g., a file server maintains centralized application and data files, a print server provides access to high-quality printers, etc.); servers are indirectly, not directly, used by end users. B. Communications media 1. Wired communications media a. Copper or twisted pair i. Traditionally used for phone connections ii. The slowest, least secure (e.g., easy to tap), and most subject to interference of the wired media iii. Recent modifications have, however, improved performance significantly iv. 
Least expensive media v. Performance degrades with cable length b. Coaxial cable—Similar to the cable used for television; coaxial cable is faster, more secure, and less subject to interference than twisted pair but has a slightly higher cost. c. Fiber optic cable—Extremely fast and secure; fiber optic communications are based on light pulses instead of electrical impulses, so they are not subject to electrical interference and the signal does not degrade over long distances; more expensive to purchase and install. Network operating system—Controls communication over the network and access to network resources: 1. Peer-to-peer systems—All nodes share in communications management; no central controller (server) is required; these systems are relatively simple and inexpensive to implement; used by LANs. 2. Client/server systems—A central machine (the server) presides as the mediator of communication on the network and grants access to network resources; client machines are users of network resources but also perform data processing functions; used by LANs and by the world's largest client/server network—the Internet. 3. Hierarchical operating systems a. Use a centralized control point generally referred to as the host computer b. The host not only manages communications and access to resources but also often performs most of the data processing c. Nodes connected to these systems often function as dumb terminals, which are able to send and receive information but do not actually process the data d. Used by WANs Types of Networks Networks are characterized by the way that the network is constructed and managed, and by the network's geographic reach. A. Local area networks (LANs)—Local area networks were so named because they were originally confined to very limited geographic areas (a floor of a building, a building, or possibly several buildings in very close proximity to each other). 
With the advent of relatively inexpensive fiber optic cable, LANs can extend for many miles. For example, many urban school districts have LANs connecting all of the schools in the district. B. Wide area networks (WANs) Although WANs can vary dramatically in geographic area, most are national or international in scope. C. Storage area networks (SANs) A type of, or variation of, LAN that connects storage devices to servers. D. Personal area networks (PANs) A PAN is a short-range network (approximately 30 feet or 10 meters) that often connects a single device (e.g., headphones) to a network. The most common use of PANs is to connect devices using "Bluetooth" technology (which is a communications protocol). Walk through any airport and you'll see lots of people using PANs (via Bluetooth). V. Network Management Tools The following tools and reports help manage an organization's networks: A. Response time reports How fast is the system responding to user commands? Generated reports should indicate average, best, and worst response times across the system (e.g., by lines or systems). B. Downtime reports Which portions of the system were unavailable and for how long? C. Online monitors These monitors check transmission accuracy and errors. D. Network monitors These monitors display network node status and availability. E. Protocol analyzers Diagnostic tools that monitor packet flows across the system. They are useful in diagnosing the volume and nature of network traffic, and in analyzing network performance. F. Simple Network Management Protocol (SNMP) A software-based, TCP/IP protocol that monitors network performance and security. G. Help desk reports Useful in identifying network problems raised by users.
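The IP addresses that DHCP leases and that DNS maps names onto can be explored with Python's standard ipaddress module. A minimal sketch (the addresses and pool shown are illustrative, not from any particular network):

```python
# A sketch of working with IP addresses (the values DHCP leases and
# DNS resolves names to), using only Python's standard library.
import ipaddress

# A typical private (internal LAN) address and a public one.
internal = ipaddress.ip_address("192.168.1.10")
public = ipaddress.ip_address("8.8.8.8")

print(internal.is_private)   # True: 192.168.x.x is reserved for private LANs
print(public.is_private)     # False: reachable on the public Internet

# DHCP hands out addresses from a pool; a pool can be modeled as a network.
pool = ipaddress.ip_network("192.168.1.0/24")
print(pool.num_addresses)    # 256: a /24 network contains 2^8 addresses
print(internal in pool)      # True: the internal address falls in the pool
```

The `is_private` check mirrors the distinction between addresses routable on the public Internet and those reserved for internal networks.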

Introduction to E-Business and E-Commerce

Business-to-Business (B2B) E-Commerce—Involves electronic processing of transactions between businesses and includes electronic data interchange (EDI), supply chain management (SCM), and electronic funds transfer (EFT). Business-to-Consumer (B2C) E-Commerce—Involves selling goods and services directly to consumers, almost always using the Internet and web-based technology. B2C e-commerce relies heavily on intermediaries or brokers to facilitate the sales transaction. Business-to-Employee (B2E)—Involves the use of web-based technology to share information with, and interact with, an organization's employees (e.g., through portals and intranets). Business-to-Government (B2G)—Involves the growing use of web-based technologies to provide services to, and support, governmental units (e.g., providing property tax data online, paying parking tickets online, online contract bidding). E-Commerce—Requisites and Risks A. Requisites—E-commerce depends on trust in two parties: 1. Your trading partners—Are they honest, and are they who they represent themselves to be? To this point, there is a New Yorker magazine cartoon in which two dogs are talking, while one of the dogs is seated at a computer. One dog says to the other, "On the Internet, no one knows that you are a dog." 2. The site or service provider—One of the keys to eBay becoming a global business, for example, was its remarkable ability to convince strangers to trust one another and to trust eBay with their personal financial information. B. Risks of E-Commerce 1. System availability—Online systems must be stable and available. This was an early challenge to eBay. In its early days, it asked users to stay off the system during peak hours! 2. Security and confidentiality—Data breaches—for example, the 2013 Target credit and debit card breach—can irreparably harm trust in systems and companies. 3. Authentication—Is an online person or company who they say they are? 
Increasingly, e-commerce sites (e.g., Upwork) include verification of identity as a prerequisite to site use. 4. Nonrepudiation—This is, essentially, the existence of an audit trail that renders actions verifiable. Hence, one cannot deny, after a transaction, one's role in it. 5. Integrity—Is the system secure from hackers and crackers? Creating a system that is immune to hacks is a formidable undertaking. Even the FBI website has been hacked. C. Risks of Not Implementing E-Commerce 1. Lost customers—Your customers may find it cheaper and easier to buy online from competitors. 2. Limited growth—E-commerce offers global reach. 3. Limited markets—E-commerce turns what once were small, highly specialized markets (e.g., collecting antique fountain pens) into large, worldwide markets. III. E-Commerce Models—How do companies make money using e-commerce and e-business technologies? Typical e-commerce models include: A. Electronic Marketplaces and Exchanges—These marketplaces bring together buyers and sellers of goods who connect virtually rather than physically to one another. The most common example is probably eBay, but many special-interest marketplaces focus on specific industries, for example, buyers and sellers in the chemical industry. B. Viral Marketing—Organizations increasingly attempt to increase brand awareness or generate sales by inducing people to send messages to friends using social networking applications. For example, users of Facebook are familiar with the icon that allows Facebook users to post articles and advertisements on their Facebook pages. C. Online Direct Marketing—Many companies now have large online presences to sell directly to consumers or other businesses. Examples include Amazon, a pioneer in direct online sales, and Walmart, whose virtual presence is an important part of the company's business strategy. D. Electronic Tendering Systems—These tendering or bidding systems allow companies to seek bids for products or services that the organizations wish to purchase. 
General Electric pioneered tendering systems. 1. Also called "e-procurement systems." E. Social Networking/Social Computing—Is concerned with how people use information systems to connect with others. Examples include instant messaging, wikis, Facebook, Twitter, and other examples that purchasers of this course are likely familiar with. Social networks are social structures composed of nodes, representing individuals or social network resources, that link to one another. Social network service examples include Facebook, YouTube, Flickr, and many other emerging sites.
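The integrity and nonrepudiation requisites discussed above are commonly supported with cryptographic digests, for example a keyed hash (HMAC) that both parties to a transaction can compute. A minimal sketch using Python's standard library (the key and message are hypothetical):

```python
# Sketch: verifying message integrity with a keyed hash (HMAC), one common
# building block behind the integrity and nonrepudiation goals noted above.
import hashlib
import hmac

secret_key = b"shared-secret"            # hypothetical key known to both parties
message = b"Ship 100 units to account 42"

# The sender computes a tag over the message; the receiver recomputes it
# and compares. Any change to the message changes the tag.
tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, received_tag: str) -> bool:
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

print(verify(secret_key, message, tag))                          # True
print(verify(secret_key, b"Ship 900 units to account 42", tag))  # False: tampered
```

Real e-commerce systems layer this idea into protocols such as TLS and digital signatures; the sketch shows only the core check.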

Computer Crime, Attack Methods, and Cyber-Incident Response

Computer Crimes—Computer crimes involve the use of the computer to break the law, including gaining unlawful access to computer resources. Computer crime categories include: A. Computer or System as Target—A perpetrator may use a computer to deny others the use or services of a computer system or network. Examples include denial of service (DoS) attacks and hacking. B. Computer as Subject—A perpetrator may unlawfully gain access to many computers on a network and use these computers to perpetrate attacks on other computers. Examples include distributed denial of service (DDoS) attacks and the use of malware (i.e., programs that exploit system and user vulnerabilities) to gain access to computers and networks. C. Computer as Tool—A perpetrator unlawfully uses a computer or network to gain access to data or resources (other than the computer itself). Examples include fraud, unauthorized access breaches, phishing, and installing key loggers. D. Computer as Symbol/User as Target—A variation on the computer-as-tool crime. Here, a perpetrator deceives a user to obtain access to confidential information. Examples—social engineering methods including phishing, fake websites, and spam email. Who Commits Cyber-Crimes? The categories of computer criminals include: A. Nation-States and Spies—Some foreign nations (e.g., China, Russia) seek intellectual property and trade secrets for military and competitive advantage. B. Industrial Spies—Organizations seek intellectual property and trade secrets for competitive advantage. C. Organized Criminals—Organized crime engages in computer crime for profit (e.g., blackmail that threatens to harm data resources). D. Hacktivists—Those who make social or political statements by stealing or publishing an organization's sensitive information (e.g., WikiLeaks). E. Hackers—Some individuals commit cyber-crimes for fun and for the challenge of breaking into others' systems. 
Hackers can be "good person" or "white hat" hackers as well as "bad person" or "black hat" hackers. Computer and System Attack Methods—Some of the more common logical (not physical facility) attack methods appear below. We consider physical facility attack methods in a separate lesson. A. Access Issues—These are methods of gaining unauthorized access to computing resources. 1. Back door or trapdoor—A software program that allows an unauthorized user to gain access to the system by sidestepping the normal logon procedures. Historically, programmers used back doors to facilitate quick access to systems under development. If left in a system or installed by a hacker, they enable unauthorized access. 2. Botnets (or zombie computers)—A collection of computers under the control of a perpetrator that is used to perpetrate DoS, adware, spyware, and spam attacks. 3. Eavesdropping—The unauthorized interception of a private communication, such as a phone call, email, or instant message transmission. 4. Hacking or cracking—Malicious or criminal unauthorized penetration of computer systems and networks. 5. Hijacking—An attacker takes control of a system, network, or communication. 6. Identity theft—Using someone else's identity (e.g., Social Security number) to create credit cards, establish loans, or enter into other transactions. 7. Keystroke loggers—Software that records keystrokes (e.g., to steal passwords or logon information). 8. Man-in-the-middle attack—A perpetrator establishes a connection between two devices and then pretends to be each party, thereby intercepting and interfering with messages between the parties (e.g., to steal passwords or credit card information). a. Packet sniffing—Programs called packet sniffers capture packets of data as they move across a computer network. 
While administrators use packet sniffing to monitor network performance or troubleshoot problems with network communications, hackers also use these tools to capture usernames and passwords, IP addresses, and other information that can help them break into the network. Packet sniffing on a computer network is similar to wiretapping a phone line. This is one form of a man-in-the-middle attack. 9. Masquerading or impersonation—Pretending to have authorization to enter a secure online location. 10. Password crackers—Once a username has been identified, hackers can use password-cracking software to generate many potential passwords and use them to gain access. Password cracker programs are most effective against weak passwords (i.e., passwords with fewer than eight characters, that use only one letter case, or that do not include numbers or special symbols). 11. Phishing—A deceptive request for information delivered via email. The email asks the recipient to either respond to the email or visit a website and provide authentication information. You probably get several phishing queries every week. 12. Phreaking—A form of hacking on telephone communications (e.g., tapping into a conversation or getting phone services for free). 13. Session masquerading and hijacking—Masquerading occurs when an attacker identifies an IP address (usually through packet sniffing) and then attempts to use that address to gain access to the network. If the masquerade is successful, then the hacker has hijacked the session—gained access to the session under the guise of another user. 14. Malicious software (malware)—Programs that exploit system and user vulnerabilities to gain access to the computer; there are many types of malware. 15. Social engineering or spoofing—Using deceit or deception to gain logical access to the system. The deception is intended to persuade employees to provide usernames and passwords to the system. 
These deceptive requests may be delivered verbally or through email, text messaging, or social networking sites. Fraudsters may spoof by faking an identity (e.g., a company or friend) or an email (e.g., pretending to be your bank or a friend of yours) or by creating a website that mimics a real website. 16. Spyware—Adware that watches a user's computer activity without proper consent, reporting the information to the creator of the spyware software. 17. Trojan horse—A malicious program hidden inside a seemingly benign file. Frequently used to insert back doors into a system (see above). 18. War chalking, driving, and walking—Multiple methods for identifying access points in order to gain unlawful access to wireless networks. B. Service Issues—These concern efforts to block users' access to computing resources. 1. Denial of service (DoS) attack—Rather than attempting to gain unauthorized access to IT resources, some attackers threaten the system by preventing legitimate users from accessing the system. Perpetrators instigate these attacks, using one or many computers, to flood a server with access requests that cannot be completed. These include ransom and blackmail DoS attacks in which the criminal threatens to deny service unless the user pays a ransom or engages in a specific act (e.g., grants access to their system). 2. Email bombing or spamming—Attacking a computer by sending thousands or millions of identical emails to the address. C. Data Issues—These concern efforts to inappropriately access or change data. 1. Data diddling—Changing data in an unauthorized manner (forgery) either before or during input into the computer system (e.g., changing credit ratings or fixing salaries). 2. Data leakage—Uncontrolled or unauthorized transmission of classified information from a data center or computer system to outside parties (e.g., remote employees increase the risk of data leakage if networks are insecure). 
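The earlier point that password crackers succeed mainly against weak passwords can be quantified: the brute-force search space is the character-set size raised to the password length, so both length and character variety matter. A simple illustrative calculation (the 94-character figure assumes upper case, lower case, digits, and roughly 32 printable symbols):

```python
# Sketch: why short, single-case passwords are weak. The brute-force
# search space is charset_size ** length.
def search_space(charset_size: int, length: int) -> int:
    return charset_size ** length

lowercase_only = search_space(26, 8)            # 8 chars, lowercase only
full_charset = search_space(26 + 26 + 10 + 32, 8)  # upper, lower, digits, symbols

print(lowercase_only)                   # 208827064576 (about 2 x 10^11)
print(full_charset // lowercase_only)   # roughly 29,000 times more combinations
```

Adding length helps even more than adding character variety: each extra character multiplies the space by the full character-set size.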
D. Asset, Data, and Hardware Destruction and Misuse—These damage or destroy assets and resources. 1. Logic bomb—An unauthorized program planted in the system; the logic bomb lies dormant until the occurrence of a specified event or time (e.g., a specific date, the elimination of an employee from active employee status). 2. Salami fraud—Taking a small amount of money from many accounts using a variety of rounding methods. 3. Software piracy—Unauthorized copying of software. 4. Spam—Unsolicited mass emailings; the volume of spam can make systems unusable. 5. Superzapping—The use of powerful software to access secure information while bypassing normal controls. 6. Virus—An unauthorized program, usually introduced through an email attachment, that copies itself to files in the user's system; these programs may actively damage data, or they may be benign. 7. Worm—Similar to viruses except that worms attempt to replicate themselves across multiple computer systems. Worms generally try to accomplish this by activating the system's email client and sending multiple emails. Unlike viruses, worms generally exist as stand-alone programs rather than inside other files. Cyber-Incident Responses A. A cyber-incident is a violation of an organization's security policy (Source: Software Engineering Institute, Carnegie Mellon University). The well-managed organization will have a cyber-incident protocol (i.e., response and action plan) as a part of its business continuity plan. This protocol will specify the level of the incident. B. An action plan for a cyber-incident will likely include: 1. Planning for and testing of the protocol, including specification of the response team 2. Event detection procedures 3. Event logging procedures (i.e., who, what, where, when, and how). For example, verifying that a claimed event occurred, describing the event, the systems affected, and the period of outages. 4. 
Triage and incident analysis, including a severity classification of the level of the event (e.g., from a minor to a crisis event) with a set of actions following from the event's classification. Will also include an assessment of the likelihood of repetition and a search of information sources regarding the incident. 5. Containment and removal of threats 6. Decision and action regarding event announcement or secrecy 7. Recovery from incidents 8. Closure 9. Event reporting 10. Monitoring and system revisions, as needed Strategies for Preventing and Detecting Computer Crime A. Given the importance of information assets, organizations must invest to protect and defend their information systems. Strategies for decreasing the likelihood of crime and reducing losses include: 1. Make crime harder (i.e., less likely)—By creating an ethical culture, adopting an appropriate organizational structure, requiring active oversight, assigning authority and responsibility, assessing risk, developing security policies, implementing human resource policies, supervising employees effectively, training employees, requiring vacations, implementing development and acquisition controls, and prosecuting fraud perpetrators vigorously. 2. Increase the costs (difficulty) of crime—By designing strong internal controls, segregating duties, restricting access, requiring appropriate authorizations, utilizing documentation, safeguarding assets, requiring independent checks on performance, implementing computer-based controls, encrypting data, and fixing software vulnerabilities. 3. Improve detection methods—By creating an audit trail, conducting periodic audits, installing fraud detection software, implementing a fraud hotline, employing a computer security officer, monitoring system activities, and using intrusion detection systems. 4. 
Reduce fraud losses—By maintaining adequate insurance, developing disaster recovery plans, backing up data and programs, and using software to monitor system activity and recover from fraud.
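One of the detection strategies above is "creating an audit trail." A tamper-evident audit trail can be sketched by chaining log entries with hashes, so that altering any earlier entry is detectable. This is a minimal, hypothetical illustration, not a production logging design:

```python
# Sketch: a tamper-evident audit trail. Each entry's hash covers the previous
# entry's hash, so changing any earlier entry breaks the chain on verification.
import hashlib

def entry_hash(prev_hash: str, record: str) -> str:
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

def build_log(records):
    log, prev = [], ""
    for rec in records:
        h = entry_hash(prev, rec)
        log.append((rec, h))
        prev = h
    return log

def verify_log(log) -> bool:
    prev = ""
    for rec, h in log:
        if entry_hash(prev, rec) != h:
            return False
        prev = h
    return True

log = build_log(["user=amy action=login", "user=amy action=post_je amount=500"])
print(verify_log(log))                                 # True: chain intact
log[0] = ("user=amy action=DELETED", log[0][1])        # tamper with first entry
print(verify_log(log))                                 # False: tampering detected
```

The same chaining idea (each record committing to its predecessor) underlies many real audit-log and ledger designs.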

Databases and Data Structures

Data Structures in Accounting Systems—All information and instructions used in IT systems are executed in binary code (i.e., zeros and ones). This section looks at how the zeros and ones are strung together to create meaning. A. Bit (binary digit)—An individual 0 or 1; the smallest piece of information that can be represented. B. Byte—A group of (usually) eight bits that are used to represent alphabetic and numeric characters and other symbols (3, g, X, ?, etc.). Several coding systems are used to assign specific bytes to characters; ASCII and EBCDIC are the two most commonly used coding systems. Each system defines the sequence of zeros and ones that represent each character. C. Field—A group of characters (bytes) identifying a characteristic of an entity. A data value is a specific value found in a field. Fields can consist of a single character (Y, N) but usually consist of a group of characters. Each field is defined as a specific data type. Date, Text, and Number are common data types. D. Record—A group of related fields (or attributes) describing an individual instance of an entity (a specific invoice, a particular customer, an individual product). E. File—A collection of records for one specific entity (an invoice file, a customer file, a product file). In a database environment, files are sometimes called tables. F. Database—A set of logically related files. Study Tip Except for "file," the words get longer as the units get bigger: Bit (3 characters) Byte (4 characters) Field (5 characters) Record (6 characters) File (4 characters) Database (8 characters) Flat and Proprietary Files A. Data in relational databases is stored in a structured, "normalized" form. Normalization means that the data is stored in an extremely efficient form that minimizes data redundancy and helps prevent data errors. But the problem with normalized data is that it is difficult to use outside of the database. B. 
To share data outside of a relational database, data is often stored as text using a delimiter to separate the fields. This is called a "flat file." 1. Two examples of flat file types are CSV (comma-separated values) and TSV (tab-separated values), where the delimiters for these file types are (obviously) commas ("," the "C" in CSV) and tabs (the "T" in TSV). 2. AICPA Audit Data Standards recommend the use of a "pipe" (i.e., "|") as a delimiter for flat files since the pipe is rarely used in Western languages (e.g., English). 3. Flat files are great for sharing simple data sets, but they are inefficient in their storage of complex data sets since flat files are intentionally not normalized. So, flat files are easy to use but inefficient for complex data sets, and they include data redundancy. C. Many proprietary file types also exist for sharing files. These file types are created by organizations or individuals within specific software packages. 1. Currently, the most common proprietary file type for file sharing is Microsoft Excel (.xls or .xlsx). Another example of a proprietary file type is PDF (i.e., portable document format), which was created by Adobe. 2. An advantage of sharing Excel files is that almost every user can open and share them. 3. A disadvantage of sharing Excel files is that they are limited to about 1 million rows, which seems like a lot of data until one starts working with "big data" sets. Databases A. A Set of Logically Related Tables (or Files)—Most business data is highly interrelated, and consequently, most business data is stored in databases. B. Database Management System 1. A system for creating and managing a well-structured database. A "middleware" program that interacts with the database application and the operating system to define the database, enter transactions into the database, and extract information from the database; the DBMS uses three special languages to accomplish these objectives: a. 
Data definition language (DDL)—Allows the definition of tables and fields and relationships among tables. b. Data manipulation language (DML)—Allows the user to add new records, delete old records, and update existing records. c. Data query language (DQL)—Allows the user to extract information from the database; most relational databases use structured query language (SQL) to extract the data; some systems provide a graphic interface that essentially allows the user to "drag and drop" fields into a query grid to create a query; these products are usually called query-by-example (QBE). 2. Some examples of relational database management systems: a. Microsoft SQL Server b. Oracle Database c. Oracle MySQL d. Teradata 3. Database controls—A DBMS should include the following controls: a. Concurrent access management—For example, to control multiple users attempting to access the same file or record. These controls prevent conflicting record changes (e.g., lost updates) and errors. b. Access controls—i.e., tables that specify who can access what data within the system. Used to limit access to appropriate users. c. Data definition standards—For example, to determine and enforce what data elements must be entered and which are optional. These improve and ensure data quality. d. Backup and recovery procedures—To ensure integrity in the event of system errors or outages. These prevent data loss and corruption. e. Update privileges—Define who can update the data, and when the data can and should be updated. These are used to control data changes and improve data integrity. f. Data elements and relationships controls—To ensure data accuracy, completeness, and consistency. F. Database Terminology and Principles 1. A view is a presentation of data that is selected from one or more tables. 2. A view is often created with a query—which is a request to create, manipulate, or view something in a database. 3. The schema is the structure of the database. 
That is, what are the tables in the database and how do the tables connect with one another? 4. How do the tables connect with one another? They are connected through keys (primary and foreign). A primary key is a field or column that uniquely identifies every record (or row) in a table. a. For example, imagine that we have a table of customers and a table of customer orders. The primary key of customer ID will uniquely identify every customer in the customer table and will connect (link) the customer table to the orders table. b. A foreign key is an attribute in one table that is the primary key of another table; it is used to link the two tables. A foreign key does not uniquely identify a record (or row) in the table in which it appears. c. While customer ID is the primary key of the customer table, it will be a foreign key in the orders table. The primary key of the orders table will be order ID. d. The customer ID (the foreign key in the orders table) will connect the orders table to the customer table, but not every order in the orders table will have a unique customer ID (since customers, we hope, will place more than one order). However, every order will have a unique order ID, which is why it is the primary key for the orders table. e. Finally, a secondary key is an attribute in a table that is used to find a record (or row) in a table. A secondary key does not uniquely identify a record or row in a table. For example, in an orders table, the order number will be the primary key, and the customer ID may serve as a secondary key (e.g., to look up all of a customer's orders).
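The customers-and-orders example above can be demonstrated with Python's built-in sqlite3 module, which also illustrates the three DBMS languages: DDL to define the tables and keys, DML to add records, and DQL (SQL) to query across the key relationship. Table and column names are illustrative:

```python
# Sketch: primary and foreign keys with Python's built-in sqlite3 module.
# Table and column names are illustrative, not from any particular system.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: customer_id is the primary key of customers and a foreign key in
# orders; order_id uniquely identifies each order.
cur.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(customer_id),
    amount REAL)""")

# DML: add records. One customer places two orders (so customer_id repeats
# in orders, while each order_id is unique).
cur.execute("INSERT INTO customers VALUES (1, 'Acme Co')")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(100, 1, 250.0), (101, 1, 75.5)])

# DQL: SQL joins the tables through the key columns.
cur.execute("""SELECT c.name, COUNT(*), SUM(o.amount)
               FROM customers c JOIN orders o ON c.customer_id = o.customer_id
               GROUP BY c.name""")
print(cur.fetchall())   # [('Acme Co', 2, 325.5)]
```

The join works precisely because the foreign key in orders matches the primary key in customers.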

Introduction to Enterprise-Wide and Cloud-Based Systems

Enterprise Architecture—An organization's enterprise architecture is its effort to understand, manage, and plan for IT assets. An organization's IT security governance plan must articulate with, and be informed by, the organization's enterprise architecture plan. II. Enterprise-Wide or Enterprise Resource Planning (ERP) Systems—ERPs provide transaction processing, management support, and decision-making support in a single, integrated, organization-wide package. By integrating all data and processes of an organization into a unified system, ERPs attempt to manage and eliminate the organizational problem of consolidating information across departments, regions, or divisions. A. Goals of ERP systems: 1. Global visibility—The integration of all data maintained by the organization into a single database, which binds the whole organization together; once the data is integrated into a single database, it is available to anyone with appropriate authorization. 2. Cost reductions—Long-run systems maintenance costs are reduced by eliminating the costs associated with maintaining multiple systems. 3. Employee empowerment—Global visibility of information improves lower-level communication and decision making by making all relevant data available to the employee; this empowers the employee and, in turn, makes the organization more agile and competitive in a volatile business environment. 4. "Best practices"—ERP system processes are based on analysis of the most successful businesses in their industry; by adopting the ERP system, the organization automatically benefits from the implementation of these "best practices." B. 
Components of an ERP System—ERP systems are typically purchased in modules (e.g., Sales, Logistics, Planning, Financial Reporting); ERP vendors design their systems to be purchased as a unit, that is, from a "single source"; however, many organizations pick and choose ERP modules from several vendors according to their perception of how well each module fits with their organization's way of doing business, a practice called choosing the "best of breed." 1. Online transaction processing (OLTP) system—The modules comprising the core business functions: sales, production, purchasing, payroll, financial reporting, etc. These functions collect the operational data for the organization and provide the fundamental motivation for the purchase of an ERP. 2. Online analytical processing (OLAP) system—Incorporates data warehouse and data mining capabilities within the ERP. C. ERP System Architecture—ERP systems are typically implemented using a client/server network configuration; although early implementations generally utilized proprietary LAN and WAN technologies, current implementations often use Internet-based connections. 1. ERPs may use two-tiered or three-tiered architectures; because two-tiered systems combine programs and data on a single server, three-tiered architecture is preferred, particularly in larger systems. III. Cloud-Based Systems and Storage—An organization's decision to deploy cloud-based systems should flow from its enterprise architecture plan, which should include consideration of an IT sourcing strategy. An IT sourcing strategy concerns the organization's plan to insource, outsource, or pursue a hybrid strategy (mixed insourcing and outsourcing) in relation to its IT assets. A. Cloud systems are also called the cloud, cloud computing, cloud storage, and cloud services. In cloud-based storage, a virtual data pool is created by contracting with a third-party data storage provider. 
When managed well, such an approach uses outsourcing to gain the benefits of relying on a professionally managed data storage provider: a massive, universally accessible data store, at a reasonable cost, with minimized risks of unauthorized access by intruders. When managed poorly, risks include data loss, system penetration and access by intruders, and failures of internal control, including inadequate monitoring and testing of system risks. Such a system might also include the creation of a virtual private network (VPN) to limit access to the system and encrypt sensitive information. 1. Cloud service providers (CSPs)—Vendors who provide cloud services (e.g., Amazon Cloud, Dropbox). B. Cloud Deployment Models 1. Private cloud—The cloud infrastructure exists solely for an individual organization. 2. Community cloud—A cloud infrastructure that is shared by users in a specific community (e.g., municipal governments, an industry association, organizations with common compliance requirements). 3. Public cloud—A cloud infrastructure available to the public or a large industry group (e.g., Dropbox, Amazon Cloud services). 4. Hybrid cloud—A cloud that includes two or more of the above types of clouds, with partitions between the types of services. 5. Cloud of clouds (intercloud)—A linked network of cloud storage systems. C. Cloud Service Delivery Models—Describe the services that a client contracts for: 1. Infrastructure as a service (IaaS)—Use of the cloud to access a virtual data center of resources, including a network, computers, and storage. Examples: Amazon Web Services and Carbonite. 2. Platform as a service (PaaS)—A development environment for creating cloud-based software and programs using cloud-based services. Example: Salesforce.com's Force.com. 3. Software as a service (SaaS)—Remote access to software. Office 365, a suite of office productivity programs, is an example of SaaS. D. Benefits of Cloud-Based Systems: 1. 
Universal access—System data is available at any site with Internet access. 2. Cost reductions—Long-run systems maintenance costs are reduced by eliminating the costs associated with maintaining multiple systems. 3. Scalability—Cloud-based systems are highly scalable, meaning that they grow with an organization. Specifically, organizations can buy only the capabilities and storage that they currently need but can contract for expansion as organizational needs evolve. 4. Outsourcing and economies of scale—Cloud-based systems allow organizations to outsource data storage and management to organizations with the capabilities and competencies to manage these facilities. By outsourcing to organizations that specialize in cloud services, economies of scale may be obtained, wherein the cloud provider realizes cost benefits, some of which are passed on to the organization that purchases cloud services. An additional benefit is cost savings due to a reduced need for internal IT personnel. 5. Enterprise-wide integration—Cloud-based systems can be integrated with enterprise-wide systems to allow the seamless integration of organizations across units and geography. Indeed, some argue that fully realizing the benefits of ERPs requires cloud-based systems. 6. Deployment speed—Typically, CSPs can provide services much faster than organizations that attempt to duplicate these services using internal IT departments. E. Risks of Cloud-Based Systems 1. The risk of data loss and outages is potentially increased by putting all of one's data online with one vendor. In reality, however, most large cloud-based vendors store data at multiple sites, meaning that the risks of data loss with cloud-based systems, while important, may not be as great as might be feared. 2. There is an increased risk of system penetration by hackers, crackers, and terrorists when all one's data is stored with one vendor. 3. 
Cloud-based systems rely on the competence, professionalism, reliability, viability, and transparency of the CSP. Hence, diligence in vendor screening and selection is essential to cloud-computing security and success. In addition, CSPs may be unwilling to divulge details of their operations that auditors (internal or external) require to ensure that an organization's data is stored safely. 4. Data stored in community and public clouds may be vulnerable to actions by other tenants of the CSP. This may create legal issues related to data privacy and data availability. 5. Storing data with a high-profile CSP (e.g., Amazon Cloud) can make one a high-profile target for cyber-attackers. F. Summary—Despite the risks, many organizations are moving their primary storage to cloud-based systems. The online analytical processing system (OLAP) incorporates data warehouse and data mining capabilities within the ERP. The online transaction processing system (OLTP) records the day-to-day operational transactions and enhances the visibility of these transactions throughout the system. It is primarily the OLAP, not the OLTP, that provides an integrated view of transactions in all parts of the system. The OLTP is primarily concerned with collecting data (and not analyzing it) across the organization.
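The OLTP/OLAP distinction can be sketched in a few lines of Python. This is an illustrative toy only; the transaction records and field names are invented, and real OLTP/OLAP systems are database platforms, not scripts:

```python
from collections import defaultdict

# OLTP side: individual day-to-day transactions are recorded as they occur.
transactions = [
    {"type": "sale", "region": "East", "amount": 500},
    {"type": "sale", "region": "West", "amount": 300},
    {"type": "purchase", "region": "East", "amount": 200},
]

# OLAP side: aggregate the collected transactions to give an
# integrated, analytical view across dimensions.
totals = defaultdict(int)
for t in transactions:
    totals[(t["type"], t["region"])] += t["amount"]

print(totals[("sale", "East")])  # 500
```

The loop mirrors the division of labor described above: the list of records is the OLTP's concern (collecting data), while the aggregation is the OLAP's concern (analyzing it).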

The Internet—Structure and Protocols

Key Internet Protocols 1. Transmission Control Protocol/Internet Protocol (TCP/IP)—Two core network protocols that underlie the Internet. a. Hypertext Transfer Protocol (HTTP)—The foundation protocol for data transmission on the Internet. Part of the TCP/IP protocol. b. Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol (IMAP)—Protocols for e-mail services. Part of the TCP/IP protocol. c. In a packet-switched network, information is grouped into packets for transmission. TCP/IP is a packet-switched network protocol. The Internet is the world's largest packet-switched network. 2. Extensible Markup Language (XML)—Protocol for encoding (tagging) documents for use on the Internet. a. Extensible Business Reporting Language (XBRL)—XML-based protocol for encoding and tagging business information. A means of consistently and efficiently identifying the content of business and accounting information in electronic form. i. Extensible means that users can create taxonomies for specific environments, for example, for taxation reporting, environmental regulation reporting, or automobile manufacturing. ii. XBRL is used in filings with the Securities and Exchange Commission that are made available on EDGAR, the SEC's Electronic Data Gathering, Analysis, and Retrieval database. iii. Some companies now report their financial statements in both paper and XBRL formats. 3. Other protocols a. Hypertext Markup Language (HTML)—Core language for web pages. The basic building-block protocol for constructing web pages. b. File Transfer Protocol (FTP)—Used for file transfer applications. c. Instant messaging (IM)—Protocol for instant messaging. d. Uniform Resource Locator (URL)—A protocol for finding a document by typing in an address (e.g., www.azdiamondbacks.com). URLs work like addresses on mail processed by the post office. e. World Wide Web (the web or WWW)—A framework for accessing linked resources (e.g., documents, pictures, music files, videos, etc.) 
spread out over the millions of machines all over the Internet f. Web browser—Client software (e.g., Internet Explorer, Firefox, Chrome, Mosaic, etc.) that provides the user with the ability to locate and display web resources g. Web servers—The software that "serves" (i.e., makes available) web resources to software clients. Web servers (e.g., Apache and Internet Information Server [IIS]) typically run on "server" hardware. However, many computing devices today support their own web server software. Intranets and extranets—Intranets and extranets are private (i.e., limited-access) networks built using Internet protocols. Therefore, users can access network resources through their web browser rather than a proprietary interface. This substantially reduces training time for users and system development time for programmers. Thus, intranets and extranets are rapidly replacing traditional proprietary LANs and WANs: 1. Intranets—Available only to members of the organization (business, school, association); intranets are often used to connect geographically separate LANs within a company. 2. Extranets—Intranets that are opened up to permit associates (company suppliers, customers, business partners, etc.) to access data that is relevant to them. Web 2.0—The so-called second generation of the web. Refers to web-based collaboration and community-generated content using web-based software tools such as: A. Blog—An asynchronous discussion, or web log, led by a moderator that typically focuses on a single topic. Similar to an electronic bulletin board. An efficient way to share information, views, and opinions. B. Wiki—An information-gathering and knowledge-sharing website developed collaboratively by a community or group, all of whom can freely add, modify, or delete content. C. Twitter—A micro-variation of a blog. Restricts input (tweets) to 280 characters (at the time this lesson was written); commonly used to "follow" friends, politicians, and celebrities. 
Increasingly, companies use Twitter to inform followers and to brand themselves and market their services. D. RSS (Really Simple Syndication)/ATOM Feeds—An easy way to get news and information. An XML application that facilitates the sharing and syndication of website content by subscription. RSS-enabled client software (including most browsers and RSS readers) automatically checks RSS feeds for new website content on a regular basis.
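The core idea behind XBRL—tagging business facts in XML so that software can identify their content consistently—can be sketched with Python's standard library. The element names and values below are invented for illustration; real XBRL elements come from published taxonomies:

```python
import xml.etree.ElementTree as ET

# Build a tiny XML document that tags a business fact, XBRL-style.
# "Revenue" and its attributes are invented, not a real XBRL taxonomy.
report = ET.Element("report")
revenue = ET.SubElement(report, "Revenue", attrib={"unit": "USD", "period": "FY2023"})
revenue.text = "1000000"

xml_text = ET.tostring(report, encoding="unicode")
print(xml_text)

# Because the content is tagged, any program can reliably extract it.
parsed = ET.fromstring(xml_text)
print(parsed.find("Revenue").text)  # 1000000
```

The point of the round trip is the one XBRL makes: once a fact carries a machine-readable tag, its meaning no longer depends on where it appears in a printed statement.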

Artificial Intelligence and Machine Learning

Robotic process automation (RPA)—The replacement of human labor with AI and robotics technology. Categories of AI: a. Machine learning (analysis)—Systems that use big data to learn rules and categories to enable prediction and classification. For example: neural networks. A common accounting application is classifying journal entries. b. Robotics (activity)—For example: machine-directed welding; controlling production, manufacturing, and distribution processes. c. Intelligent agents (engagement)—Computer "agents" that perform tasks, e.g., data harvesting and cleaning. Can also analyze market trends, e.g., in finding and purchasing airline tickets. Such systems interact with humans (e.g., Siri® on the Apple® iPhone®) and have natural language processing ability. d. Expert systems (analysis and activity)—Build and apply expertise in a domain. May include machine learning or intelligent agent subsystems. One classification identifies the level of intelligence exhibited by the AI hardware and software. (See the "Types of AI and Their Intelligence Levels" table.) 1. Data harvesting and cleaning a. Includes tasks such as acquiring or extracting data (e.g., pulling it from databases or websites), cleaning it (i.e., putting it in usable form), and validating it (i.e., confirming its accuracy). In the next generation of audits, n (the sample size) will be "all the data." Why sample when AI technologies make the entire population available for analysis? 2. Analyzing numbers a. Includes common financial and nonfinancial data analysis using, for example, advanced Excel functions and Tableau software. For example, Netflix suggests what you should watch next based on quantitative data about your ratings and viewing habits. 3. Analyzing words, images, and sounds a. Includes the analysis of natural language disclosures in financial statements (e.g., 10-K SEC filings) and from audio files of conference calls (e.g., of CEOs and CFOs presenting and answering questions about their companies). 
Other examples include analyzing vitae (résumés) for hiring decisions and Google Translate. 4. Performing digital tasks a. Includes business process reengineering to improve efficiency and effectiveness. For example, automating security systems to improve recognition and minimize errors through multifactor, automated identification. An area of emerging application. Another example: AI systems have reduced costs by helping lawyers and financial analysts review loan contracts (e.g., at JPMorgan Chase). 5. Performing physical tasks a. Includes applications of robotics and drones, which include both physical devices and IoT sensing and reporting technologies. The most publicized example is self-driving cars. 6. Self-aware AI a. Does not exist and likely will not exist for 20 to 100 years. Such systems would rival (and eventually exceed?) human intelligence. AI Benefits and Risks A. AI Benefits 1. Speed, accuracy, and cost. Ability to scale up and speed up applications and to reduce costs. Ability to obtain, clean, and analyze large (all available?) data in real time. Apply AI to robotics, pattern recognition, and complex communication (e.g., natural languages) problems. B. Risks of Implementing AI 1. Short term—AI systems often include the biases of their developers. Hence, AI systems must be monitored, reviewed, and validated to identify and correct such biases, which can include: a. Data biases—Harvesting and creating data sets that omit relevant variables and considerations. b. Prediction biases—Systems that include biased data will obviously generate biased predictions. In addition, the reasoning built into the system can reflect the biases of developers. c. Learned (or "emergent" or "confirmation") biases—Smart machines will learn and integrate the biases of those who train them. Hence, machines can be trained to "confirm" the biases of those who train them. 2. Medium term a. 
Lost jobs and disappearing employment—Some economists and computer scientists argue that many jobs will be lost to AI. Others argue that many jobs will be displaced but not lost. b. Legal and ethical issues—Who is liable when an AI system fails? How can AI systems be integrated into the workforce without disruption? How can privacy be protected when AI systems manage processes? Who monitors the (AI) monitors? 3. Long term—The end of humanity? Some experts, including Elon Musk, Bill Gates, and Stephen Hawking, argue that AI may lead to the end of humans, as machines exceed and replace human intelligence and, ultimately, humans (like "SkyNet" in the Terminator movie series). These experts argue that AI development should be regulated and controlled.
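The machine-learning accounting application mentioned above—classifying journal entries—can be illustrated with a deliberately tiny toy. The "training data," keywords, and account categories here are all invented; a real system would learn from thousands of labeled entries, not four:

```python
from collections import Counter, defaultdict

# Invented "training data": labeled journal entry descriptions.
training = [
    ("office supplies staples paper", "expense"),
    ("monthly rent payment office", "expense"),
    ("purchase delivery truck", "asset"),
    ("new server equipment purchase", "asset"),
]

# "Learn" word frequencies per category from the training examples.
vocab = defaultdict(Counter)
for text, label in training:
    vocab[label].update(text.split())

def classify(description):
    # Score each category by how often its learned words appear
    # in the new description; pick the best-scoring category.
    scores = {
        label: sum(counts[w] for w in description.split())
        for label, counts in vocab.items()
    }
    return max(scores, key=scores.get)

print(classify("rent payment for office space"))   # expense
print(classify("truck purchase for deliveries"))   # asset
```

This toy also makes the bias discussion concrete: if the invented training set omitted all asset examples (a data bias), every new entry would be "predicted" as an expense.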

IT Security Principles

Security A. Security is the foundation of systems reliability. Security procedures restrict access to authorized users only, protect the confidentiality and privacy of sensitive information, ensure the integrity of information, and protect against attacks. B. Security is a top management issue. Management is responsible for the accuracy of the internal reports and financial statements produced by the organization's information system (IS). Management must certify the accuracy of the financial statements and maintain effective internal controls. Privacy addresses whether the system's collection, use, retention, disclosure, and disposal of personal information conforms to its own commitments and to criteria set forth in generally accepted privacy principles (GAPP). An essential first step in assessing privacy issues is to inventory an organization's relevant data and to understand the laws and regulations to which these data are subject. GAPP includes these 10 subprinciples: A. Management—The entity defines, documents, communicates, and assigns accountability for its privacy policies and procedures. B. Notice—The entity tells people about its privacy policies and procedures and explains why personal information is collected, used, retained, and disclosed. C. Choice and Consent—The entity gives customers and users a choice to opt out (United States) or opt in (Europe) regarding the collection of their personal information. D. Collection—The entity collects personal information only for its identified purposes. E. Use and Retention—The entity uses personal information consistent with its statements about use. It retains personal information only as long as needed or allowed by law or regulations. F. Access—People can access, review, and update their information. G. Disclosure to Third Parties—Third parties receive information only according to policy and individual consent. H. 
Security for Privacy—Management takes reasonable steps to protect personal information against unauthorized access. I. Quality—Personal information is accurate, complete, and relevant. J. Monitoring and Enforcement—Someone monitors the entity's compliance with its privacy policies and procedures and has procedures to address privacy-related complaints and disputes. Categories of Criteria for Assessing IT Security Principles—The AICPA's Assurance Services Executive Committee (ASEC) also specifies the categories of criteria that can be used (e.g., in an audit or control review engagement) to assess the achievement of the trust services principles. These criteria are organized into seven categories, discussed next, all of which are covered in other CPAExcel® BEC lessons. A. Organization and Management—The organizational structures and processes for managing and supporting people within its operating units. This includes criteria addressing accountability, integrity, ethical values and qualifications of personnel, and the operational conditions in which they function. B. Communications—How the organization communicates its policies, processes, procedures, commitments, and requirements to authorized users and other parties engaged with the system. C. Risk Management, and Design and Implementation of Controls—How the entity: 1. Identifies potential risks that would affect the entity's ability to achieve its objectives 2. Analyzes those risks 3. Develops responses to those risks, including the design and implementation of controls and other risk-mitigating actions 4. Conducts ongoing monitoring of risks and the risk management process D. Control Monitoring—How the entity monitors the system, including the suitability and design and operating effectiveness of the controls, and takes action to address identified deficiencies. This is discussed, in depth, in the COSO control monitoring lessons. E. 
Logical and Physical Access Controls—How the entity restricts logical and physical access to the system, provides and removes that access, and prevents unauthorized access to the system. F. System Operations—How the entity manages the execution of system procedures and detects and mitigates processing deviations, including logical and physical security deviations. G. Change Management—How the organization identifies the need for changes to the system, manages these changes according to a controlled process, and prevents unauthorized changes from being made. Preventive, Detective, and Corrective Controls in IT Systems The categories of preventive, detective, and corrective controls are also important in IT systems and are discussed in the "Types and Limitations of Accounting Controls" lesson. Next we discuss a couple of illustrations of the application of these control categories to IT systems. The content below assumes that you can define preventive, detective, and corrective controls. A. Time-Based Model of Controls 1. Given enough time and resources, most preventive controls can be circumvented. Accordingly, detection and correction must be timely. 2. The time-based model evaluates the effectiveness of an organization's security by measuring and comparing the relationships among the three categories of controls: a. P = Time it takes an intruder to break through the organization's preventive controls b. D = Time it takes to detect that an attack is in progress c. C = Time to respond to the attack 3. If P > (D + C), then security procedures are effective. Otherwise, security is ineffective. B. Defense-in-Depth—The strategy of implementing multiple layers of controls to avoid having a single point of failure. 1. Computer security involves using a combination of firewalls, passwords, and other preventive procedures to restrict access. Redundancy also applies to detective and corrective controls. 2. 
Some examples of IT preventive controls used for defense-in-depth include: a. Authentication controls to identify the person or device attempting access b. Authorization controls to restrict access to authorized users. These controls are implemented with an access control matrix and compatibility tests. c. Training employees in why security measures are important and in safe computing practices d. Physical access controls to protect entry points to the building, to rooms housing computer equipment, to wiring, and to devices such as laptops, cell phones, and PDAs e. Remote access controls, including routers, firewalls, and intrusion prevention systems, to prevent unauthorized access from remote locations C. Preventive controls are imperfect, so organizations implement detective controls to enhance security by monitoring the effectiveness of preventive controls and detecting incidents in which they have been circumvented. Examples of IT detective controls include: 1. Log analysis (i.e., audit log)—The process of examining logs that record who accesses the system and the actions they take. 2. Intrusion detection systems (IDS)—Automate the monitoring of logs of network traffic permitted to pass through the firewall. The most common analysis is to compare the logs to a database containing patterns of known attacks. 3. Managerial reports—Can be created to disclose the organization's performance with respect to the COBIT objectives. Key performance indicators include downtime caused by security incidents, the number of systems with IDS installed, and the time needed to react to security incidents once they are reported. 4. Security testing—Includes vulnerability scans and penetration testing.
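The time-based model described earlier reduces to a single comparison: security is effective only if preventive controls hold out longer than detection plus correction takes (P > D + C). A minimal sketch, with hypothetical times in hours:

```python
def security_is_effective(p, d, c):
    """Time-based model of controls: preventive controls must resist
    an intruder (P) longer than it takes to detect (D) and respond
    to (C) the attack, i.e., P > D + C."""
    return p > d + c

# Hypothetical times, in hours.
print(security_is_effective(p=10, d=3, c=4))  # True: 10 > 3 + 4, effective
print(security_is_effective(p=5, d=3, c=4))   # False: 5 <= 3 + 4, ineffective
```

The second call shows why timely detection matters: with the same preventive strength, slow detection and response alone can make the overall security posture ineffective.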

IT Functions and Controls Related to People

A. Segregation of Functions—The functions in each area must be strictly segregated within the IT department. Without proper segregation of these functions, the effectiveness of additional controls is compromised. B. Applications Development—This department is responsible for creating new end-user computer applications and for maintaining existing applications. 1. Systems analysts—Responsible for analyzing and designing computer systems; systems analysts generally lead a team of programmers who complete the actual coding for the system; they also work with end users to define the problem and identify the appropriate solution. 2. Application programmers—Work under the direction of the systems analyst to write the actual programs that process data and produce reports. 3. New program development, and maintenance of existing programs, is completed in a "test" or "sandbox" environment using copies of live data and existing programs rather than in the "live" system. C. Systems Administration and Programming—This department maintains the computer hardware and computing infrastructure and grants access to system resources. 1. System administrators—The database administrator, network administrator, and web administrators are responsible for management activities associated with the system they control. For example, they grant access to their system resources, usually with usernames and passwords. System administrators, by virtue of the influence they wield, must not be permitted to participate directly in these systems' operations. 2. System programmers—Maintain the various operating systems and related hardware. For example, they are responsible for updating the system for new software releases and installing new hardware. Because their jobs require that they be in direct contact with the production programs and data, it is imperative that they not have access to information about application programs or data files. 3. 
Network managers—Ensure that all applicable devices link to the organization's networks and that the networks operate securely and continuously. 4. Security management—Ensures that all components of the system are secure and protected from all internal and external threats. Responsibilities include security of software and systems and granting appropriate access to systems via user authentication, password setup, and maintenance. 5. Web administrators—Operate and maintain the web servers. (A web server is a software application that uses the hypertext transfer protocol (recognized as http://) to enable the organization's website.) 6. Help desk personnel—Answer help-line calls and emails, resolve user problems, and obtain technical support and vendor support when necessary. D. Computer Operations—This department is responsible for the day-to-day operations of the computer system, including receipt of batch input to the system, conversion of the data to electronic media, scheduling computer activities, running programs, etc. 1. Data control—This position controls the flow of all documents into and out of computer operations; for batch processing, schedules batches through data entry and editing, monitors processing, and ensures that batch totals are reconciled; data control should not access the data, equipment, or programs. This position is called "quality assurance" in some organizations. 2. Data entry clerk (data conversion operator)—For systems still using manual data entry (which is rare), this function keys (enters) handwritten or printed records to convert them into electronic media; the data entry clerk should not be responsible for reconciling batch totals, should not run programs, access system output, or have any involvement in application development and programming. 3. Computer operators—Responsible for operating the computer: loading program and data files, running the programs, and producing output. 
Computer operators should not enter data into the system or reconcile control totals for the data they process. (That job belongs to Data Control.) 4. File librarian—Files and data not online are usually stored in a secure environment called the file library; the file librarian is responsible for maintaining control over the files, checking them in and out only as necessary to support scheduled jobs. The file librarian should not have access to any of the operating equipment or data (unless it has been checked into the library). E. Duties in the three key functions (i.e., applications development; systems administration and programming; and computer operations) should be strictly segregated. (This is a bit like the "cannibals and missionaries" problem from computer science and artificial intelligence.) In particular: 1. Computer operators and data entry personnel—Should never be allowed to act as programmers. 2. Systems programmers—Should never have access to application program documentation. 3. Data administrators—Should never have access to computer operations ("live" data). 4. Application programmers and systems analysts—Should not have access to computer operations ("live" data). 5. Application programmers and systems analysts—Should not control access to data, programs, or computer resources. Personnel Policies and Procedures—The competence, loyalty, and integrity of employees are among an organization's most valuable assets. Appropriate personnel policies are critical in hiring and retaining quality employees. A. Hiring Practices—Applicants should complete detailed employment applications and formal, in-depth employment interviews before hiring. When appropriate, specific education and experience standards should be imposed and verified. All applicants should undergo thorough background checks and verification of academic degrees, work experience, and professional certifications, as well as searches for criminal records. B. 
Performance Evaluation—Employees should be evaluated regularly. The evaluation process should provide clear feedback on the employee's overall performance as well as specific strengths and weaknesses. To the extent that there are weaknesses, it is important to provide guidance on how performance can be improved. C. Employee Handbook—The employee handbook, available to all employees, should state policies related to security and controls, unacceptable conduct, organizational rules and ethics, vacations, overtime, outside employment, emergency procedures, and disciplinary actions for misconduct. D. Competence—COSO requires that "Management ... [should] specify the competence levels for particular jobs and [should] translate those levels into requisite knowledge and skills. These actions help ensure that competent, but not over-qualified, employees serve in appropriate roles with appropriate responsibilities." E. Firing (Termination)—Clearly, procedures should guide employee departures, regardless of whether the departure is voluntary or involuntary; it is especially important to be careful and thorough when dealing with involuntary terminations of IT personnel who have access to sensitive or proprietary data. In involuntary terminations, the employee's username and keycard should be disabled before notifying the employee of the termination to prevent any attempt to destroy company property. Although this sounds heartless, after notification of an involuntary termination, the terminated employee should be accompanied at all times until escorted out of the building. F. Other Considerations—Recruiting and retaining highly qualified employees is an important determinant of organizational success. Ensuring that an organization has training and development plans, including training in security and controls, is essential both to employee retention and to creating a system of internal control.
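The incompatible-duty rules listed earlier (e.g., computer operators should never act as programmers) lend themselves to an automated compliance check against role assignments. A minimal sketch; the role names, the rule list, and the staff data are invented and greatly simplified:

```python
# Pairs of duties that must not be combined in one person
# (simplified from the segregation rules discussed above).
INCOMPATIBLE = {
    frozenset({"computer_operator", "programmer"}),
    frozenset({"systems_programmer", "application_documentation"}),
    frozenset({"application_programmer", "computer_operations"}),
}

def violations(assignments):
    """Return employees whose assigned roles include an incompatible pair."""
    flagged = []
    for employee, roles in assignments.items():
        for pair in INCOMPATIBLE:
            if pair <= set(roles):  # both duties held by one person
                flagged.append(employee)
                break
    return flagged

staff = {
    "alice": ["computer_operator", "programmer"],  # incompatible pair
    "bob": ["application_programmer"],             # acceptable
}
print(violations(staff))  # ['alice']
```

Real identity-management systems run this kind of check continuously; the point of the sketch is that segregation rules are mechanical enough to enforce in software, not only in policy manuals.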

Software, Data Warehouses, and Decision Support

Software A. Software—Instructions (i.e., programs) for hardware. B. Computer Software—Divided into three categories: systems software, programming languages, and application software. C. Systems Software 1. The programs that run the computer and support system management operations. Several of the most frequently encountered types of systems software are: a. The operating system—The interface between the user and the computer hardware. b. Database management systems (DBMS)—Discussed in the "Databases and Data Structures" lesson. c. Data communications software—Discussed in the lesson on "Computer Networks and Data Communication." d. Utility programs—These programs perform maintenance tasks (e.g., defragmenting a hard drive, finding and recognizing devices attached to a system). D. Programming Languages 1. All software is created using programming languages. They consist of sets of instructions and a syntax that determines how the instructions can be put together. Types and examples of programming languages include: a. High-level, general-purpose languages, such as the C programming language b. Object-oriented languages, such as C++, which are used to design modular, reusable software c. Languages with integrated development environments, such as Java, which provide templates that can be used to automatically generate code d. Hypertext Markup Language (HTML), a tagging language, discussed in the lesson on "The Internet—Structure and Protocols" e. Scripting languages, such as PERL or Python, which are used to add functionality to web pages f. Fourth-generation programming languages, which are often used in database systems. An example command might be, "FIND ALL RECORDS WHERE ZIPCODE = 40703." g. Low-level languages (e.g., assembler or machine) have code that is similar to a computer's instruction set architecture. E. Application Software 1. The diverse group of end-user programs that accomplish specific user objectives. 
Can be general purpose (word processors, spreadsheets, databases) or custom-developed for a specific application (e.g., a marketing information system for a clothing designer). May be purchased off the shelf or developed internally. F. Software Licensing—Organizations should have policies and procedures to ensure that they comply with software copyright law. Failure to do so risks lawsuits and the organization's good name. II. Data Warehouse—A repository (i.e., database) for structured, filtered data. Data warehouses and data lakes are also briefly discussed in the "Data Governance and Data Management" lesson. A. Characteristics: a. Often an archive of an organization's operational transactions (sales, purchases, production, payroll, etc.) b. Often includes external data that relates to the organization's transactions, such as economic indicators, stock prices, exchange rates, market share, political issues, and weather conditions. B. Data mart—A subset of the data warehouse containing data preconfigured to meet the needs of specific departments (e.g., marketing or product logistics). a. Companies often support multiple data marts within their organization. C. Data lake—An unfiltered pool of big data. Data in a data lake is often "dirty data" (i.e., it contains errors and omissions). It requires cleaning (i.e., an extract, transform, load (ETL) process) to be useful for business decisions. D. Terms associated with data warehouses: a. Metadata—Data about data. b. Drill down—The ability to move from summary information to more granular information (e.g., viewing an accounts receivable customer balance and drilling down to the invoices and payments that resulted in that balance). c. Slicing and dicing (or "filtering")—The ability to view a single data item in multiple dimensions; for example, sales might be viewed by product, by region, over time, or by company. d. 
Data mining—The process of performing statistical analysis and automatically searching large volumes of data to identify patterns and relationships among the data elements. The ability to recognize patterns in data (using heuristics, algorithms, artificial intelligence, and machine learning) is essential to data mining. III. Decision Support Systems (DSSs)—DSSs provide information to support mid- and upper-level managers in managing non-routine problems and in long-range planning. A. DSSs frequently include external data and summarized information from an organization's transaction processing system and include significant analytical and statistical capabilities. B. Types of DSSs 1. Data-driven DSSs—Process large amounts of data to find relationships and patterns. 2. Model-driven DSSs—Feed data into a previously constructed model to predict outcomes. For example, if a company increases its product price, by how much will demand for the product decrease? 3. Executive support systems (ESSs) or strategic support systems (SSSs)—A subset of DSSs especially designed for forecasting and making long-range, strategic decisions; thus, these systems place a greater emphasis on external data. 4. Group support systems (GSSs)—Facilitate collaboration among groups of users to generate analyses, recommendations, or solutions. These systems may include functions such as calendars, meeting scheduling, and document sharing.
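Slicing and dicing—viewing the same data item along different dimensions—can be sketched in a few lines of Python. The sales records below are invented; real warehouses do this with multidimensional (OLAP cube) queries rather than scripts:

```python
from collections import defaultdict

# Invented sales records carrying several dimensions.
sales = [
    {"product": "widget", "region": "East", "year": 2023, "amount": 100},
    {"product": "widget", "region": "West", "year": 2023, "amount": 150},
    {"product": "gadget", "region": "East", "year": 2024, "amount": 200},
]

def slice_by(records, dimension):
    """Total the same measure (sales amount) along one chosen dimension."""
    totals = defaultdict(int)
    for r in records:
        totals[r[dimension]] += r["amount"]
    return dict(totals)

print(slice_by(sales, "region"))   # {'East': 300, 'West': 150}
print(slice_by(sales, "product"))  # {'widget': 250, 'gadget': 200}
```

The same underlying records answer both questions; only the dimension chosen for the "slice" changes, which is exactly the capability the definition above describes.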

Program Library, Documentation, and Record Management

Source code programs are normally maintained under secure storage in the source program library (SPL), which is controlled by a file librarian. The library, or an archive of the library, should be off-site and built to withstand fires, floods, and other natural disasters. It must also include the same logical and physical controls as are built into the organization's other data processing and storage sites. Purpose of Documentation A. Documentation of the Accounting System Is Required 1. By law—for example, by the Foreign Corrupt Practices Act and SOX 2. To build and evaluate complex systems 3. For training 4. For creating sustainable/survivable systems 5. For auditing (internal and external) 6. For process (re)engineering Levels of Documentation—Four levels of documentation should be maintained; documentation at each level generally includes flowcharts and narrative descriptions. A. Systems Documentation—Provides an overview of the program and data files, processing logic, and interactions among programs and systems; often includes narrative descriptions, flowcharts, and data flow diagrams; used primarily by systems developers; can be useful to auditors. B. Program Documentation—A detailed analysis of the input data, the program logic, and the data output; consists of program flowcharts, source code listings, record layouts, etc.; used primarily by programmers; program documentation is an important resource if the original programmer is unavailable. C. Operator Documentation (Also Called the "Run Manual")—In large computer systems, operator documentation provides information necessary to execute the program, such as the required equipment, data files and computer supplies, execution commands, error messages, verification procedures, and expected output; used exclusively by the computer operators. D. 
User Documentation—Describes the system from the point of view of the end user; provides instructions on how and when to submit data and request reports, procedures for verifying the accuracy of the data and correcting errors. Note All of the preceding controls are general and preventive. Forms of Documentation—Multiple forms of documentation facilitate the process of creating, documenting, evaluating, and auditing accounting systems. Important forms of documentation include the following: A. Questionnaires—Ask about use of specific procedures. B. Narratives—Text descriptions of processes. C. Data Flow Diagrams (DFDs) 1. Portray business processes, stores of data, and flows of data among those elements 2. Often used in developing new systems 3. Use simple, user-friendly symbols (unlike flowcharts) 4. For example, a DFD for the delivery of goods to a customer would include a symbol for the warehouse from which the goods are shipped and a symbol representing the customer. It would not show details, such as computer processing and paper output. D. Flowcharts 1. For example, system flowcharts, present a comprehensive picture of the management, operations, information systems, and process controls embodied in business processes. 2. Often used to evaluate controls in a system 3. Too complicated and technical for some users. DFDs are easier to understand. E. Entity-Relationship (E-R) Diagrams—Model relationships between entities and data in accounting systems. F. Decision Tables—Depict logical relationships in a processing system by identifying the decision points and processing alternatives that derive from those decision points. Record Retention and Destruction A. Source documents, preferably in electronic form, must be retained as required by organizational policy, business needs, external or internal audit requirements, or applicable law (e.g., the IRS or HIPAA) or regulation. B. 
Organizations must have a plan to ensure that retained records are kept confidential and secure often under the logical control of the originating departments. C. In some cases, laws, regulations, or good business practices require the regular, systematic destruction of old records (e.g., medical or credit histories, juvenile or criminal records). Record destruction must follow a systematic, controlled process and must not be haphazard (e.g., thrown into a dumpster). After changes and verification to those changes, source programs move into production. The management of changes to applications is part of the Source Program Library Management System (SPLMS). The practice of authorizing changes, approving tests results, and copying developmental programs to a production library is program change control.
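The decision-table idea in item F can be sketched in code: each rule pairs a combination of condition values with a processing alternative. The credit-approval conditions below are hypothetical, chosen only to show the structure.

```python
# Hypothetical decision table: each rule maps a combination of
# conditions (good credit?, order under limit?) to a processing action.
DECISION_TABLE = [
    ((True,  True),  "approve"),
    ((True,  False), "manager review"),
    ((False, True),  "require deposit"),
    ((False, False), "reject"),
]

def decide(good_credit, order_under_limit):
    """Look up the action for a given combination of decision points."""
    for conditions, action in DECISION_TABLE:
        if conditions == (good_credit, order_under_limit):
            return action
    raise ValueError("no rule matches")

print(decide(True, False))  # manager review
```

Because every combination of conditions appears exactly once, the table makes it easy to verify that no processing alternative has been overlooked.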

System Development and Implementation

Sourcing Decisions—In some cases, an organization's IT strategy will indicate that it intends to build (i.e., develop) or buy software within certain domains or units. If such a strategy exists, it will indicate a preference for whether business applications development is insourced (i.e., an internal development process) or outsourced (i.e., purchased from someone else). The systems development life cycle (SDLC) framework (though not each step) applies to either insourced or outsourced development processes. Developing Business Applications—The importance, and potential negative consequences, of systems development are evident in the many large-scale systems failures that have cost organizations millions of dollars (e.g., the Denver airport baggage system, ERP at Hershey's, the Bank of America Trust Department). Developing a functioning computer system, on time, and on budget requires communication and coordination among multiple groups of people with very different points of view and priorities. Without a clear plan for defining, developing, testing, and implementing the system, it is perilously easy to end up with a system that fails to meet its objectives and must be scrapped. The systems development life cycle is designed to provide this plan. Purpose of the Systems Development Life Cycle (SDLC) Method—The systems development life cycle provides a structured approach to the systems development process by: A. Identifying the key roles in the development process and defining their responsibilities B. Establishing a set of critical activities to measure progress toward the desired result C. Requiring project review and approval at critical points throughout the development process. Before moving forward to each stage of the SDLC, formal approval for the previous stage should occur and be documented. Roles in the SDLC Method—Each party to the development process must review the system and sign off, as appropriate, at stages of development. 
This helps to ensure that the system will perform as intended and be accepted by the end users. Principal roles in the SDLC include the following: A. IT Steering Committee—Members of the committee are selected from functional areas across the organization, including the IT department; the committee's principal duty is to approve and prioritize systems proposals for development. B. Lead Systems Analyst—The manager of the programming team: 1. Usually responsible for all direct contact with the end user 2. Often responsible for developing the overall programming logic and functionality C. Application Programmers—The team of programmers who, under direction of the lead analyst, are responsible for writing and testing the program. D. End Users—The employees who will use the developed system to accomplish their work tasks: 1. Responsible for identifying the problem to be addressed and approving the proposed solution; end users often also work closely with programmers during the development process Stages in, and Risks to, the SDLC Method—Riskier systems development projects use newer technologies or have a poorly defined (i.e., sketchy) design structure. In the SDLC method, program development proceeds through an orderly series of steps. At the end of each step, all of the involved parties (typically the lead systems analyst, the end user, and a representative from the IT administration or the IT steering committee) sign a report of activities completed in that step to indicate their review and approval. The seven steps in the SDLC method are: Stage 1—Planning and Feasibility Study—When an application proposal is submitted for consideration, the proposal is evaluated in terms of three aspects: 1. Technical feasibility—Is it possible to implement a successful solution given the limits currently faced by the IT department? Alternatively, can we hire someone, given our budget, to build the system? 2. 
Economic feasibility—Even if the application can be developed, should it be developed? Are the potential benefits greater than the anticipated cost? 3. Operational feasibility—Given the status of other systems and people within the organization, how well will the proposed system work? After establishing feasibility, a project plan is developed; the project plan establishes: a. Critical success factors—The things that the project must complete in order to succeed. b. Project scope—A high-level view of what the project will accomplish. c. Project milestones and responsibilities—The major steps in the process, the timing of those steps, and identification of the individuals responsible for each step. Stage 2—Analysis—During this phase, the systems analysts work with end users to understand the business process and document the requirements of the system; the collaboration of IT personnel and end users to define the system is known as joint application development (JAD). 1. Requirements definition—The requirements definition formally identifies the tasks and performance goals that the system must accomplish; this definition serves as the framework for system design and development. a. All parties sign off on the requirements definition to signify their agreement with the project's goals and processes. Stage 3—Design—During the design phase, the technical specifications of the system are established; the design specification has three primary components: 1. Conceptual design—The first step of the design process is to summarize the goal, structure, data flows, resource requirements, systems documentation, and preliminary design of the system. 2. Technical architecture specification—Identifies the hardware, systems software, and networking technology on which the system will run. 3. Systems model—Uses graphical models (flowcharts, etc.) 
to describe the interaction of systems processes and components; defines the interface between the user and the system by creating menu and screen formats for the entire system. Stage 4—Development—During this phase, programmers use the systems design specifications to develop the program and data files: 1. The hardware and IT infrastructure identified during the design phase are purchased during the development phase. 2. The development process must be carefully monitored to ensure compatibility among all systems components, as correcting errors becomes much more costly after this phase. Stage 5—Testing—The system is evaluated to determine whether it meets the specifications identified in the requirements definition. 1. Testing procedures must project expected results and compare actual results with expectations: a. Test items should confirm correct handling of both valid data and data that includes errors. 2. Testing must be performed at multiple levels to ensure correct intra- and inter-system operation: a. Individual processing unit—Provides assurance that each piece of the system works properly. b. System testing—Provides assurance that all of the system modules work together. c. Inter-system testing—Provides assurance that the system interfaces correctly with related systems. d. User acceptance testing—Provides assurance that the system can accomplish its stated objectives within the business environment and that users will use the delivered system. Stage 6—Implementation—Before the new system is moved into production, existing data must be converted to the new system format, and users must be trained on the new system; implementation of the new system may occur in one of four ways: 1. Parallel implementation—The new system and the old system are run concurrently until it is clear that the new system is working properly. 2. Direct cutover, "cold turkey," "plunge," or "big bang" implementation—The old system is dropped and the new system put in place all at once. 
This is risky but fast (except when it fails, in which case it is slower). 3. Phased implementation—Instead of implementing the complete system across the entire organization, the system is divided into modules that are brought on line one or two at a time. 4. Pilot implementation—Similar to phased implementation except, rather than dividing the system into modules, the users are divided into smaller groups and are trained on the new system one group at a time. Stage 7—Maintenance—Monitoring the system to ensure that it is working properly and updating the programs and/or procedures to reflect changing needs: 1. User support groups and help desks—Provide forums for maintaining the system at high performance levels and for identifying problems and the need for changes. 2. All updates and additions to the system should be subject to the same structured development process as the original program. Systems Development Failures—A recent survey indicates that companies complete about 37% of large IT projects on time and only 42% on budget. Why do systems projects so often fail? Common reasons include: 1. Lack of senior management knowledge of, and support and involvement in, major IT projects 2. Difficulty in specifying the requirements 3. Emerging technologies (hardware and software) that may not work as the vendor claims 4. Lack of standardized project management and standardized methodologies 5. Resistance to change; lack of proper "change management." Change management is integral to training and user acceptance. 6. Scope and project creep. The size of the project is underestimated and grows as users ask "Can it do this?" 7. Lack of user participation and support 8. Inadequate testing and training. Training should be just-in-time (prior to use) and be at full-load service levels. 9. Poor project management—underestimation of time, resources, and scope Accountant's Involvement in IS Development A. 
Accounting and auditing skills are useful in cost/benefit and life cycle cost analyses of IT projects. B. Accountants possess combined knowledge of IT, general business, accounting, and internal control, along with communication skills, to help ensure that new systems meet the needs of users. C. Types of accountants who may participate in system development: system specialist, consultant, staff accountant, internal or independent auditor Alternative System Development Processes A. Smaller, more innovative projects may use more rapid iteration processes, such as: 1. Prototyping—An iterative development process focusing on user requirements and implementing portions of the proposed system. Prototypes are nothing more than "screenshots" that evolve, through iterations, into a functioning system. 2. Rapid application development (RAD)—An iterative development process using prototypes and automated systems development tools (e.g., PowerBuilder and Visual Basic) to speed and structure the development process. B. Modular development is an alternative model for project organization. In modular development, the system proceeds by developing and installing one subsystem (of an entire company system) at a time. Examples of modules might include order entry, sales, and cash receipts.
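Stage 6's parallel implementation amounts to running the old and new systems on identical inputs and investigating any discrepancies before cutting over. A minimal sketch, assuming a simple payroll calculation as the system under conversion (the functions and figures are hypothetical):

```python
def old_payroll(hours, rate):
    """Legacy system's calculation (stand-in for the real old system)."""
    return round(hours * rate, 2)

def new_payroll(hours, rate):
    """Replacement system's calculation, run in parallel with the old one."""
    return round(hours * rate, 2)

def parallel_run(inputs):
    """Run both systems on identical inputs; report any mismatches
    so they can be investigated before the old system is retired."""
    mismatches = []
    for hours, rate in inputs:
        old, new = old_payroll(hours, rate), new_payroll(hours, rate)
        if old != new:
            mismatches.append((hours, rate, old, new))
    return mismatches

print(parallel_run([(40, 25.0), (37.5, 31.2)]))  # [] means the systems agree
```

An empty mismatch list over a full processing cycle is the kind of evidence that justifies retiring the old system.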

IT and Business Strategy

Strategy: The sequence of interrelated procedures for determining an entity's long-term goals and identifying the best approaches for achieving those goals Oversight: The process of managing and monitoring an organization's operations to achieve internal control and effectively manage risks The Problem of IT and Business Strategy A. Integrating IT investments into an organization's overall business strategy is an ongoing problem for several reasons, including: 1. Lack of strategic focus—Many IT investments are bottom-up projects. That is, they originate in business unit needs and may be undertaken without a sense of the organization's overall strategy. 2. Lack of strategic investment—Because of the bottom-up nature of many IT investments, there is often an overinvestment in existing businesses and inadequate attention to transformative technologies that may be the future of the business. 3. Inadequate scope and agility—Because many IT investments are made in business units, and not at a corporate level, the projects may be too small in scope and inadequately scaled to meet changing business needs. The Changing Role of IT in Business Strategy A. Digitization (i.e., the movement of data to electronic form) and globalization (i.e., the integration of cultures and economies due to digitization) have made IT investments central to many businesses' strategy. However, not all of these investments have been effective. B. Reminder—The lessons on business strategy (see the "Generic Strategies" lesson) argue that there are two basic strategies: product differentiation and cost leadership. IT may influence these strategies in the following ways: C. Product Differentiation—This strategy involves setting your product apart from your competitors by offering one that's faster, has enhanced features, and so on. How can IT influence product differentiation? 1. The Internet, as a distribution channel, can create product differentiation. 
For example, eBay's success was partially built on its ability to create a new market of online products, many of which had been previously available only in specialized markets (e.g., antique Russian watches; all types of Beanie Babies). 2. Many advanced technologies (e.g., lasers, 3-D printers) improve product quality and create differentiation. 3. Many products are increasingly digitized (e.g., books, music), which can both increase quality and reduce costs. 4. Information on the Internet can be changed quickly (versus, e.g., sales obtained by mailed catalogs). Because of this, product life cycles are shorter and product evolution is faster. These changes can be used to create differentiation. D. Cost Leadership—A low-cost strategy involves offering a cheaper product than your competitors. The low cost is made possible by operating more efficiently. How can IT influence cost leadership? 1. Many companies now use advanced technologies to reduce their costs and improve the efficiency of production and delivery systems. 2. Because the Internet is available to almost everyone, intense price competition can result. The outcome may be that many companies shift away from a low-cost strategy toward a product-differentiation strategy.

ERM for Cloud Computing

1. A risk assessment and analysis must be done before contracting for cloud computing. a. Most organizations will include senior management and the IT steering committee in this analysis. If the risk is substantial, cloud computing should be a topic for a board of directors discussion. b. The risk analysis should consider the legal, regulatory, and operational risks of cloud computing. c. With the exception of an internal, private cloud, cloud computing is a type of IT outsourcing. The risk analysis must consider the increased inherent risk of outsourcing control over some portion of the organization's IT system. i. Public clouds contain higher inherent risk than do private clouds. B. ERM for cloud computing begins with clear objectives and a well-structured plan. 1. The cloud computing plan should include a strong cloud governance structure and reporting model, an assessment of internal IT skills, and a well-defined entity risk appetite. C. Effective cloud solutions require considering and integrating: 1. The relevant business processes—For example, sales, product development, manufacturing, distribution, procurement, payroll, financing 2. The deployment model—For example, public, hybrid, private 3. The service delivery model—SAAS, PAAS, IAAS (see the introductory lesson on cloud computing for term definitions) II. Cloud Computing Risks and Responses—This section gives examples of important cloud risks and related responses and controls. A. Risk—Unauthorized Cloud Activity 1. Response—Preventive and detective controls related to unauthorized procurement and use of cloud services a. Examples i. A cloud use policy that articulates how, when, and for what uses cloud computing is allowed ii. A list of approved cloud vendors iii. A policy that identifies who is responsible for authorizing and contracting for cloud services B. Risk—Lack of CSP Transparency 1. Response—Assessment of the CSP system of internal control a. 
For example, an approved list of cloud vendors that includes only vendors who provide sufficient information to enable informed risk assessments of the integrity of CSP operations b. For example, a list of required information from the CSP related to the type of service provided (i.e., IAAS, SAAS, PAAS) i. References for the vendor, information about appropriate usage, performance data, network infrastructure, data center, security, data segregation, and compliance policies ii. Vendor's suppliers and other "tenants" (shared users) of the cloud C. Risk—CSP Reliability and Performance 1. Response—Effective incident management plan and procedure a. For example, contract with backup CSPs in the event of a system failure with a primary CSP b. For example, implement CSP availability monitoring D. Risk—Cyber-Attack 1. Response—Incident management plan that considers increased likelihood of attack on CSP a. Store only nonessential, nonsensitive data on the CSP solution. b. Deploy encryption on all cloud-hosted data. c. Contract with backup CSPs in anticipation of a hack on the primary CSP.

Multi-Location System Structure

I. Centralized, Decentralized, and Distributed Systems—Organizations with multiple locations must address the problem of consolidating and coordinating data from the individual locations. The three systems approaches to this issue are as follows: A. Centralized systems—Maintain all data and perform all data processing at a central location; remote users may access the centralized data files via a telecommunications channel, but all of the processing is still performed at the central location. 1. Advantages a. Better data security once the data is received at the central location b. Consistency in processing 2. Disadvantages a. High cost of transmitting large numbers of detailed transactions b. Input/output bottlenecks at high-traffic times (e.g., end of period) c. Slower responses to information requests from remote locations B. Decentralized systems—Allow each location to maintain its own processing system and data files. In decentralized systems, most of the transaction processing is accomplished at the regional office, and summarized data is sent to the central office. For example, in payroll processing, the regional offices calculate time worked, gross pay, deductions, and net pay for each employee and transmit totals for salary expense, deductions payable, and cash paid to the central database. 1. Advantages a. Cost savings by reducing the volume of data that must be transmitted to the central location b. Reduction of processing power and data storage needs at the central site c. Reduced input/output bottlenecks d. Better responsiveness to local information needs 2. Disadvantages a. Greater potential for security violations because there are more sites to control b. Cost of installing and maintaining equipment in multiple locations C. Distributed database systems—So named because, rather than maintaining a centralized or master database at a central location, the database is distributed across the locations according to organizational and user needs. 
1. Advantages a. Better communications among the remote locations because they must all be connected to each other in order to distribute the database b. More current and complete information c. Reduction or elimination of the need to maintain a large, expensive central processing center 2. Disadvantages a. Cost of establishing communications among remote locations b. Conflicts among the locations when accessing and updating shared data
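The decentralized payroll example above, where regional offices process transaction detail and transmit only summary totals to the central office, can be sketched as follows; the regions, figures, and field names are hypothetical.

```python
# Hypothetical decentralized payroll: each region keeps employee-level
# detail locally and transmits only summary totals to the central office.
regional_detail = {
    "east": [{"gross": 5000.0, "deductions": 1200.0},
             {"gross": 4200.0, "deductions": 900.0}],
    "west": [{"gross": 6100.0, "deductions": 1500.0}],
}

def regional_summary(employees):
    """Summarize one region's detail into the totals sent to headquarters."""
    gross = sum(e["gross"] for e in employees)
    deductions = sum(e["deductions"] for e in employees)
    return {"salary_expense": gross,
            "deductions_payable": deductions,
            "cash_paid": gross - deductions}

# The central database receives only these small summaries, not the detail.
central = {region: regional_summary(emps)
           for region, emps in regional_detail.items()}
print(central["east"]["cash_paid"])  # 7100.0
```

Transmitting three totals per region instead of every employee record is exactly the transmission-cost advantage the text describes.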

Bitcoin and Blockchain

Bitcoin Introduction A. What Is Bitcoin? 1. Bitcoin is an intangible asset—An intangible asset is an object that has value but not a physical form. Intangible assets include patents, trademarks, copyrights, goodwill (in some cases), and bitcoins. The intangible asset designation is the view of the IRS (i.e., that bitcoins are taxed as "property," not as currency). a. One difference between bitcoins and some other intangible assets is that bitcoins can be bought, sold, and traded (in contrast, e.g., to goodwill). b. Because bitcoin is taxed as property, gains or losses are capital gains or losses. 2. Bitcoin is also "electronic cash" (but remember, bitcoin is taxed by the IRS as property, not as a currency). However, unlike most cash, no central government or authority manages bitcoin. It is a peer-to-peer (i.e., decentralized) currency that relies on a database system (called blockchain) to authenticate and validate the audit trail and existence of bitcoins. 3. Bitcoin is a decentralized currency that is not under the control of a government, centralized authority, or financial institution. Bitcoin is the first and most popular "crypto-currency" (i.e., a currency that relies on encryption technology for validation and control). (But remember, bitcoin is taxed as property, not as currency.) 4. Bitcoin is also a network, payment, and accounting system. To buy or sell bitcoins, to use them as payments, or to receive them as income, one must have a wallet (much like a bank account) and a connection to a bitcoin exchange. Blockchain Introduction A. What Is Blockchain? 1. Blockchain is a decentralized, distributed ledger. "Decentralized" and "distributed" mean that anyone in the peer-to-peer "network" (i.e., the people and machines that are allowed access to the ledger) can always log, view, and confirm its validity and accuracy. Simply stated, blockchain is an independent, secure, non-modifiable audit trail of transactions, collected into an open ledger database. 
It is also an encryption-secured, distributed database of transactions. 2. Blockchain was created as part of the invention of bitcoins to provide a secure, decentralized cryptocurrency tracking system. 3. A blockchain record is an electronic file that consists of "blocks," which document transactions. Each block includes a time and a link to a previous block using a unique code. 4. Because blockchain relies on decentralized users confirming one another's ledgers, it requires adoption by many users to be useful. Hence, blockchain is unlikely to transform business in the short term. 5. The security of blockchain depends on three factors: independent confirmation; asymmetric encryption; and cheap, fast computing capacity. Committing a fraud in the blockchain would require altering a block (i.e., a transaction record). Difficulties in committing this fraud include: a. This fraud would require changing all copies of the record across the entire distributed database. Anyone monitoring the fraudster's blockchain could identify the alteration and the fraud. Game over! b. Altering other copies (i.e., those other than the fraudster's) of the blockchain would require acquiring the private keys for all other copies of the blockchain. In a large distributed network, this would be impossible. c. Cheap, fast computing power makes it easy for networked computers to monitor for attempts to change the blockchain. Blockchain Applications, Risks, and Limitations A. Blockchain Applications (examples are from Blockchain Geeks, undated) 1. Smart contracts—Blockchain enables the enforcement of contracts through mutual block monitoring (e.g., Ethereum). Imagine that a bank agrees to loan money to a company where the interest rate is tied to the company's financial ratios. Blockchain could make the company's financial data transparent to the bank by embedding the financial data in the blockchain. 2. 
Internet of Things (IoT)—One likely application of blockchain's "smart contracts" is the Internet of Things. Imagine a restaurant supplier who wants to monitor and purchase only chickens that are treated humanely by a chicken farm. With IoT sensors installed on each chicken (monitoring its physiology) and blockchain, the supplier could accept or reject individual (!) chickens based on their treatment. 3. Open source payment—Why use financial institutions, and their fees, for payments when a verifiable, open-ledger network is available for these payments? Why not make payments directly between parties? Coming to an economy near you, and soon, are payment systems that rely on blockchain and its peer-to-peer system (e.g., OpenBazaar). 4. Financing and crowdfunding—The previous example also illustrates the use of blockchain for financing. Crowdfunding (e.g., Kickstarter and Gofundme) offers another opportunity for blockchain, including the possibility of crowdfunded start-up operations that are financed by investors investing through blockchain technology. 5. Corporate governance and financial reporting—Imagine a company whose financial records are continuously available to everyone, everywhere, through a blockchain (e.g., in the app Boardroom). Any recorded issue (is the company making illegal payments to vendors?) could be investigated by anyone with access to the blockchain. If all financial records are available always and everywhere, would we still need external auditors as monitors? 6. Supply chain auditing—Want to monitor whether the T-shirt you bought was made by child labor in Pakistan? Is the diamond your fiancé gave you a blood diamond? Go to the company's blockchain and trace its supply chain to its source. The company Provenance offers this service. 7. Predictive analytics—Blockchain can be used to aggregate millions of users' expectations about an event. These predictions can improve forecasting of weather, business outcomes, sporting events, or elections. 
8. Identity and access management—Blockchain ledgers can document characteristics of users that enable multifactor identification. With blockchain, users might need only a single point of identification (e.g., a fingerprint) to link to their permanent record, which could be confirmed by any transaction identifier in the record. Netki is a start-up company that is working to implement such a system. Banks and law enforcement are using similar technologies as a part of anti-money laundering initiatives and "know your customer (KYC)" practices. 9. Auditing and monitoring—The importance of an unassailable transaction record to auditors is clear. If auditors can begin with a transaction record that does not need to be audited, considerable time (and client money) will be saved in internal and external auditing.
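The hash-linking described in the blockchain introduction, where each block stores a link to the previous block, can be sketched as a toy example. This illustrates only the linking idea, not a real cryptocurrency implementation (a full system would also recompute each block's hash from its contents and distribute copies across the network).

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """A block records transactions, a timestamp, and a link (hash) to the
    previous block; altering an earlier block breaks every later link."""
    block = {"transactions": transactions,
             "timestamp": time.time(),
             "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain):
    """Check that each block's stored link matches the previous block's hash."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != prev["hash"]:
            return False
    return True

genesis = make_block(["genesis"], prev_hash="0" * 64)
b1 = make_block([{"from": "A", "to": "B", "amount": 5}], genesis["hash"])
chain = [genesis, b1]
assert chain_is_valid(chain)

# Tampering with an earlier block is detectable, because the next
# block's stored link no longer matches.
genesis["hash"] = "tampered"
assert not chain_is_valid(chain)
```

The "Game over!" point in item 5.a is visible here: any observer who re-checks the links immediately sees the alteration.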

HR and Payroll Cycle

Core Activities A. Recruiting and Hiring Employees B. Training and Developing Employees' Skills C. Supervision and Job Assignments 1. E.g., timekeeping D. Salaries and Benefits 1. Compensation (payroll) 2. Employee benefits (e.g., medical, health, and life insurance, parking, fitness centers) 3. Taxes on wages and benefits (leading to withholdings and lots of accounting complexities) 4. Vacations and time off E. Monitoring and Evaluation 1. Performance evaluation F. Transitions 1. Discharging employees, due to voluntary or involuntary termination 2. Facilitating employee retirement Inherent Risks A. Fictitious or "ghost" employees don't just occur in ghost stories. Hiring people who don't exist can be a fraud perpetrated by another employee who most decidedly does exist and who is getting undeserved resources (money or assets). B. Many famous frauds have occurred when employees are terminated but remain on the payroll and their checks (or EFTs) are endorsed or received by mountebanks (i.e., people who trick others out of their money). C. There are, usually, very good controls over underpaying people—the people themselves! Very often, however, if we overpay people, they won't say anything, which is why controls relating to overpaying employees are much more important than controls related to underpaying people. Relevant Stakeholders A. Employees (obviously) 1. Establishing that employees exist and that their paid salaries and benefits are legitimate and accurate are important concerns. B. HR service providers, including health maintenance organizations, financial services companies, retirement companies, banks, credit unions, and life insurance companies C. Federal and state governments, to which wage-related taxes usually are owed. Failure to pay, and the theft of, payroll taxes is a common area of fraud in small organizations. Important Forms (in Electronic Systems), Documents (in Paper Systems), and Files A. See table later in the lesson. 
Accounting and Business Risks and Controls A. Recruiting and Hiring Employees 1. Hiring unqualified or illegal employees—Follow law, regulation, and organizational procedures for hiring. 2. Distinguish (according to law) employees from independent contractors. B. Training and Developing Employees' Skills 1. Retaining high-performing employees is critical to the success of most organizations (e.g., public accounting). Organizations need plans and policies to enable this. C. Supervision and Job Assignments 1. Timekeeping—Increasingly automated (e.g., scanned badge readers). Automating this function generally improves controls by reducing segregation of duties concerns. D. Salaries and Benefits 1. Compensation a. Payroll activities require: 1. Maintaining and updating the master payroll file 2. Updating tax rates and deductions and (often) computing time and attendance 3. Preparing and distributing payroll 4. Calculating employer-paid benefits and taxes 5. Disbursing payroll taxes, deductions, and benefit costs b. Compensation may be based on hours, self-reports, commissions, fixed salaries, or bonuses. 2. Payroll accounting a. Often outsourced, which can be an effective segregation of duties in a critical area of control b. Employees should be paid by direct deposit as an accounting control and a convenience to employees (no checks, if possible). Issuing checks generally reduces accounting controls compared to direct deposits. c. The payroll department should validate and periodically monitor employee records. d. Control goals—All payroll transactions must have proper authorization, be valid, be recorded, and be accurate. 3. Segregation of duties a. If there is a low level of automation, try to segregate the following functions: timekeeping, payroll preparation, personnel management, and paycheck distribution. b. Generally, the more automation, the less the need for segregation of duties. c. 
Outsourcing payroll functions, in a smaller organization, can improve controls by increasing segregation of duties. E. Monitoring and Evaluation 1. Performance evaluation—Should be regular, systematic, unbiased, and follow organization's policies F. Transitions 1. Discharging employees—Due to voluntary or involuntary termination a. In some cases, must be "pack up your things and be gone in an hour" 2. Facilitating employee retirement a. Follow organization's policies and procedures, which, of course, indicates that the organization needs such policies.
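The compensation steps above (computing gross pay from hours worked, then subtracting withholdings and benefit deductions to arrive at net pay) can be sketched as follows. This is a minimal illustration only; the rates, overtime rules, and deduction amounts are hypothetical assumptions, not actual tax or benefit figures.

```python
# Illustrative payroll sketch: gross pay, withholdings, net pay.
# All rates, thresholds, and deductions are hypothetical.

def gross_pay(hours, rate, overtime_threshold=40, overtime_multiplier=1.5):
    """Compute gross pay with time-and-a-half overtime."""
    regular = min(hours, overtime_threshold)
    overtime = max(hours - overtime_threshold, 0)
    return regular * rate + overtime * rate * overtime_multiplier

def net_pay(gross, tax_rate=0.20, benefit_deduction=50.0):
    """Subtract a flat-rate withholding and a benefit deduction."""
    withholding = gross * tax_rate
    return gross - withholding - benefit_deduction

g = gross_pay(45, 20.0)  # 40 * 20 + 5 * 20 * 1.5 = 950.0
n = net_pay(g)           # 950 - 190 - 50 = 710.0
```

A real payroll system would, of course, layer on the master-file maintenance, tax-table updates, and control totals described above; the point here is only the basic computation sequence.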

Backup and Restoration

Formal plans for making and retaining backup copies of data to enable recovering from equipment failures, power failures, and data processing errors. 1. At least one archive should be off-site so that recovery is possible even when a major disaster occurs. 2. Controls over storage libraries mirror those for data processing sites. 3. Decisions regarding backup devices and media should consider vendor (outsourcing) availability, standardization, capacity, speed, and price. 4. Backup procedures may be full (all data), incremental (data changed since the last backup of any type), or differential (data changed since the last full backup). 5. Inventory procedures—An inventory of backups, on- and off-site, must be maintained that includes, at a minimum, data set name, volume serial number, date created, accounting period, and storage location (e.g., bin). 6. Many organizations outsource responsibility for backup and restoration. Organizations that do not outsource generally purchase software systems that help manage these processes. 7. Restoration procedures should be integrated into the organization's continuity plan. 8. Backup and restoration procedures must be regularly tested and reviewed. "Grandfather, father, son" system A traditional term used to refer to a three-generation backup procedure: the "son" is the newest version of the file; the "father" is one generation back in time; the "grandfather" is two generations back in time. Note While studying for the CPA Exam, please know the following terms and their "value" designations: System backup is good; data redundancy is bad. (Of course, system backup is data redundancy, but don't get philosophical about this topic until AFTER you pass the CPA Exam.) Checkpoint and restart—Common in batch processing systems, a checkpoint is a point in data processing where processing accuracy is verified; if a problem occurs, one returns to the previous checkpoint instead of returning to the beginning of transaction processing. 
This saves time and money. Rollback and recovery—Common to online, real-time processing; all transactions are written to a transaction log when they are processed; periodic "snapshots" are taken of the master file; when a problem is detected, the recovery manager program starts with the snapshot of the master file and reprocesses all transactions that have occurred since the snapshot was taken. Network-Based Backup and Restoration—Network capabilities increasingly provide continuous backup capabilities; such backup facilities are necessary to create fault-tolerant systems (systems that continue to operate properly despite the failure of some components) and high-availability clusters (HACs). HACs are computer clusters designed to improve the availability of services; HACs are common in e-commerce environments where services must be continuously available. 1. Remote backup service (online backup service) a. An outsourcing service that provides users with an online system for backing up and storing computer files. b. Remote backup has several advantages over traditional backup methodologies: the task of creating and maintaining backup files is outsourced; the backups are off-site; some services can operate continuously, backing up each transaction as it occurs. 2. RAID—RAID (redundant array of independent disks; originally redundant array of inexpensive disks) stores the same data in different places (thus, redundantly) on multiple hard disks. By placing data on multiple disks, I/O (input/output) operations can overlap in a balanced way, improving performance. Because the data is stored redundantly, the failure of a single disk does not cause data loss, which reduces the risk of system failure. 3. Storage Area Networks (SANs)—Replicate data from and to multiple networked sites; data stored on a SAN is immediately available without the need to recover it; this enables a more effective restoration but at a relatively high cost. 4. Mirroring a. 
Maintaining an exact copy of a data set to provide multiple sources of the same information b. Mirrored sites are most frequently used in e-commerce for load balancing—distributing excess demand from the primary site to the mirrored site. c. A high-cost, high-reliability approach that is common in e-commerce.
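The distinction among full, incremental, and differential backups (item 4 above) can be sketched as follows. The file names and timestamps are hypothetical; the point is which files each backup mode selects.

```python
# Sketch of full vs. incremental vs. differential backup selection.
# A file is included based on its last-modified time relative to the
# last full backup or the last backup of any kind.

def select_files(files, last_full, last_any, mode):
    """files: dict of name -> last-modified timestamp.
    Returns the (sorted) names a backup of the given mode would copy."""
    if mode == "full":
        return sorted(files)  # everything, regardless of timestamps
    if mode == "incremental":
        return sorted(n for n, t in files.items() if t > last_any)
    if mode == "differential":
        return sorted(n for n, t in files.items() if t > last_full)
    raise ValueError(mode)

files = {"gl.dat": 5, "ap.dat": 12, "ar.dat": 20}
# Suppose the last full backup ran at time 10 and the most recent
# backup of any kind ran at time 15.
select_files(files, 10, 15, "incremental")   # only files changed since 15
select_files(files, 10, 15, "differential")  # files changed since the full
```

Note the trade-off this illustrates: incremental backups are smaller, but restoration requires the full backup plus every incremental since; a differential restore needs only the full backup plus the latest differential.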

Mobile Device, End-User, and Small Business Computing

I. End-User and Small Business Computing—The creation, implementation, and control of systems by end users and in small businesses can increase control risks compared with systems created and operated by IT professionals in larger, traditional IT environments. End-user and small organizational computing carry unique risks. Though these risks cannot be eliminated, strong compensating controls can substantially improve organizational security and control. II. Characteristics of Small Business Environments A. Microcomputers linked to networks are used almost exclusively. B. IT is Outsourced—There is no centralized information technology department. C. Because there are too few individuals to provide for segregation of duties (in end-user environments, there is usually only a single individual), incompatible functions are frequently combined. It is critical to effective control that the functions of authorization, custody of assets, and record keeping be separated. If necessary, the duties of authorization and review/auditing may be combined. III. Specific Risks and Controls Related to Small-Organizational Computing A. Physical Access—Because personal computers are often found in openly available areas, care should be taken to make sure that doors are locked when offices are open and that removable storage devices (diskettes, CDs, DVDs, flash drives, etc.) are stored in secure locations. B. Logical Access—All machines should require a username and password in order to access the system and should be set to automatically log out of the system when they have not been used for a period of time; networked systems should protect all network-available resources from unauthorized access. C. Data Backup Procedures—Company-wide standards for backing up files should be established and enforced; if possible, this process should be centralized and automated through a network; off-site backups must be maintained on an ongoing basis. D. 
Program Development and Implementation—User-developed programs—which include spreadsheets and databases—should be subject to third-party review and testing to ensure that they operate as expected; copies of the authorized versions of these programs should be separately cataloged and maintained in a secure location. E. Data Entry and Report Production—Since it is common for a single individual to be responsible for all aspects of a transaction, all work should be regularly reviewed by an independent third party. IV. Managing Mobile Devices—The use of mobile computing devices, including iPhones, Androids, and tablet computers (e.g., the iPad and Galaxy), is now ubiquitous. These devices can enhance individual and organizational productivity but also present unique and formidable challenges to IT security and control. Use of these devices is quite recent. BlackBerry devices originated in the early 2000s while the first iPhone was introduced in 2007. Recent devices blur the boundary between mobile devices and computers. A. Risk and Security Challenges 1. Malicious applications—Mobile devices are susceptible to malicious applications that contain hidden functionalities to collect and transmit user data to third parties. While only a few examples exist of the successful exploitation of these vulnerabilities, organizations must proactively manage data security to meet these challenges by monitoring emerging security threats in mobile computing. To do otherwise is to allow mobile devices to become electronic Trojan horses that infiltrate the organization and can harvest and transmit data on operations to hackers, crackers, and spies. 2. Loss and theft—The ubiquity and portability of mobile devices make them particularly vulnerable to loss or theft. If a user loses a mobile device that links to an organizational system, and the device is not password protected, then system capabilities must enable blocking the device from accessing organizationally sensitive systems. 3. 
Restricting access and permission rights—Because of the increased risks of loss or theft of mobile devices, it may be desirable to allow users fewer access and permission rights on mobile devices than on desktop devices, or laptops that remain on organizational property. For example, users may be permitted to change some files on desktop devices and to view (but not change) these files on mobile devices (called view-only access). B. Benefits—Mobile devices enable ubiquitous computing, including integration with enterprise-wide systems and cloud-based storage vendors. Enabling users to connect—using hand-held, mobile devices—to enterprise-wide systems allows for data capture and transmission at point of origin, which can significantly improve information quality. C. User and Usability Challenges—Redesigning organizational information displays to fit on hand-held, mobile devices poses formidable design challenges. Additional challenges include changing functionalities in the mobile environment (e.g., browser and JavaScript availability), the speed of the mobile network, the computation load of encryption, and user input from a tiny keyboard or from a voice-driven application (e.g., Siri on the iPhone). Such challenges can include requiring a user to type a complex, long password on a tiny touch keyboard where system lockout occurs after two mistaken password entries. D. User Training—Mobile security training is essential to effective security awareness. Training programs must teach users organizational policies on the use of mobiles, password maintenance procedures, when and how to use (and not use) mobile devices, and procedures in the event a device is lost or stolen. Authorization is most likely to be absent in a small business computing environment. There is a great need for third-party review and testing within the small business computing environment.

Organizational Continuity Planning and Disaster Recovery

I. Organizational (Business) Continuity Planning—The disaster recovery plan relates to organizational processes and structures that will enable an organization to recover from a disaster. Business (or organizational) continuity management (sometimes abbreviated BCM) is the process of planning for such occurrences and embedding this plan in an organization's culture. Hence, BCM is one element of organizational risk management. It consists of identifying events that may threaten an organization's ability to deliver products and services, and creating a structure that ensures smooth and continuous operations in the event the identified risks occur. One six-step model of this process (from the Business Continuity Institute) is: A. Create a BCM Policy and Program—Create a framework and structure around which the BCM is created. This includes defining the scope of the BCM plan, identifying roles in this plan, and assigning roles to individuals. B. Understand and Evaluate Organizational Risks—Identifying the importance of activities and processes is critical to determining the costs needed to prevent interruption and to ensure restoration in the event of interruption. A business impact analysis (BIA) will identify the maximum tolerable interruption periods by function and organizational activity. C. Determine Business Continuity Strategies—Having defined the critical activities and tolerable interruption periods, define alternative methods to ensure sustainable delivery of products and services. Key decisions related to the strategy include desired recovery times, distance to recovery facilities, required personnel, supporting technologies, and impact on stakeholders. D. Develop and Implement a BCM Response—Document and formalize the BCM plan. Define protocols for defining and handling crisis incidents. Create, assign roles to, and train the incident response team(s). E. 
Exercise, Maintain, and Review the Plan—Exercising the plan involves testing the required technology and implementing all aspects of the recovery process. Maintenance and review require updating the plan as business processes and risks evolve. F. Embed the BCM in the Organization's Culture—Design and deliver education, training, and awareness materials that enable effective responses to identified risks. Manage change processes to ensure that the BCM becomes a part of the organization's culture. G. The following figure illustrates the prioritization of BCP risks by the importance of the function to the organization's mission. Risk prioritization would be part of the second phase of BCM. Disaster Recovery Plans (DRPs) A. DRPs enable organizations to recover from disasters and continue operations. They are integral to an organization's system of internal control. DRP processes include maintaining program and data files and enabling transaction processing facilities. In addition to backup data files, DRPs must identify mission-critical tasks and ensure that processing for these tasks can continue with virtually no interruptions, at an affordable cost. 1. Examples of natural disasters include fires, floods, earthquakes, tornadoes, ice storms, and windstorms. Examples of human-induced disasters include terrorist attacks, software failures (e.g., American Airlines' recent flight control system failure), power plant failures, explosions (e.g., Chernobyl), chemical spills, gas leaks, and fires. B. Two Important Goals of Disaster Recovery Planning 1. The recovery point objective (RPO) defines the acceptable amount of data lost in an incident. Typically, it is stated in hours and defines the regularity of backups. For example, one organization might set an RPO of one minute, meaning that backups would occur every minute, and up to one minute of data might need to be re-entered into the system. 
Another organization, or the same organization in relation to a less mission-critical system, might set an RPO of six hours. 2. The recovery time objective (RTO) defines the acceptable downtime for a system or, less commonly, for an organization. It specifies the longest acceptable time for a system to be inoperable. C. Disaster recovery plans are classified by the types of backup facilities, the time required to resume processing, and the organizational relations of the site: 1. Cold site ("empty shell")—An off-site location that has all the electrical connections and other physical requirements for data processing, but does not have the actual equipment or files. Cold sites often require one to three days to be made operational. A cold site is the least expensive type of alternative processing facility available to the organization. If on a mobile unit (e.g., a truck bed), it is called a mobile cold site. 2. Warm site—A location to which the business can relocate after a disaster; it is already stocked with computer hardware similar to that of the original site but does not contain backed-up copies of data and information. If on a mobile unit, it is called a mobile warm site. 3. Hot site a. An off-site location completely equipped to quickly resume data processing. b. All equipment plus backup copies of essential data files and programs are often at the site. c. Enables resumed operations with minimal disruption, typically within a few hours. d. More expensive than warm and cold sites. 4. Mirrored site—Fully redundant, fully staffed, and fully equipped site with real-time data replication of mission-critical systems. Such sites are expensive and used for mission-critical systems (e.g., credit card processing at VISA and MasterCard). 5. Reciprocal agreement—An agreement between two or more organizations (with compatible computer facilities) to aid each other with data processing needs in the event of a disaster. Also called a "mutual aid pact." 
May be cold, warm, or hot. 6. Internal site—Large organizations (e.g., Walmart) with multiple data processing centers often rely upon their own sites for backup in the event of a disaster.
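The relationship between a system's RTO and the choice among the site types above can be sketched as follows. The hour thresholds are illustrative assumptions only; actual thresholds depend on the organization's business impact analysis and budget.

```python
# Sketch: mapping a recovery time objective (RTO, in hours) to a
# recovery-site strategy. Thresholds are hypothetical assumptions.

def recovery_site(rto_hours):
    """Suggest a site type for a given recovery time objective."""
    if rto_hours < 1:
        return "mirrored site"  # near-zero downtime; real-time replication
    if rto_hours <= 8:
        return "hot site"       # resume within a few hours
    if rto_hours <= 24:
        return "warm site"      # hardware in place; data must be restored
    return "cold site"          # often one to three days to be operational

recovery_site(0.5)  # a sub-hour RTO points toward a mirrored site
recovery_site(48)   # a two-day RTO can tolerate a cold site
```

The shorter the RTO, the more expensive the strategy, which is why the BIA's identification of mission-critical functions drives this choice.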

Processing, File, and Output Controls

Input and origination controls—Control over the data entry and data origination process Processing and file controls—Controls over processing and files, including the master file update process Output controls—Control over the production of reports Exam Tip When answering questions about application controls, an important determinant of the correct answer is the processing method. Processing Controls—Controls designed to ensure that master file updates are completed accurately and completely. Controls also serve to detect unauthorized transactions entered into the system and maintain processing integrity. A. Run-to-Run Controls—Use comparisons to monitor the batch as it moves from one programmed procedure (run) to another; totals of processed transactions are reconciled to batch totals—any difference indicates an error. Also called "control totals." B. Internal Labels ("Header" and "Trailer" Records)—Used primarily in batch processing, electronic file identification allows the update program to determine that the correct file is being used for the update process. C. Audit Trail Controls—Each transaction is written to a transaction log as the transaction is processed; the transaction logs become an electronic audit trail allowing the transaction to be traced through each stage of processing; electronic transaction logs constitute the principal audit trail for online, real-time systems. Types of Files A. Accounting systems typically include the following four file types: 1. Master files are updated by postings to transaction files. 2. Standing data is a subcategory of master file that consists of infrequently changing master files (e.g., fixed assets, supplier names and addresses). 3. Transaction files are the basis for updating master files. 4. System control parameter files determine the workings, including error characteristics and bounds, of system runs. B. 
A primary goal of data control is to ensure that access, change, or destruction of data and storage media is authorized. File Controls—Some additional file controls that are not discussed in other lessons include: A. Parity Check (Parity Bit)—A 0 or 1 included in a byte of information that makes the sum of the bits either odd or even; using odd parity, for example, the parity bit is set so that the byte contains an odd number of 1-bits. Parity checks are one application of "check digits" (i.e., self-confirming numbers). B. Read after Write Check—Verifies that data was written correctly to disk by reading what was just written and comparing it to the source. C. Echo Check—Verifies that transmission between devices is accurate by "echoing back" the received transmission from the receiving device to the sending unit. D. Error Reporting and Resolution—Controls to ensure that generated errors are reported and resolved by individuals who are independent of the initiation of transactions (segregation of duties). E. Boundary Protection—Sort of a computer traffic cop. When multiple programs and/or users are running simultaneously and sharing the same resource (usually the primary memory of a CPU), boundary protection prevents program instructions and data from one program from overwriting the program instructions or data from another program. F. Internal Labels ("Header" and "Trailer" Records)—Used primarily in batch processing, electronic file identification allows the update program to determine that the correct file is being used for the update process. Read by the system. Very important for removable storage. G. External Labels—Labels on removable storage that are read by humans. H. Version Control—Procedures and software to ensure that the correct file version is used in processing (e.g., for transaction files). I. File Access and Updating Controls—These controls ensure that only authorized, valid users can access and update files. J. 
Output Controls—Ensure that computer reports are accurate and are distributed only as authorized. 1. Spooling (print queue) controls—Jobs sent to a printer that cannot be printed immediately are spooled—stored temporarily on disk—while waiting to be printed; access to this temporary storage must be controlled to prevent unauthorized access to the files. 2. Disposal of aborted print jobs—Reports are sometimes damaged during the printing or bursting (i.e., separation of continuous feed paper along perforation lines) process; since the damaged reports may contain sensitive data, they should be disposed of using secure disposal techniques. 3. Distribution of reports—Data control is responsible for ensuring that reports are maintained in a secure environment before distribution and that only authorized recipients receive the reports; a distribution log is generally maintained to record transfer of the reports to the recipients. 4. End user controls—For particularly critical control totals, or where end users have created systems, perform checks of processing totals and reconciling report totals to separately maintained records. This is also sometimes called one-to-one checking. 5. Logging and archiving of forms, data and programs—Should be in a secure, off-site location. 6. Record retention and disposal—This is discussed in the "Program Library, Documentation, and Record Management" lesson.
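The parity-check mechanism described above can be sketched as follows. This is a minimal illustration of odd parity; the example bit pattern is hypothetical.

```python
# Sketch of an odd-parity check: the parity bit is chosen so the
# total count of 1-bits (data bits plus parity bit) is odd. A single
# flipped bit in storage or transmission then makes the check fail.

def odd_parity_bit(data_bits):
    """Return the parity bit that makes the total number of 1s odd."""
    ones = sum(data_bits)
    return 0 if ones % 2 == 1 else 1

def check_odd_parity(data_bits, parity_bit):
    """True if data bits plus parity bit contain an odd number of 1s."""
    return (sum(data_bits) + parity_bit) % 2 == 1

bits = [1, 0, 1, 1, 0, 0, 1]        # four 1-bits (even), so parity = 1
p = odd_parity_bit(bits)            # 1
check_odd_parity(bits, p)           # True: data arrived intact
check_odd_parity([0] + bits[1:], p) # False: a flipped bit is detected
```

Note that parity detects any single-bit error but not a two-bit error, since two flips leave the count's oddness unchanged.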

Input and Origination Controls

Input and origination controls—Control over data entry and data origination process Processing and file controls—Controls over processing and files, including the master file update process Output controls—Control over the production of reports Input Controls—(Also known as programmed controls, edit checks, or automated controls.) Introduction—Ensure that the transactions entered into the system meet the following control objectives: 1. Valid—All transactions are appropriately authorized; no fictitious transactions are present; no duplicate transactions are included. 2. Complete—All transactions have been captured; there are no missing transactions. 3. Accurate—All data has been correctly transcribed, all account codes are valid; all data fields are present; all data values are appropriate. Here are some important input controls: 1. Missing data check—The simplest type of test available: checks only to see that something has been entered into the field. 2. Field check (data type/data format check)—Verifies that the data entered is of an acceptable type—alphabetic, numeric, a certain number of characters, etc. 3. Limit test—Checks to see that a numeric field does not exceed a specified value; for example, the number of hours worked per week is not greater than 60. There are several variations of limit tests: a. Range tests—Validate upper and lower limits; for example, the price per gallon cannot be less than $4.00 or greater than $10.00. b. Sign tests—Verify that numeric data has the appropriate sign (positive or negative); for example, the quantity purchased cannot be negative. 4. Valid code test (validity test)—Checks to make sure that each account code entered into the system is a valid (existing) code; this control does not ensure that the code is correct, merely that it exists. a. In a database system, this is called referential integrity (e.g., an important control to prevent the creation of fake entities, vendors, customers, employees). 5. 
Check digit—Designed to ensure that each account code entered into the system is both valid and correct. The check digit is a number created by applying an arithmetic algorithm to the digits of a number, for example, a customer's account code. The algorithm yields a single digit appended to the end of the code. Whenever the account code (including check digit) is entered, the computer recalculates the check digit and compares the calculated check digit to the digit entered. If the digits fail to match, then there is an error in the code, and processing is halted. a. A highly reliable method for ensuring that the correct code has been entered b. A parity check (from the "Processing, File, and Output Controls" lesson) is one form of a check digit 6. Reasonableness check (logic test)—Checks to see that data in two or more fields is consistent. For example, a rate of pay value of "$3,500" and a pay period value of "hourly" may be valid values for the fields when the fields are viewed independently; however, the combination (an hourly pay rate of $3,500) is not valid. 7. Sequence check—Verifies that all items in a numerical sequence (check numbers, invoice numbers, etc.) are present. This check is the most commonly used control for validating processing completeness. 8. Key verification—The rekeying (i.e., retyping) of critical data in the transaction, followed by a comparison of the two keyings. For example, in a batch environment, one operator keys in all of the data for the transactions while a second operator rekeys all of the account codes and amounts. The system compares the results and reports any differences. Key verification is generally found in batch systems, but can be used in online real-time environments as well. As a second example, consider the process required to change a password: enter the old password, enter the new password, and then re-enter (i.e., key verify) the new password. This is a wasteful procedure that we all hope dies soon. 9. 
Closed loop verification—Helps ensure that a valid and correct account code has been entered; after the code is entered, this system looks up and displays additional information about the selected code. For example, the operator enters a customer code, and the system displays the customer's name and address. Available only in online real-time systems. 10. Batch control totals—Manually calculated totals of various fields of the documents in a batch. Batch totals are compared to computer-calculated totals and are used to ensure the accuracy and completeness of data entry. Batch control totals are available, of course, only for batch processing systems or applications. a. Financial totals—Totals of a currency field that result in meaningful totals, such as the dollar amounts of checks. (Note that the total of the hourly rates of pay for all employees, e.g., is not a financial total because the summation has no accounting-system meaning.) b. Hash totals—Totals of a field, usually an account code field, for which the total has no logical meaning, such as a total of customer account numbers in a batch of invoices. c. Record counts—Count of the number of documents in a batch or the number of lines on the documents in a batch. 11. Preprinted forms and preformatted screens—Reduce the likelihood of data entry errors by organizing input data logically: when the position and alignment of data fields on a data entry screen matches the organization of the fields on the source document, data entry is faster, and there are fewer errors. 12. Default values—Pre-supplied (pre-filled) data values for a field when that value can be reasonably predicted; for example, when entering sales data, the sales order date is usually the current date; fields using default values generate fewer errors than other fields. 13. 
Automated data capture—Use of automated equipment such as bar code scanners to reduce the amount of manual data entry; reducing human involvement reduces the number of errors in the system.
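The check-digit control described above (item 5) can be illustrated with the Luhn algorithm, a widely used check-digit scheme (e.g., for credit card numbers). The account codes below are hypothetical; this is a sketch of the recompute-and-compare edit, not any particular system's implementation.

```python
# Sketch of a check digit using the Luhn algorithm. The system
# recomputes the check digit on entry and compares it to the digit
# keyed in; a mismatch halts processing.

def luhn_check_digit(code):
    """Compute the Luhn check digit for a numeric code string."""
    total = 0
    # Double every second digit from the right (these become the
    # even positions once the check digit is appended); subtract 9
    # from any doubled value greater than 9.
    for i, c in enumerate(reversed(code)):
        d = int(c)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def is_valid(code_with_check):
    """Recompute and compare, as the data-entry edit would."""
    return luhn_check_digit(code_with_check[:-1]) == int(code_with_check[-1])

luhn_check_digit("7992739871")  # 3
is_valid("79927398713")         # True
is_valid("79927398703")         # False: wrong check digit, entry rejected
```

Unlike a simple validity test, which only confirms a code exists, the check digit catches most transcription errors (a mistyped digit, transposed digits) before a lookup is even attempted.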

Transaction Processing

Manual Processing of Accounting Information—The steps in the classic manual accounting process model are as follows: A. A business transaction occurs and is captured on a source document. B. Data from the source document is recorded chronologically in a journal (journalizing): 1. The journal records the complete accounting transaction—the debit and the credit. C. Individual debits and credits are copied from the journal to the ledgers (posting); all transactions are posted to the general ledger, and many are also posted to a subsidiary ledger. 1. The general ledger—Classifies transactions by financial statement accounts (cash, inventory, accounts payable, sales revenue, supplies expense, etc.). 2. The subsidiary ledgers (subledgers)—Classify transactions by alternative accounts (e.g., customer accounts, vendor accounts, product accounts). Not all transactions are posted to subledgers: each subledger corresponds to a single general ledger account, and only transactions that affect that account are posted in the subledger. Examples of subledgers include: a. A/R subledger—Classifies A/R transactions (credit sales and customer payments) by customer b. A/P subledger—Classifies A/P transactions (credit purchases and payments to vendors) by vendor c. Inventory subledger—Classifies inventory transactions (product purchases and product sales) by product D. The ledgers are used to produce summarized account reports: 1. The general ledger produces the trial balance and financial statements. 2. The subsidiary ledgers produce reports consistent with their content (customer A/R balances, vendor A/P balances, etc.). III. Computerized Processing of Accounting Information—In mostly automated accounting systems, transaction data is captured and recorded chronologically; it is then reclassified and summarized by account; finally, the account summaries are used to produce financial statements and other reports. 
Files used to record this information correspond roughly to journals and ledgers. However, these activities occur much more quickly in automated than in manual systems. A. Data Entry/Data Capture—When a transaction occurs, the data may be manually recorded on a physical source document and then keyed into the system, or the data may be captured electronically using automated data capture equipment such as bar code readers. 1. The transaction data is recorded in a transaction file: a. Transaction files—In a computerized environment, they are equivalent to journals in a manual environment. b. Transaction files are temporary files—Data in the transaction files is periodically purged from the system to improve system performance. B. Master File Update—Data from the transaction files is used to update account balances in the master files. For example, the data from recording a utilities bill payment would be used to increase the balance of the utilities expense account and decrease the balance of the cash account in the general ledger master file. 1. Master files are used to maintain transaction totals by account: a. Master files—In a computerized environment, they are equivalent to ledgers in a manual environment. b. The general ledger and the subsidiary ledgers are all examples of master files. c. Master files are permanent files. The individual account balances change as transactions are processed but the accounts and master files themselves are never deleted. C. System Output—The master file account balances are used to produce most reports. 1. The general ledger master file is used to produce the financial statements. IV. Processing Methods—The processing method refers to the way computerized systems capture data and update the master file. Two principal methods are employed: A. Batch Processing—Batch processing is a periodic transaction processing method in which transactions are processed in groups: 1. 
Input documents are collected and grouped by type of transaction. These groups are called "batches." Batches are processed periodically (e.g., daily, weekly, or monthly). 2. Batch processing is accomplished in four steps: Step 1: Data entry: The transaction data is manually keyed (usually) and recorded in a transaction file. Step 2: Preliminary edits: The transaction file data is run through an edit program that checks the data for completeness and accuracy; invalid transactions are corrected and re-entered. Step 3: Sorting: The edited transaction file records are sorted into the same order as the master file. Step 4: Master file update: The individual debits and credits are used to update the related account balance in the general ledger master file and, if appropriate, in the subsidiary ledger master file. 3. Batch controls—One or more batch control totals are usually calculated for each batch. a. The manually calculated batch control total is compared to computer-generated batch control totals as the batch moves through the update process. b. Differences between the two control totals indicate a processing error. 4. Batch processing is a sequential processing method—transactions are sorted in order to match the master file being updated. a. In some situations, sequential transaction processing can dramatically improve transaction processing efficiency. 5. Time lags—An inherent part of batch processing: There is always a time delay between the time the transaction occurs, the time that the transaction is recorded, and the time that the master file is updated. Thus, under batch processing: a. The accounting records are not always current. b. Detection of transaction errors is delayed. 6. Batch processing is appropriate when: a. Transactions occur periodically (e.g., once a week, once a month, etc.). b. A significant portion of the master file records will be updated. c. 
Transactions are independent (e.g., no other time-critical activities depend on the transaction in question). B. Online, Real-Time (OLRT) Processing—OLRT is a continuous, immediate transaction processing method in which transactions are processed individually as they occur. 1. In OLRT processing, transactions are entered and the master files updated as transactions occur. a. Requires random access devices such as magnetic disk drives to process transactions. 2. Each transaction goes through all processing steps (data entry, data edit, and master file update) before the next transaction is processed. Thus, under OLRT processing: a. The accounting records are always current. b. Detection of transaction errors is immediate. 3. Because transactions are processed as they occur, OLRT systems generally require a networked computer system to permit data entered at many locations to update a common set of master files; this means that OLRT systems are more expensive to operate than batch systems. 4. OLRT systems are desirable whenever: a. It is critical to have very current information. b. Transactions are continuous and interdependent as, for example, when a sales order is received: sales orders are received continuously and, once approved, cause other activities to occur (e.g., picking the goods in the warehouse, shipping the goods to the customer, invoicing the customer). Point-of-Sale (POS) Systems—POS systems are one of the most commonly encountered data capture systems in the marketplace today. POS systems combine online, real-time processing with automated data capture technology, resulting in a system that is highly accurate, reliable, and timely.
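The four batch-processing steps above can be sketched in code. This is a minimal illustration, not a real system; the file layouts, account names, and amounts are assumptions made for the example. Note how the manually calculated batch control total is compared to the computer-generated total before the master file update.

```python
# Sketch of batch processing: data entry -> edit -> sort -> master file update,
# with a batch control total check. All data here is illustrative.

def edit(transactions):
    """Step 2 (preliminary edits): keep only complete, valid transactions."""
    return [t for t in transactions
            if t.get("account") and isinstance(t.get("amount"), (int, float))]

def update_master(master, batch, manual_control_total):
    """Steps 3-4: sort the batch into master-file order, verify the batch
    control total, then post each amount to its account balance."""
    batch = sorted(batch, key=lambda t: t["account"])    # Step 3: sort
    computed_total = sum(t["amount"] for t in batch)     # computer-generated total
    if computed_total != manual_control_total:           # mismatch = processing error
        raise ValueError("Batch control total mismatch")
    for t in batch:                                      # Step 4: master file update
        master[t["account"]] = master.get(t["account"], 0) + t["amount"]
    return master

# Step 1 (data entry): a keyed transaction file (a temporary file, per the lesson),
# recording the utilities bill payment example (debit expense, credit cash).
txns = [{"account": "6100-Utilities", "amount": 250.0},
        {"account": "1000-Cash", "amount": -250.0}]
master = update_master({"1000-Cash": 1000.0}, edit(txns), manual_control_total=0.0)
```

In an OLRT system, by contrast, `update_master` would be called once per transaction as it occurs rather than once per accumulated batch.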

Data, Meta-data, and Data Integrity

Metadata is "a set of data that describes and gives further detail about a dataset." III. Criteria for Describing Data A. The AICPA (2020) lists three criteria for defining, documenting, and evaluating a dataset. These criteria specify the metadata that should be included with a dataset. B. Criterion 1—The description includes the dataset's purpose. 1. The most important element of metadata is the purpose, or intended uses, of a dataset. Providing information about a dataset's purpose allows users to determine whether the data will be useful to their specific need. 2. Data may be collected for one specific purpose (e.g., evaluating the performance of one component of a manufacturing machine). Or data may be collected for multiple or dissimilar purposes (e.g., Facebook activity data is used for many purposes, including social science and marketing research, by marketers to identify potential customers, by political campaigns, and, obviously, by Facebook to manage and update its website). 3. Some examples of datasets and their purpose(s): a. US Census Data supports the US Congress in determining the allocation of seats in the House of Representatives. And, of course, this data is repurposed for many other uses. b. Automobile dealer inventory data helps auto manufacturers choose production volumes and manage pricing and marketing strategies. C. Criterion 2—The description of the set of data is complete and accurate; it includes the ten elements or fields listed below. An acronym for these elements or fields is: PURPS STUFF where: P(opulation) U(nits) R(ecords) P(recision) S(ample) S(ources) T(ime) U(ncertainty) F(ields) F(ilters) 1. The population and sample of events or instances a. In statistics, the population is the group of units that we want to describe or predict. A sample is a group of units that we have chosen to represent the population. b. Historically, samples were much smaller than populations. 
Now, however, with the rise of big data, we often take samples that are the entire population (as discussed in the "Big Data" lesson). c. Even in a big data sample, the sample may miss some members of the population. For example, imagine that an auditor wants to take a sample of the population of all systems activity for a client's ERP system for the year. However, system log data for several time periods is missing from the sample because of computer hardware failures. Hence, in this case, the sample would be smaller than the population. The auditor would then need to assess whether the sample is still usable for achieving the specified audit objective. 2. The nature of each element: fields and records. a. The "Data Structures, Software, and Databases" lesson describes a field as "a group of characters (bytes) identifying a characteristic of an entity. A data value is a specific value found in a field. Fields can consist of a single character (Y, N) but usually consist of a group of characters. Each field is defined as a specific data type. Date, Text, and Number are common data types." b. The same lesson describes a record as "a group of related fields (or attributes) describing an individual instance of an entity (a specific invoice, a particular customer, an individual product)." c. Example of a field: The data field "attendance at a ball game" might consist of the number of tickets sold, people entering the stadium (recorded by a scanner or turnstiles), or all people at the stadium who are not employees. The attendance reported will depend on how attendance is defined and reported. d. Example of a field: The data field "size of a warehouse" might be measured as (a) the number of square feet in the warehouse floor used for products for resale, (b) a certain type of inventory item that is stored in this space, or (c) divisible storage areas within the warehouse. 
The measure used for the data field should result from, to the extent possible, the purpose of collecting the data. 3. The sources of the data a. Identifying the source of the data helps in assessing its reliability and credibility. For example, data provided by client management whom the auditor deems less than reliable is likely to receive greater scrutiny than data provided by client management that is deemed trustworthy. b. The identification of the source of the data should be sufficiently specific to allow for reproducibility. That is, a user should be able to take the description of the data source and, assuming access to the source, reproduce or re-create the data as provided. c. In addition, where data has been manipulated or transformed, it can be helpful to know the data preparation and cleaning (i.e., ETL) process that gave rise to the data. For more discussion of the ETL process, please see the "Data Analytics" lesson. 4. The units of measurement of data elements a. Measures of units can differ. For example, currency can be measured in dollars, British pounds, euros, and others. Even within a currency, the units of measure can be in hundreds, thousands, millions, and so on. Unless they are obvious, the units of measure of fields should be specified. 5. The accuracy, correctness, or precision of measurement a. There are limits on the accuracy or precision of most measures. For example, when conducting an analytical review, an account balance that is accurate to the nearest $1,000 is likely to be sufficiently accurate. In contrast, when conducting an account reconciliation, account balances to the second decimal (i.e., to the penny) are required. Hence, the accuracy or precision of a measurement can vary with the purpose or goal of the analysis. 6. The uncertainty or confidence interval inherent in each data element and in the population of those elements a. Indicators of variability measure how spread out a dataset is. 
For example, if all students taking a test get an A grade, then there will be no variability in the test scores. In contrast, if an equal number of students get As, Cs, and Fs, there will be high variability in the test scores. b. Some measures of uncertainty or variability include: i. The standard deviation is a standardized measure of dispersion (variation) in a variable (or field). It is discussed in the "Statistics in Data Analysis" lesson. ii. Measures of historical variability, such as the highest and lowest temperatures ever recorded at a location. iii. The margin of error in polling data. To quote from the Pew Research Center (Mercer 2016): "The margin of error describes how close we expect a survey result to fall relative to the true population value. For example, a margin of error of plus or minus 3 percentage points at the 95% confidence level means that if we fielded the same survey 100 times, we would expect the result to be within 3 percentage points of the true population value 95 of those times." 7. The time period(s) over which the set of data was measured or the period during which the events the data relates to occurred a. Identifying the time period of the data is critical to measurement and use of data. For example, if you have accounts receivable transaction data for 300 instead of 365 days out of the year, the obvious question is: "How and when can I get the other days' data?" 8. Filters and other factors a. If the dataset is filtered, then the criteria used to determine inclusion or exclusion of items in the dataset should be identified. 9. For some data, additional data specification may be useful. For example, it may be useful to identify the ownership of the data, its classification according to security and privacy (e.g., if data is covered by legal constraints), access privileges, version number, and retention and disposition requirements. 
Some data specification requirements may result from a need to comply with generally accepted privacy principles (GAPP), which are discussed in the "IT Security Principles" lesson. D. Criterion 3—The data description identifies information that has not been included within the set of data or description but is necessary to understand each data element and the population. See the following figure for examples of data descriptions. 1. For example, some data requires special knowledge. When the specialized knowledge needed to use or understand a dataset may not be obvious to users, the dataset provider should indicate or include such information. For example: a. Users of company financial statements must have some understanding of generally accepted accounting principles (GAAP) to use financial statements. For example, users need to know the accounting equation and some specialized financial language (e.g., earnings before interest and taxes [EBIT]) to use financial statements. b. If the US census data did not include the definition of a household, information about where and how to locate this metadata should be provided, since these data are arranged by households. c. A dataset describing a company's crude oil inventory must include data about American Petroleum Institute gravity definitions of each grade held. (You do not need this for the CPA exam, but petroleum gravity definitions indicate how heavy a grade of oil is compared with water.) Definition: The confidence interval is a range of values within which, with a specified degree of confidence, the true value lies. Hence, it is a measure of uncertainty.
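Criterion 2's ten "PURPS STUFF" elements can be thought of as required fields in a dataset description. The sketch below checks a description for completeness; the field names and the accounts receivable example (including the 300-of-365-days gap mentioned under element 7) are illustrative assumptions, not a prescribed schema.

```python
# Sketch: validate that a dataset description covers the ten
# "PURPS STUFF" metadata elements from Criterion 2.

REQUIRED_ELEMENTS = {"population", "units", "records", "precision", "sample",
                     "sources", "time_period", "uncertainty", "fields", "filters"}

def missing_elements(description):
    """Return the required metadata elements absent from a description."""
    return REQUIRED_ELEMENTS - description.keys()

# A hypothetical description of an accounts receivable dataset
ar_description = {
    "population": "All 20X1 accounts receivable transactions",
    "sample":     "Transactions for 300 of 365 days (65 days lost to log failures)",
    "records":    "One record per invoice",
    "fields":     ["invoice_no", "customer_id", "amount"],
    "units":      "US dollars",
    "precision":  "Accurate to $0.01 (the penny)",
    "sources":    "AR subledger export; reproducible from the system of record",
    "time_period": "Jan 1 - Dec 31, 20X1, with 65 days missing",
    "uncertainty": "Completeness risk from the missing days",
    "filters":    "Excludes voided invoices",
}
assert missing_elements(ar_description) == set()  # description is complete
```

A user of this dataset could see at a glance, for instance, that the sample is smaller than the population and assess whether it remains usable for the audit objective.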

Data Governance and Data Management

The Business and Its Data A. Because of the data analytics revolution, data management and governance have emerged as key enablers of business success. The goal of these efforts is to create or enable turning a data lake (i.e., an unfiltered pool of big data) into a data warehouse (i.e., a structured, filtered data repository for solving business problems). Stage 1—Establish a Data Governance Foundation A. Creating a foundation for data governance enables addressing the legal, intellectual property, and customer privacy and security issues that arise with data use and storage. Management of ZA (the example company in this lesson) expects the following benefits from establishing a foundation for data governance: 1. Define and manage data as an asset. 2. Define data ownership, stewardship, and custodianship roles and responsibilities. 3. Implement data governance. B. Data governance will be designed to answer the following questions: 1. WHAT data does ZA have, need, and use? 2. WHEN do data governance practices occur in the ZA data life cycle? 3. WHO is responsible for ZA's data governance? 4. HOW will data be managed in ZA? C. WHAT—Data Classification and Data Taxonomy 1. Data classification defines the privacy and security properties of data. For example, the figure in the lesson shows (on the horizontal axis) data classifications of public, internal, confidential, and sensitive. Complete data classification would also consider applicable laws and regulations (e.g., Health Insurance Portability and Accountability Act [HIPAA] requirements, and the General Data Protection Regulation [GDPR]—the European Union's general privacy and security law). 2. The data taxonomy categorizes the data within the organization's structure and hierarchy. D. WHEN—The Data Life Cycle—Mapping Data Governance Activities 1. The data life cycle overviews the steps in managing and preserving data for use and reuse. 
The figure below summarizes this model, which is like the system development life cycle in the "System Development and Implementation" lesson. By standardizing their use of the data life cycle model, organizations increase the likelihood that data will be usable and long-lived. E. WHO—The Data Governance Structure and Data Stewardship ZA's growth through acquisitions has created inconsistent business processes and multiple, unconnected datasets. Because of this, ZA created an oversight data governance organizational structure to direct, evaluate, and monitor data governance issues. This organizational structure helps ensure that data assets are complete, accurate, and in compliance with internal policies and external regulations. This governance structure consists of a set of data governance committees. F. As a part of this governance structure, ZA defined three key data roles: 1. Data owner—A senior-level, strategic oversight role a. Responsible for major data decisions and the overall value, risk, quality and utility of data. 2. Data steward—A tactical role a. Ensures that data assets are used and compliant. Facilitates consensus about data definitions, quality, and use. b. May be an individual or a group. 3. Data custodian—An IT operational role a. Ensures that data-related IT controls are implemented and operating. b. Implements IT capabilities and manages the IT architecture. c. May be an individual or a group. 4. The next figure is a RACI chart that illustrates the data stewardship roles of the data owner, steward, and custodian across the data life cycle. a. RACI is an acronym that stands for: i. Responsible—Does the work to complete the task. ii. Accountable—Delegates the work and is the last one to review the task or deliverable before completion. iii. Consulted—Deliverables are strengthened by review and consultation from multiple team members. iv. Informed—Informed of project progress. G. HOW—Data Governance Policies and Standards 1. 
Assessing an organization's data-related risks is important to developing effective data governance. (See the Enterprise Risk Management Frameworks module for more about assessing risk.) This assessment will include identification of data-relevant laws and regulations with which the organization must comply. Stage 2—Establish and Evolve the Data Architecture A. Stage 2 discusses the data standardization that must occur to facilitate the data architecture. B. Data architecture describes "the structure and interaction of the major types and sources of data, logical data assets, physical data assets and data management resources of the enterprise" (ISACA 2020). 1. A logical data asset model shows the data at the level of business requirements. For example, in an accounts receivable system, a logical data model would show the entities (e.g., customers, products, sales prices, sales orders, sales transactions) and their relationships (e.g., customers can have multiple sales orders; each product has only one sales price). 2. A physical data asset model shows how the data are stored in the organization's accounting system. For example, the sales price field is stored in the Sales Lookup Table in US $ as a real number with eight digits including two decimal places. C. Data Standardization Requirements 1. Harvested data is often messy or polluted. Cleaning this data requires an ETL (extract, transform, load) process. (See the "Data Analytics" lesson for a description of data cleaning.) 2. This process has two goals: a. To clean and standardize data for use and reuse b. To standardize the data management process to achieve greater efficiency and data quality D. Data Models to Be Standardized 1. A typical process of standardizing data models recognizes three levels of data: a. Conceptual—A high-level, abstract, enterprise-wide view b. Logical—A level that adds details to the conceptual level to more completely describe the business requirements for the data c. 
Physical—The level that specifies how data will be encoded and stored in a database (e.g., SQL and NoSQL) and considers issues of processing speed, accessibility, and distribution (e.g., cloud versus local storage) E. Establish and Standardize Metadata and Master Data 1. ZA's data governance is complicated by having datasets that were created in multiple predecessor companies that were acquired by ZA. Because of this, ZA must engage in data mapping, or converting data from multiple previous systems into a standardized data map that will be used for the enterprise-wide data architecture. 2. The data map specifies how the old data set will be converted to the new, standardized, enterprise-wide data structure. 3. Master data is the core data that uniquely identify entities such as customers, suppliers, employees, products, and services. Master data is stable; it changes infrequently. 4. Metadata is described in the "Big Data" lesson as "data about data." F. Publish and Apply the Data Standards 1. The enterprise-wide data standards are encoded in the data dictionary, which "is a central repository where detailed data definitions can be found as the single source of trust" (ISACA 2020). VI. Stage 3—Define, Execute, Assure Data Quality and Clean Polluted Data A. Good Metadata Strategy Leads to Good Data Quality 1. After creating standards for data classification and taxonomy, the organization can create a metadata strategy that ensures high-quality, reusable data. 2. The metadata strategy is often best focused on the organization's data lake or data warehouse since this is where most of the shared data resides. B. Define Data Quality Criteria—Three General Categories 1. Next, the organization must specify which attributes of data quality matter and why. The ISACA COBIT models (5 and 2019) include three broad categories of information quality: a. Intrinsic—The extent to which data values conform with actual or true values b. 
Contextual—The extent to which information is relevant and understandable to the task for which it is collected c. Security/Accessibility—Controls over information availability and accessibility C. Execute Data Quality 1. Governing ongoing data quality is a joint project of business units and IT. IT manages the technical environment. Business units establish the rules and are ultimately responsible for data quality. D. Regular Data Quality Assessment 1. Ongoing and periodic assessments of data quality are an application of the principles found in the COSO Internal Control Monitoring Purpose and Terminology and Internal Control Monitoring and Change Control Processes lessons. VII. Stage 4—Realize Data Democratization A. What is data democratization? In the ZA data governance example, much of the process of identifying and standardizing databases and data management processes occurs in committees that include IT and the business units. However, we clean and standardize data in order to make it available to users. Data democratization is the process of creating a single-source, searchable, curated database that is shared across the organization. B. Security and privacy are fundamental to data democratization. Obviously, views of the data are managed to limit access to data subsets, as appropriate to a user's role and associated permissions. VIII. Stage 5—Focus on Data Analytics A. The primary purpose of data governance is to enable data analytics, which is discussed in the Data Analytics module.
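The logical-versus-physical distinction in Stage 2 can be made concrete in code. The sketch below implements the lesson's accounts receivable example (each product has one sales price; a sales order references a product) as a physical model: tables, keys, and a price column with eight digits including two decimal places. Table and column names are illustrative assumptions; SQLite is used only for brevity, and its type affinity does not strictly enforce NUMERIC(8,2).

```python
# Sketch: a physical data asset model for part of the lesson's logical AR model.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product (
    product_id   INTEGER PRIMARY KEY,        -- primary key: a physical-model property
    -- sales price in US $, eight digits including two decimal places
    sales_price  NUMERIC(8,2) NOT NULL
);
CREATE TABLE sales_order (
    order_id     INTEGER PRIMARY KEY,
    -- foreign key encodes the logical rule: each order line points to one product
    product_id   INTEGER NOT NULL REFERENCES product(product_id)
);
""")
conn.execute("INSERT INTO product VALUES (1, 19.99)")
conn.execute("INSERT INTO sales_order VALUES (100, 1)")
price = conn.execute(
    "SELECT p.sales_price FROM sales_order s JOIN product p USING (product_id)"
).fetchone()[0]
```

The logical model states only the business rules; the physical model commits to specific tables, data types, and primary/foreign keys, which is why keys are properties of the physical rather than the logical model.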

The COBIT Model of IT Governance and Management

The Control Objectives for Information and Related Technology (COBIT) Framework A. Introduction 1. Although there are many available models for IT governance, COBIT is a widely used international standard for identifying best practices in IT security and control. COBIT provides management with an information technology (IT) governance model that helps in delivering value from IT processes, and in understanding and managing the risks associated with IT. In addition, COBIT provides a framework that helps align IT with organizational governance. 2. COBIT bridges the gaps between strategic business requirements, accounting control needs, and the delivery of supporting IT. COBIT facilitates IT governance and helps ensure the integrity of information and information systems. 3. COBIT is consistent with, and complements, the control definitions and processes articulated in the COSO and COSO ERM models. The most important differences between the COSO and COSO ERM models and COBIT are their intended audiences and scope. The COSO and COSO ERM models provide a common internal control language for use by management, boards of directors, and internal and external auditors. In contrast, COBIT focuses on IT controls and is intended for use by IT managers, IT professionals, and internal and external auditors. 4. The COBIT framework is organized around the following components: a. Domains and processes—The IT function is divided into four domains within which 34 basic IT processes reside: i. Planning and organization—How can IT best contribute to business objectives? Establish a strategic vision for IT. Develop tactics to plan, communicate, and realize the strategic vision. ii. Acquisition and implementation—How can we acquire, implement, or develop IT solutions that address business objectives and integrate with critical business processes? iii. Delivery and support—How can we best deliver required IT services including operations, security, and training? iv. 
Monitoring—How can we best periodically assess IT quality and compliance with control requirements? Monitoring IT processes are identified as particularly relevant for the CPA Exam. The COBIT model identifies four interrelated monitoring processes: 1. M1. Monitor and evaluate IT performance—Establish a monitoring approach, including metrics, a reporting process, and a means to identify and correct deficiencies. 2. M2. Monitor and evaluate internal control—This is required by the Sarbanes-Oxley Act (SOX) Section 404. 3. M3. Ensure regulatory compliance—Identify compliance requirements and evaluate, and report on, the extent of compliance with these requirements. 4. M4. Provide IT guidance—Establish an IT governance framework that aligns with the organization's strategy and value delivery program. b. Effective IT performance management requires a monitoring process. This process includes the following: i. Information criteria—To have value to the organization, data must have the following properties or attributes: 1. Effectiveness 2. Efficiency 3. Confidentiality 4. Integrity 5. Availability 6. Compliance 7. Reliability ii. IT resources—Identify the physical resources that comprise the IT system: 1. People 2. Applications 3. Technology 4. Facilities 5. Data c. More than 300 generic COBIT control objectives are associated with the 34 basic IT processes identified in COBIT. The COBIT model, the components mentioned above, and the 34 basic IT processes are summarized in a figure in the lesson. d. Within the figure, items M1 to M4 are the processes related to monitoring, items PO1 to PO11 are the processes related to planning and organization, and so on.
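For study purposes, the COBIT structure above can be represented as a simple lookup table: four domains, each framed by its guiding question, with the four monitoring processes (M1 to M4) spelled out. This is only a memory aid built from the lesson text; process lists for the other three domains (PO, AI, DS) are omitted.

```python
# Sketch: the four COBIT domains and the M1-M4 monitoring processes,
# as listed in the lesson, organized as dictionaries for quick review.

COBIT_DOMAINS = {
    "Planning and organization":      "How can IT best contribute to business objectives?",
    "Acquisition and implementation": "How can we acquire, implement, or develop IT solutions?",
    "Delivery and support":           "How can we best deliver required IT services?",
    "Monitoring":                     "How can we best assess IT quality and control compliance?",
}

MONITORING_PROCESSES = {
    "M1": "Monitor and evaluate IT performance",
    "M2": "Monitor and evaluate internal control",  # required by SOX Section 404
    "M3": "Ensure regulatory compliance",
    "M4": "Provide IT guidance",
}
```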

Managing Cyber Risk: Part II—A Framework for Cybersecurity

B. In 2013, then-President Obama issued an executive order to enhance the security and resilience of the U.S. cyber environment. The resulting cyber risk framework was a collaboration between the government and the private sector. C. The goals of the framework included creating a common language for understanding, and cost-effective means for managing, organizational cybersecurity risks without imposing regulations. Specifically, the framework provides a means for organizations to: 1. Describe current cybersecurity and existing risks 2. Describe target or goal cybersecurity and desired types and levels of risk 3. Identify and prioritize improvements to cybersecurity 4. Assess progress toward the cybersecurity goal 5. Communicate with stakeholders about cybersecurity and risk The framework complements, but does not replace, existing risk management (e.g., COSO) and cybersecurity (e.g., COBIT) approaches. The framework consists of three parts: the core, the profile, and the implementation tiers. A. Framework Structure 1. The core includes cybersecurity activities, outcomes, and references (i.e., standards and guidelines). 2. The profiles help align organizational cybersecurity activities with business requirements, risk tolerances, and resources. 3. The implementation tiers are a way to view and understand alternative approaches to managing cybersecurity risk. B. The framework core is a matrix of four columns or elements by five rows or functions that lists activities (with examples) to achieve specific cybersecurity outcomes. The four core elements (functions, categories, subcategories, and references) are types or levels (shown as columns), and the five functions or activities (identify, protect, detect, respond, recover) appear in the figure in the lesson. The core elements relate to one another. A. 
Functions organize basic, high-level cybersecurity activities and include: Identify, Protect, Detect, Respond, and Recover (see descriptions below). They help manage cybersecurity risk by organizing information, enabling risk management, addressing threats, and enabling learning through monitoring. 1. The functions align with existing methods for incident management and help in assessing the value of cybersecurity investments. For example, investments in cybersecurity planning improve responses and recovery, which reduces the effect of cybersecurity events on service quality. B. Categories are high-level cybersecurity outcomes that link to organizational needs and activities. Examples of categories are: asset management, access control, physical security, and incident detection processes. C. Subcategories divide categories into specific outcomes of technical and/or management activities. In accounting and auditing terms, these are high-level control goals. Examples include: Identify and catalog external information systems; Protect data at rest; and Investigate notifications from detection systems. D. (Informative) References are specific standards, guidelines, and practices that provide benchmarks and methods for achieving the control goals (i.e., outcomes) found in the subcategories. IV. The five core functions (listed here) may be performed periodically or continuously to address evolving cybersecurity risks. A. Identify—Develop the foundational understanding to manage organizational cybersecurity risk by identifying and assessing organizational systems, assets, data, and capabilities. 1. Identification activities include understanding the business context, the resources that support critical functions, and the related cybersecurity risks. Examples of outcome categories within this function include: asset management, business environment assessment, governance assessment, risk assessment, and risk management strategy. B. 
Protect—Develop and implement controls to ensure delivery of critical infrastructure services. 1. The protect function enables preventing, detecting, and correcting cybersecurity events. Examples of outcome categories within this function include: access control, awareness and training, data security, information protection processes and procedures, maintenance, and protective technology. C. Detect—Develop and implement controls to identify cybersecurity incidents. This topic is covered in the "Computer Crime, Attack Methods, and Cyber-Incident Response" lesson. Examples of outcome categories within this function include: anomalies and events, security continuous monitoring, and detection processes. D. Respond—Develop and implement controls to respond to detected cybersecurity events. Examples of outcome categories within this function include: response planning, communications, analysis, mitigation, and security improvements. E. Recover—Develop and implement controls for building resilience and restoring capabilities or services impaired due to a cybersecurity event. Examples of outcome categories within this function include: recovery planning, improvements, and communications. V. Implementation Tiers A. Implementation tiers identify the degree of control that an organization desires to apply to cybersecurity risk. The tiers range from partial (Tier 1) to adaptive (Tier 4) and describe an increasing degree of rigor and sophistication in cybersecurity risk management and integration with an organization's overall risk management practices. 1. Organizations should determine their desired tier, consistent with organizational goals, feasibility, and cyber risk appetite. Given rapidly changing cyber risks, the assessment of tiers requires frequent attention. B. The four tier definitions are as follows: 1. Tier 1: Partial a. Risk Management—Organizational cybersecurity risk management practices are informal. Risk is managed in an ad hoc, reactive manner. 
Prioritization of cybersecurity activities may not be directly informed by organizational risk objectives, the threat environment, or business requirements. b. Integrated Risk Management Program—Limited awareness of cybersecurity risks with no organization-wide approach to managing cybersecurity risk. Cybersecurity risk management occurs irregularly on a case-by-case basis. Organizational sharing of cybersecurity information is limited. c. External Participation—Organization has weak or nonexistent processes to coordinate and collaborate with other entities. 2. Tier 2: Risk Informed a. Risk Management—Management approves risk management practices when needed but not as a part of an organizational-wide policy. Prioritization of cybersecurity activities is informed by organizational risk objectives, the threat environment, or business requirements. b. Integrated Risk Management Program— While there is some awareness of organizational cybersecurity risk, there is no established, organization-wide approach to managing cybersecurity risk. Risk-informed, management-approved processes and procedures are defined and implemented, and staff have adequate resources for cybersecurity duties. Organizational cybersecurity information sharing is informal and as needed. c. External Participation—The organization assesses and understands its cybersecurity roles and risks but has not formalized its capabilities to share information externally. 3. Tier 3: Repeatable a. Risk Management Process—The organization's risk management practices are formally approved as policy. Organizational cybersecurity practices are regularly updated based on the application of risk management processes to changes in business requirements and changing threats and evolving technologies. b. Integrated Risk Management Program—Organization-wide management of cybersecurity risk exists. Management has risk-informed policies, processes, and procedures that are defined, implemented, and regularly reviewed. 
Consistent, effective methods respond to changes in risk. Personnel possess the knowledge and skills to perform their appointed roles and responsibilities. c. External Participation—The organization understands its dependencies and communicates with cybersecurity partners to enable collaboration and risk-based management in response to incidents. 4. Tier 4: Adaptive a. Risk Management Process—The organization adapts its cybersecurity practices based on experience and predictive indicators derived from cybersecurity activities. Continuous improvement processes include advanced cybersecurity technologies and practices. The organization actively adapts to a changing cybersecurity landscape and responds to evolving and sophisticated threats in a timely manner. b. Integrated Risk Management Program—An organization-wide approach to managing cybersecurity risk uses risk-informed policies, processes, and procedures to address cybersecurity events. Cybersecurity risk management is part of the organizational culture and evolves from an awareness of previous activities, information shared by other sources, and continuous awareness of activities on its systems and networks. c. External Participation—The organization manages risk and actively shares information with partners to ensure that accurate, current information is shared to improve collective cybersecurity before a cybersecurity event occurs. VII. Recommended Framework Applications A. Review and Assess Cybersecurity Practices. The organization asks, "How are we doing?" with respect to cybersecurity and cyber risk, including comparing current cybersecurity activities with those in the core. This review does not replace formal, organization-wide risk management (e.g., guided by COSO). This assessment may reveal a need to strengthen some cybersecurity practices and scale back others. This reprioritizing and repurposing of resources should reduce prioritized cyber risks. B.
Establish or Improve a Cybersecurity Program. The following steps illustrate use of the framework to create a new cybersecurity program or improve an existing program. Organizations may repeat these steps as needed. 1. Prioritize risks and determine scope. After identifying its mission, objectives, and high-level priorities, the organization makes strategic decisions regarding the scope and purpose of cybersecurity systems and the assets needed to support these objectives. 2. Link objectives to environment. The organization identifies its systems and assets, regulatory requirements, and overall risk approach to support its cybersecurity program. It also identifies threats to, and vulnerabilities of, those systems and assets. 3. Create current profile. Develop a current profile by indicating which category and subcategory outcomes from the framework core are currently achieved. 4. Conduct risk assessment. Guided by the overall risk management process or a previous risk assessment, analyze the operational environment to determine the likelihood of a cybersecurity event and its potential impact. Monitor for emerging risks and use threat and vulnerability information to understand risk likelihood and impact. 5. Create a target profile. Assess framework categories and subcategories to determine desired cybersecurity outcomes. Consider additional categories and subcategories to account for unique organizational risks, as needed (e.g., financial fraud risk in financial institutions). Consider influences and requirements of external stakeholders, such as sector entities, customers, and business partners. 6. Determine, analyze, and prioritize gaps. Compare the current and target profiles to determine gaps. Create a prioritized action plan to address gaps. Determine resources necessary to address gaps. 7. Implement action plan. Determine and implement actions to address gaps. C. Communicate Cybersecurity Requirements to Stakeholders 1.
Use the framework's common language to communicate requirements among interdependent stakeholders for the delivery of essential critical infrastructure services. Examples include:
- Create a target profile to share cybersecurity risk management requirements with an external party (e.g., a cloud or Internet service provider, an external auditor, or a regulator).
- Determine the current cybersecurity state to report results as a part of a control review.
- Use a target profile to convey required categories and subcategories to an external partner.
- Within a critical infrastructure sector (e.g., the financial services industry), create a target profile to share as a baseline profile. Use this baseline profile to build tailored target profiles that are customized to specific organizational members' cybersecurity risks and goals.
D. Identify Opportunities to Adapt or Apply New or Revised References 1. Use the framework to identify opportunities for new or revised standards, guidelines, or practices where additional references would help address emerging risks. 2. For example, an organization implementing a given subcategory, or developing a new subcategory, might discover that there are few relevant informative references for an activity. To address that need, the organization might collaborate with technology leaders or standards bodies to draft, develop, and coordinate standards, guidelines, or practices. E. Protect Privacy and Civil Liberties 1. Privacy and civil liberty risks arise when personal information is used, collected, processed, maintained, or disclosed in connection with an organization's cybersecurity activities. Examples of activities with potential privacy or civil liberty risks include: a. Cybersecurity activities that result in the over-collection or over-retention of personal information (e.g., phone records or medical information) b. Disclosure or use of personal information unrelated to cybersecurity activities c.
Cybersecurity activities that result in denial of service or other similar potentially adverse impacts (e.g., when cybersecurity results in service outages at an airport or secured government building) 2. Processes for addressing privacy and civil liberty risks a. Governance of cybersecurity risk i. The organization's assessment of cybersecurity risk and responses should consider privacy implications. ii. Individuals with cybersecurity-related privacy responsibilities report to appropriate management and receive appropriate training (e.g., in generally accepted privacy principles [GAPP]). iii. Organizational processes support compliance of cybersecurity activities with applicable privacy laws, regulations, and constitutional requirements. iv. The organization periodically reassesses the privacy implications of its cybersecurity measures and controls. b. Processes for identifying and authorizing individuals to access organizational assets and systems i. Identify and address the privacy implications of data access when such access includes collecting, disclosing, or using personal information. c. Awareness and training measures i. Cybersecurity workforce training includes training in organizational privacy policies and relevant privacy regulations. ii. The organization informs service providers of the organization's privacy policies and monitors service providers for compliance with these policies. d. Anomalous activity detection and system and assets monitoring i. The organization conducts privacy reviews of anomalous activity detection and cybersecurity monitoring. e. Response activities, including information sharing or other mitigation efforts i. The organization assesses and addresses whether, when, how, and the extent to which personal information is shared outside the organization as part of cybersecurity assessment and monitoring. ii. The organization conducts privacy reviews of cybersecurity initiatives.
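The gap analysis in step 6 of the "Establish or Improve a Cybersecurity Program" steps above can be sketched in code. This is a minimal illustration only: the subcategory names, the 1-4 maturity scores (echoing the implementation tiers), and the `prioritize_gaps` function are hypothetical, not part of the NIST framework.

```python
# Hypothetical sketch of step 6: compare current and target profiles,
# then build a prioritized list of gaps. All names/scores are illustrative.

current_profile = {
    "Access Control": 2,       # assumed maturity score achieved today (1-4)
    "Data Security": 1,
    "Anomaly Detection": 3,
}

target_profile = {
    "Access Control": 3,       # assumed desired outcome levels
    "Data Security": 4,
    "Anomaly Detection": 3,
}

def prioritize_gaps(current, target):
    """Return (subcategory, gap) pairs for open gaps, largest gap first."""
    gaps = {k: target[k] - current.get(k, 0) for k in target}
    open_gaps = {k: v for k, v in gaps.items() if v > 0}
    return sorted(open_gaps.items(), key=lambda kv: kv[1], reverse=True)

for subcategory, gap in prioritize_gaps(current_profile, target_profile):
    print(f"{subcategory}: gap of {gap}")
```

The prioritized output would then feed the action plan in step 7 (and the resource estimate), as the text describes.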

Risks and Controls in Computer-Based Accounting Information Systems

Basic Information Processes in Manual and Automated Accounting Systems A. Basic Processes in a Manual AIS 1. Journalize—Record entries. 2. Post—To general ledger. 3. Summarize—Prepare a trial balance. B. Basic Processes in an Automated AIS 1. Input—Record or capture event data in the system (input to storage). 2. Process—Update data storage. 3. Output—Retrieve master data from storage. C. AIS File Organization—The accounting records in an AIS fall into four main categories: 1. Source documents and other data capture records, which may be in manual (e.g., customer order forms, sales invoices, journal vouchers, time cards, etc.) or electronic (e.g., online screen entry, "cookies," auto-fill screens, etc.) form. Where possible, data capture techniques should automate data collection and capture to reduce costs and improve accuracy. 2. Data accumulation records (or journals), such as daily cash receipt summaries, weekly payroll summaries, monthly purchases journals, etc. 3. Subsidiary ledgers (or registers), such as accounts receivable, accounts payable, and the fixed asset register. 4. General ledger and financial statement records. Very few manual accounting systems still exist. However, in all current computerized systems, humans perform some tasks. Hence, manual controls matter (a lot), even in computer-based systems. II. Risks in Computerized Systems A. Risks in Computer-Based Systems—Organizational risks depend on management's "risk appetite" and the organization's activities and environment (see the "Enterprise Risk Management Framework" lessons). The following risks are heightened in computerized, compared with manual, accounting systems: 1. Reliance on faulty systems or programs 2. Unauthorized access to data leading to destruction or wrongful changes, inaccurate recording of transactions, or recording of false or unauthorized transactions 3. Unauthorized changes in master files, systems, or programs 4.
Failure to make necessary changes in systems or programs 5. Inappropriate manual intervention 6. Loss of data B. All organizations using computer-based systems face these risks. Their significance and the degree of control necessary to mitigate them vary across organizations. III. Comparison of Risks in Manual versus Computer-Based Transaction Processing Systems—Although the objectives of controls in manual and computer-based systems are the same, the risks present in the two systems differ; consequently, the control procedures necessary to mitigate these risks also differ. Some of the implications of manual versus computerized systems for internal control are summarized below. A. Segregation of Duties—A fundamental control in manual systems, the segregation of duties, is discussed in the "Risk Management" lesson. In a computerized environment, transaction processing often results in the combination of functions that are normally separated in a manual environment. For example, when cash receipts are processed by a cashier, the cash deposit, the cash receipts journal, and the A/R subsidiary ledger are (usually) all updated by a single entry. In a manual environment, at least two of these functions would normally be segregated. 1. In these instances, a well-designed computer system provides a compensating control. a. Continuing with the cash receipts example: in a manual system, when the same person (1) records the cash receipt, (2) prepares the bank deposit, and (3) updates the customer's account in the accounts receivable ledger, lapping (i.e., posting Customer A's payment to Customer B's account to cover up the earlier theft of Customer B's payment) is possible. In an automated system, the computer program prevents this fraud by ensuring that the same customer is identified with the cash receipt, the bank deposit, and the accounts receivable posting. b.
As is also discussed in the "Risk Management" lesson, segregation of duties (SoD) software can help identify and resolve segregation-of-duties conflicts. B. Disappearing Audit Trail—Manual systems depend heavily on a paper audit trail to ensure that transactions are properly authorized and that all transactions are processed. Physical (paper) audit trails are substantially reduced in a computerized environment, particularly in online, real-time systems. (In many batch systems, source documents still exist and provide an excellent paper audit trail.) 1. Electronic audit trails—Audit trails are built into better accounting information systems software. These are created by maintaining a file of all of the transactions processed by the system (a transaction log file), including the username of the individual who processed each transaction; when properly maintained, electronic audit trails are as effective as paper-based audit trails. C. Uniform Transaction Processing—Computer programs are uniformly executed algorithms—which is not the case with less-reliable humans. Compared with a manual system, processing consistency increases in a computerized environment. Consequently, "clerical" errors (e.g., human arithmetic errors, missed postings) are virtually eliminated. 1. In a computerized environment, however, there is increased opportunity for "systemic" errors, such as errors in programming logic. For example, if a programmer inadvertently entered a sales tax rate of 14% instead of 1.4%, all sales transactions would be affected by the error. Proper controls over program development and implementation help prevent these types of errors. D. Computer-Initiated Transactions—Many computerized systems gain efficiency by automatically generating transactions when specified conditions occur. For example, the system may automatically generate a purchase order for a product when the quantity on hand falls below the reorder point. 1.
Automated transactions are not subject to the same types of authorization found in manual transactions and may not be as well documented. Automated transactions should be regularly reported and reviewed. Care should be taken to identify transactions that are more frequent or in larger amounts than a predetermined standard. E. Potential for Increased Errors and Irregularities—Several characteristics of computerized processing act to increase the likelihood that fraud may occur and remain undetected for long periods. 1. Opportunity for remote access to data in networked environments increases the likelihood of unauthorized access. 2. Concentration of information in computerized systems means that, if system security is breached, the potential for damage is much greater than in manual systems. This risk is greater in cloud-based systems. 3. Decreased human involvement in transaction processing results in decreased opportunities for observation. 4. Errors or fraud may occur in the design or maintenance of application programs. F. Potential for Increased Management Review—Computer-based systems increase the availability of raw data and afford more opportunities to perform analytical reviews and produce management reports. Audit procedures are frequently built into the application programs themselves (embedded audit modules) and provide for continuous monitoring of transactions. 1. The opportunities for increased reporting and review of processing statistics can mitigate the additional risks associated with computerized processing.
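The lapping-prevention control in the segregation-of-duties discussion above can be sketched as follows. This is a minimal illustration under assumed data structures: the customer IDs, the ledger layout, and the `record_cash_receipt` function are hypothetical, not taken from any particular AIS.

```python
# Illustrative sketch of the automated compensating control described above:
# a single entry applies the SAME customer ID to the cash receipts journal,
# the bank deposit detail, and the A/R subsidiary ledger, so a clerk cannot
# post Customer A's payment to Customer B's account (lapping).

accounts_receivable = {"CUST-A": 500.00, "CUST-B": 500.00}  # assumed balances
cash_receipts_journal = []
bank_deposit_detail = []

def record_cash_receipt(customer_id, amount):
    """Post one cash receipt to all three records in a single entry."""
    if customer_id not in accounts_receivable:
        raise ValueError(f"Unknown customer: {customer_id}")
    # One entry updates all three records with one customer ID; there is no
    # separate manual step where a different account could be substituted.
    cash_receipts_journal.append({"customer": customer_id, "amount": amount})
    bank_deposit_detail.append({"customer": customer_id, "amount": amount})
    accounts_receivable[customer_id] -= amount

record_cash_receipt("CUST-B", 500.00)
```

Because the three updates come from one entry keyed to one customer, the receipt, the deposit, and the A/R posting cannot disagree, which is exactly the consistency check the text attributes to a well-designed computer system.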
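The electronic audit trail described above (a transaction log file that records the username of whoever processed each transaction) might look like the following sketch. Field names and the `process_transaction` function are illustrative, not from any specific accounting package.

```python
# Minimal sketch of an electronic audit trail: every processed transaction
# is appended to a log with a timestamp and the processing user's username.
# All field names are assumed for illustration.

import json
from datetime import datetime, timezone

transaction_log = []  # stands in for the transaction log file

def process_transaction(username, account, amount):
    """Record a transaction and append an audit-trail entry for it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "username": username,   # who processed the transaction
        "account": account,
        "amount": amount,
    }
    transaction_log.append(json.dumps(entry))

process_transaction("jdoe", "4010-Sales", 250.00)
```

When such a log is complete and protected from alteration, it serves the same who-did-what-and-when purpose as a paper trail, as the lesson notes.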
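A computer-initiated transaction of the kind described above, with the recommended review control, can be sketched as follows: the system auto-generates a purchase order when quantity on hand falls below the reorder point and flags orders larger than a predetermined standard for management review. The `auto_reorder` function and the threshold value are hypothetical.

```python
# Hypothetical sketch: auto-generate a purchase order at the reorder point,
# flagging larger-than-standard orders for review (per the control above).

REVIEW_THRESHOLD = 1000  # assumed standard; larger orders need human review

def auto_reorder(on_hand, reorder_point, order_quantity):
    """Return a purchase-order dict, or None if no reorder is needed."""
    if on_hand >= reorder_point:
        return None  # condition not met; no transaction is initiated
    return {
        "quantity": order_quantity,
        "needs_review": order_quantity > REVIEW_THRESHOLD,
    }

po = auto_reorder(on_hand=40, reorder_point=100, order_quantity=1500)
```

Reporting every generated order, and routing the flagged ones to a reviewer, supplies the authorization and documentation that automated transactions would otherwise lack.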

