CPA Exam (BEC B4)

Enterprise Resource Planning (ERP) Systems

Cross-functional enterprise system that integrates/automates bus processes/systems that work together in mfg, logistics, distribution, accting, finance, and HR functions. Comprises a no of modules that can function indep or as integrated system to allow data/info to be shared among deptmts/divisions of large bus. ERP software manages the various functions of bus related to mfg. Doesn't usually have to do w/ planning, but is a back-office system from customer order to fulfillment. Purposes/obj of ERP: -store info in central repository so data may be entered once, and then accessed/used by all deptmts. -act as framework for integrating and improving org's ability to monitor/track sales, exp, customer service, dist, etc. -provide vital cross-functional info quickly to mgmt across org to assist in decision-making (EIS).

Electronic Commerce (E-Commerce) v. Electronic Business (E-Business)

E-commerce is electronic completion of exchange (buying/selling) trans. Can use private network or internet. E-business is a more general term than e-commerce and refers to the use of IT, especially networking/comm tech, to perform bus processes in an electronic form. Exchange of electronic info may or may not relate to purch/sale of goods/services.

Comparison of EDI and E-Commerce

EDI: driven by private VAN network, slower speed (batch), more secure, and more expensive. E-commerce: driven by public internet network, faster speed (OLRT), less secure, less expensive.

Hypertext Markup Language (HTML) and Hypertext Transfer Protocol (HTTP)

HTML: tag-based formatting language for web pages. Provides means to describe the structure of text-based info in a doc and to present that info in a web page by using the tags in the text. HTTP: communications protocol used to transfer web pages on WWW. HTTPS is secure version of HTTP that uses Secure Sockets Layer (SSL) for its security.
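As a minimal sketch of what "tag-based" means, Python's stdlib `html.parser` can walk the tags in a snippet and recover the structure they describe (the page content here is invented for illustration):

```python
from html.parser import HTMLParser

# Collect each start tag encountered, showing how tags carry the
# structure of the text-based info in the document.
class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

page = "<html><body><h1>Becker</h1><p>CPA review</p></body></html>"
collector = TagCollector()
collector.feed(page)
print(collector.tags)  # ['html', 'body', 'h1', 'p']
```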

End-User Computing

Hands-on use of computers by end users. Functional users do own info processing activities w/ hardware/software/prof resources provided by org. Common ex is info retrieval from org's database using query language feature of database mgmt systems (DBMS). Data can be extracted and manipulated by end user w/ spreadsheet or analytical tools.

Referential Integrity

In a relational database, this prevents deleting key values (e.g., a customer ID) while records in related tables still reference them.
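A quick sketch using Python's stdlib `sqlite3` (table and column names are invented): with foreign keys enforced, deleting a customer that an invoice still references is rejected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE invoices (id INTEGER PRIMARY KEY, "
             "cust_id INTEGER REFERENCES customers(id))")
conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
conn.execute("INSERT INTO invoices VALUES (100, 1)")

try:
    conn.execute("DELETE FROM customers WHERE id = 1")
except sqlite3.IntegrityError as e:
    # Referential integrity blocks deletion of the referenced key value.
    print("delete blocked:", e)
```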

Customer Relationship Management (CRM) Systems

Provide sales force automation and customer services to manage customer relationships. Record/manage customer contacts, manage salespeople, forecast sales, manage sales leads, provide/manage online quotes and product specs/pricing, and analyze sales data. Obj is to incr customer satisfaction and incr rev/profitability. Attempts to do this by appearing to mkt to each customer individually. Assumes 20% of customers generate 80% of sales and it's 5-10x more expensive to acq new customer than get repeat bus from old. Also attempts to reduce sales/customer support costs and to identify the best customers and provide them w/ incr levels of service. May drop worst. 2 categories of CRM: 1. Analytical: creates/exploits knowledge of co's current/future customers to drive bus decisions. 2. Operational: automation of customer contacts/contact points.

IT policies

Represent mgmt's formal notification to employees regarding entity's objs. Policies around system design should promote communication. To implement, auth and responsibility are assigned through formal job descriptions, employee training, code of conduct, policy and procedures manual, op plans, schedules, and budgets.

Network Topologies

The topology of a network defines the physical config of the devices and the cables that connect them. Topologies that have been employed for LANs/WANs are bus, ring, star, and tree. 1. Bus networks: use a common backbone to connect all of the devices on the network. Signals transmitted over the backbone in the form of messages transmitted to/received by all of the devices (in both directions from the transmitting device). Only the intended device accepts/processes the message, the others ignore it. If the backbone is down, entire network is; a single device going down doesn't disable the others. Only one device can transmit at a time, others must wait until backbone is free. If two transmit at once, the two messages will collide and both will need to be transmitted again. Ex: Ethernet. 2. Ring networks: formed in a ring w/ each device connected to 2 others. Signals transmitted as messages passed from device to device sequentially around the ring. Only intended device accepts/processes message. If any device is down, entire network is. 3. Star networks: formed in a star w/ each device connected to central hub. Hub controls the transmission. If any device in a star is down, only that device is down. If hub is down, entire network is down. Ex: telephone devices connected to a PBX as well as many home networks. 4. Tree networks: connect multiple stars into a bus. Each hub is connected to the bus and handles the transmission for its star. Definitions: -backbone: the part of the network that carries the major portion of the traffic. -bandwidth: measure of comm medium's info-carrying capacity. A no of diff definitions exist depending on context. -ethernet: large collection of frame-based networking technologies for LANs. Incorps a no of wiring/speed stds for the physical layer and a common addressing and message format. -file transfer protocol (FTP): network protocol used to exchange files. -IPv4: current version of IP w/ 32-bit addresses. IPv6 is newer version w/ 128-bit addresses.
W/o IPv6, internet will run out of network addresses in <10 yrs. IPv4 only allows for ~4b network addresses (2^32) while IPv6 allows for 2^128. -simple mail transfer protocol (SMTP): protocol for transmitting text-based email. Provides outbound, not inbound, mail transport. -simple network mgmt protocol (SNMP): protocol for monitoring a network. In SNMP, a piece of software called an agent runs on each of the devices being monitored (called a managed object) and reports info to network mgmt/monitoring system. -single log-in: also called single sign-on. System that allows a user who utilizes several diff systems the ability to log in to them all w/ 1 user ID and PW. Attempts to combat proliferation of diff user IDs and PWs that may occur in an org w/ multiple security systems, possibly for multiple hardware platforms (each PW w/ diff rules and expiration dates). -storage area network (SAN): network that attaches remote storage devices (disk arrays, tape libraries, CD arrays) to servers in such a way that they appear to the OS to be local devices. Used mostly in large orgs. -Voice over Internet Protocol (VoIP): routing of voice convos over internet. Multiprotocol label switching is a protocol used on network backbones to label diff types of IP traffic for prioritization; voice traffic needs some priority for reasonable convos.
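The IPv4/IPv6 address-space figures quoted above fall out directly from the address widths; a one-liner check:

```python
# 32-bit vs 128-bit address spaces.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128
print(f"IPv4: {ipv4_space:,}")    # 4,294,967,296 (~4 billion)
print(f"IPv6: {ipv6_space:.3e}")  # roughly 3.4e+38
```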

Transmission Control Protocol (TCP)

Transmission protocol of internet protocol suite. Transport layer protocol. Reliable and connection-oriented. A protocol is a set of rules req for e-communications to take place.
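A minimal loopback sketch of TCP's connection-oriented behavior, using Python's stdlib `socket` (the echo server here is hypothetical): the client must `connect()` before any data flows, and delivery is reliable and ordered.

```python
import socket
import threading

def echo_once(server):
    conn, _ = server.accept()          # wait for one connection
    with conn:
        conn.sendall(conn.recv(1024))  # echo the bytes back

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_once, args=(server,)).start()

client = socket.create_connection(server.getsockname())  # TCP handshake
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)  # b'hello'
client.close()
server.close()
```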

Domain Name, Domain Name System (DNS), and Domain Name Warehousing

Domain name: Name incl 1+ internet protocol (IP) addresses (numerical label assigned to each device in a network). In a web address, domain name is becker.com. .com is top-level domain name for commercial orgs. Sim are .gov, .edu, .org, .mil. Becker is second-level domain name and www indicates that comp w/ that address is a web server. Orgs w/ second-level domain names have to have DNS server. 3rd level name is individual host (Olinto). Entire address is fully qualified domain name. Anything after .com is a file name. DNS root server administers top-level domain names. Domain Name System (DNS): System of domain names employed by internet. Internet is based on IP addresses, not domain names, and each web server req a domain name server to translate domain names into IP addresses. These are basically phone books. Domain name warehousing: Practice of obtaining control of domain names w/ intent of warehousing (owning w/o using). May do this to prevent others from acq domain names sim to theirs and directing traffic away from their legitimate site. Cos can obtain domains that are misspellings or typos of their name and link them to the right site.
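The levels described above can be pulled apart mechanically; a sketch splitting a fully qualified domain name into its labels:

```python
# Rightmost label is the top-level domain; labels get more specific
# moving left (second-level domain, then host).
fqdn = "www.becker.com"
host, second_level, top_level = fqdn.split(".")
print(top_level)     # 'com'    -- top-level domain
print(second_level)  # 'becker' -- second-level domain
print(host)          # 'www'    -- host (here, a web server)
```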

Systems Development Life Cycle (SDLC)

Framework for planning/controlling detailed activities assoc w/ systems development. Waterfall approach is most popular- sequential steps of analysis, planning, design, and implementation. Flow in a single downward direction like a waterfall. Simplifies task scheduling bc no overlapping steps. One step must be completed before next starts. Alternative is prototyping - approximation of final system is built, tested, and reworked as needed until acceptable prototype achieved. Complete system is developed from prototype. Remember Steps in SDLC as *A DITTO*: 1. *A*: Systems *A*nalysis: -define nature/scope of project, identify strengths/weaknesses. -conduct in-depth study of proposed system to determine tech/economic feasibility. -identify the info needs of system users/managers. -document those needs. They're used to develop and document systems reqmts, which are used to select/develop new system. -prep a report summarizing work done and submit to mgmt. 2. *D*: Conceptual *D*esign: where co decides how to meet user needs. Steps: -identify/eval approp design alternatives. Could buy software, develop it in-house, or outsource systems development. -develop detailed specs outlining what system is to accomplish and how it is to be controlled. 3. *D*: Physical *D*esign: conceptual design used to develop detailed specs used to code/test comp programs. Steps: -design output docs. By beginning process w/ identifying outputs, less likely to overlook elements/fields needed in database. -design databases as input docs. -write comp programs. -create files/databases. -develop procs. -develop controls. -identify/acq necessary hardware components. 4. *I*: *I*mplementation and conversion: translates plan into action and then can be used to monitor project. Steps: -installation of new hardware/software. -hiring/relocation of employees to op system. -testing/modifying new processing procs. -establishing/documenting stds and controls for new system. 
-converting to new system and dismantiling old. -fine-tuning system after it's up/running. 5. *T*: *T*raining: programs: -hardware/software skill training. -orientation to new policies/procs. -variety of training options like vendor-based programs, self-study manuals, computer-assisted instruction, video presentations, etc. 6. *T*: *T*esting: -tests of effectiveness of docs/reports, user input, op control procs, processing procs, and comp programs. -tests of capacity limits and backup and recovery procs. 7. *O*: *O*perations and maintenance: system S/B periodically reviewed during its life. Modifications made as probs/new needs arise. If major modification/new system needed, SDLC starts over. This phase and the cycle overall may incl planning, managing behavioral reactions to change, and assessing ongoing feasibility of the project.

COBIT IT Resources

IT uses clearly defined processes to deploy people skills and tech infrastructure to run automated bus applications and leverage bus info. Resources/processes = enterprise architecture. 1. Applications: automated user systems and manual procs that process info. Payroll systems. 2. Information: data in its broadest form at all stages of processing. Includes raw input data, processed data (at various stages), and output info. 3. Infrastructure: tech/facilities that enable data processing. Hardware, op systems, networking, and physical plant. 4. People: IT professionals who provide IT functions like planning, organizing, acq, implementing, delivering, supporting, monitoring, and evaluating info systems/services. Internal and outsourced resources.

Managing Passwords

Passwords (PW) are designed to protect access to secure sites and info. First rule in PW policy is every acct must have one. Strong PW mgmt policy addresses these characteristics: 1. PW length: longer more effective. Usually min 7-8 chars. 2. PW complexity: more complex more effective. Generally need 3 of these 4 chars: -uppercase chars -lowercase chars -numeric chars -ASCII chars (symbols) 3. PW age: no true std, but should change frequently to be effective. Every 90 days good policy, more if admin. 4. PW reuse: no true std, but PWs shouldn't be reused until sig amt of time has passed. Prevents alternation btw PWs.
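The length/complexity rules above can be sketched as a checker (the function name and defaults are illustrative, not a std):

```python
import string

# Policy: minimum length plus at least 3 of the 4 character classes
# (uppercase, lowercase, numeric, symbol).
def meets_policy(pw, min_len=8, min_classes=3):
    classes = [
        any(c in string.ascii_uppercase for c in pw),
        any(c in string.ascii_lowercase for c in pw),
        any(c in string.digits for c in pw),
        any(c in string.punctuation for c in pw),
    ]
    return len(pw) >= min_len and sum(classes) >= min_classes

print(meets_policy("Secret9!"))  # True: 8 chars, all 4 classes
print(meets_policy("password"))  # False: only lowercase
```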

Data Processing Cycle Step 4: Information Output

Trans are processed and may be summarized and used to produce outputs. Forms: -documents: operational docs. Checks, POs, receipts, etc. -reports: either internal (sales analysis) or external (F/S). Info periodically produced to satisfy needs of audiences. Reports are static and must be reevaluated periodically for relevance. Usually incl summarized info or data sorted/grouped to meet user needs. -query: request for specific data. User enters query, system gives response according to user-specified parameters. Can be designed/saved to produce reports users need on reg basis. Common report types/topics are budgets, production/delivery schedules, and performance reports.

Role of Technology Systems in Control Monitoring: Types of Controls

Two major categories of IT controls are general and application controls. 1. General controls: ensure org's control environment is stable and well-managed. Include: -systems development stds. -security mgmt controls. -change mgmt procedures. -software acq, development, ops, and maintenance controls. 2. Application controls: prevent, detect, and correct errors/fraud and are application-specific. Provide reasonable assurance abt system accuracy, completeness, validity, and auth. Input controls: garbage in, garbage out. Regulate integrity of input. -data validation at field level (edit checks, meaningful error messages, input masks). -prenumbering forms to verify all input is accted for and no duplicate entries. -well-defined source data preparation procedures. Used to collect/prepare source docs. Source docs don't always exist if data entered automatically w/ web-based app or doc scanning. Processing controls: -data matching: matching 2+ items of data before taking action improves trans processing (match vendor invoice, PO, and receiving report before paying). -file labels: ensures correct/most current files updated. External labels readable by humans, internal labels machine-readable. Both S/B used. External labels more easily altered so internal more secure. Important internal labels include (a) header record- located at beginning of each file and has file name, expiration date, and other ID data. (b) trailer record- located at end of file and has batch totals calced during input. -recalc of batch totals: or hash totals. Comparison of input/output amts ensures correct volume of trans processed. Hash totals can confirm correct source docs included. If diff invoice w/ same amt used, batch total would agree but hash wouldn't. -cross-footing and zero-balance tests: cross-footing compares the sum of row totals to the sum of column totals to verify identical results. Helps ensure accuracy. Zero-balance test req use of control accts. Nonzero balance at end indicates processing error.
-write protection mechanisms: guard against accidental writing over or erasing of data files on magnetic media. Must remember that these can be easily removed. -database processing integrity procedures: database systems use database admins, data dictionaries, and concurrent update controls to ensure processing integrity. (a) admin establishes/enforces procs for accessing/updating database. (b) data dictionary ensures data items defined and used consistently. (c) concurrent update controls protect records from errors that occur when 2+ users try to update same record at once. Locks out one user until system has processed other one's update. Output controls: -user review: examination by users of output for reasonableness, completeness, and verification that output is provided to intended recipient. -reconciliation procedures: reconciliation of indiv trans and other system updates to control reports, file status, or update reports (reconcile input control totals to output control totals). -external data reconciliation: reconciliation of database totals w/ data maintained outside of system (no of employees in payroll file to HR total). -output encryption: authenticity/integrity of data outputs S/B protected during data transmission. Do this w/ encryption techniques to reduce the possibility of interception. Also do this by designing controls to minimize risk of transmission errors. Ex: (1) when receiving unit detects data transmission error, requests sending unit to retransmit. Usually automatic. (2) parity checking and message acknowledgement techniques are data transmission controls. Parity checking is summing the bits in a byte and adding a 0 or 1 to make the byte even/odd for even/odd parity. If message arrives and a bit has changed in transmission, error recog and message resent. Plain text is unencrypted, goes through encryption algorithm (which has a key) to produce encrypted cipher text.
Ciphertext travels over the internet to the recipient, who decrypts it back to plain text using the key.
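The parity-checking control above can be sketched in a few lines: count the 1-bits in a byte and append a parity bit so the total is even; a single bit flipped in transit changes the count, so the receiver detects the error.

```python
# Even parity: the parity bit is 1 only if the byte has an odd number
# of 1-bits, making the overall 1-bit count even.
def even_parity_bit(byte):
    return bin(byte).count("1") % 2

data = 0b1011001           # four 1-bits
p = even_parity_bit(data)  # 0: count is already even
print(p)

corrupted = data ^ 0b0000001  # one bit flipped in transit
print(even_parity_bit(corrupted) == p)  # False: parity mismatch detected
```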

Data Storage Definitions

-Bit: binary digit (0 or 1) w/ which all comp data is stored. -Byte: group of 8 bits that can rep a # or letter w/ specific form dependent on what internal rep format is being used. Also called chars. 1 KB = 1000 bytes. 1 MB = 1m bytes. 1 GB = 1b bytes. 1 TB = 1t bytes. -field: group of bytes in which a specific data element like employee name stored. -record: group of fields that reps data being stored for a particular entity like customer. -file: collection of related records, often arranged in a sequence. -traditional file storage: data stored in files w/ formats/org specific to app system. Resulted in same data stored in multiple files w/ diff names/formats, so diff values for same data. If size/format of data element changed, programs utilizing it often had to be changed (program-data dependence). Addressed by databases. -database: integrated collection of data records/files. Stored data. Centralizes it and minimizes redundant data. Structure of data in database often provides data relationships that start to change it into info. -DBMS: tool. Sep comp program allowing org to create new databases and use/work w/ the data in databases after they've been created. Allows maintenance to be performed on database after it's in op. Maintenance may be addition/subtr of data elements or changes to database structure. DBMS usually incl data dictionary/repository in which indiv data elements defined. Ex: Access and Oracle. -relational technology: early databases orged records in hierarchies/trees (like org chart) implemented by indexes/linked lists. Now most databases are relational, so data is in 2D tables that are related to one another w/ keys. Initial relational DBMS were slower than hierarchies, but had easier definition/access. Relational databases often have ad hoc report writers. Normalization is the process of sep data into logical tables. Often, data modeling process used. Before relational database can be designed, normalization must occur.
-object-oriented databases: newer type of database. Conventional relational/non-relational databases were designed for homogeneous data that can be structured into predefined fields/records orged in rows and tables. Comments, drawings, images, voice, and video don't fit this def. Obj-oriented databases store them. More flexible, but slower than relational.
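A sketch of "2D tables related by keys" (table contents invented): the supplier table's primary key appears in the parts table as a foreign key, and a join matches rows on that key the way a relational DBMS would.

```python
suppliers = [{"sup_id": 1, "name": "Acme"},
             {"sup_id": 2, "name": "Globex"}]
parts = [{"part_no": "P100", "sup_id": 1},
         {"part_no": "P200", "sup_id": 1},
         {"part_no": "P300", "sup_id": 2}]

# Join parts to suppliers on the shared key (sup_id).
joined = [{**p, "supplier": s["name"]}
          for p in parts
          for s in suppliers
          if p["sup_id"] == s["sup_id"]]
print(len(joined))            # 3: every part matched a supplier
print(joined[0]["supplier"])  # 'Acme'
```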

Hardware Terminology

1. Central Processing Unit (CPU): control center of computer system. Components: -processor: interprets program instructions, coordinates input/output and storage devices (control unit). Performs math calc (arithmetic logic unit). -primary storage: main memory. Stores program instructions and data until program instr can be executed. For personal computers, this is subdivided into random access memory (RAM), which stores data temporarily while it's processed, and read-only memory (ROM), which perm stores data needed to boot the comp. Virtual memory isn't real memory. Portions of program not being executed are stored on disk as virtual memory pages and retrieved and brought into actual physical memory when needed. 2. Secondary storage devices: hard drives, magnetic disks, flash drives, CDs, optical discs, magnetic tape. Permanently store programs/data. W/ random access devices, data can be accessed directly (faster for indiv records). W/ sequential devices like tapes, data must be read in order (slower for indiv records). Redundant Array of Independent Disks (RAID) often used for disk storage. Combines multiple inexpensive disk drives into array to obtain performance, capacity, and reliability that exceed that of single large disk drive. Can be implemented in hardware/software/combo, but hardware = best performance. 3. Peripherals: devices that transfer data to/from CPU but don't take part in actual data processing. Include input/output devices. Lots of hardware can act as both. -input: supply data to be processed. Keyboards, mice, scanners, magnetic ink character readers (MICR), touch screens, microphones. -output: transfer data from processing unit to various output media. Printers, speakers, cathode ray tubes (monitors), and plotters (graphic printers). 4. Classes of Processors: those usually found in bus environments are mainframes, midrange comps (minicomputers), and personal comps. Supercomps used for specialized processing.
Mainframes include specialized processors for certain specialized functions like input, output, and telecomm, which are relatively slow. -processing power often descr in terms of millions of instructions per second (MIPS), but many other factors determine overall processing power of comp besides processor. Speed of input/output devices can be just as important. -multiprocessing is coordinated processing of programs by >1 processor. Divided into symmetric multiprocessing (one OS controls processing) and parallel processing (each processor has own OS). Multiprogramming is several programs running concurrently on 1 processor, which switches among them. Parallel processing is simultaneous use of >1 comp to execute a program, which first has to be divided into parts that can be sep executed.

Other E-Commerce Technologies

1. Electronic Funds Transfers (EFT): form of e-pmt for banking/retailing industries. Uses variety of tech to transact, process, and verify money transfers and credits. Fed Reserve Fedwire System and automated clearing house (ACH) networks used a lot to reduce time/exp req to process checks/credit trans. -EFT often provided by 3rd party vendor who acts as intermediary btw co and banking system. Might accept trans from bus and perform all of the translation services. Insured and bonded. -EFT security provided through data encryption. -EFT reduces need for manual data entry, so less error. 2. Application Service Providers (ASP): provide access to app programs on rental basis. Allow smaller cos to avoid high cost of owning/maintaining app system by allowing them to pay for what's used only. ASPs own/host software and users access it w/ web browser. ASP is responsible for software updates and will usually provide backup services for users' data. Benefits of ASP are lower costs from hardware/software/people, and greater flexibility. Great for small bus bc no need to hire systems experts for ASP services. Disadv are possible risks to security/privacy of org's data, financial viability or lack thereof, and possible poor support by ASP. Concepts sim to ASP: -IBM has a sim concept in utility computing and e-commerce on demand. -timesharing providers or service bureaus that rented computing power in past, except ASPs rent apps instead of just computer processing. -present-day service bureaus that perform processing outside org (for payroll/HR).

Types of Disaster Recovery

1. Use of a Disaster Recovery Service: contract w/ outside providers for DR services. Diff levels/types of service can be provided, from an empty room to complete facilities across the country where end users can be located. Major emphasis is on hardware and telecomm services. 2. Internal Disaster Recovery: Some orgs w/ rqmt for instantaneous resumption of processing after a disaster (NYSE, banks, brokerage houses) provide own duplicate facilities in sep locations. Data might be mirrored (updated/stored in both locations) and processing can be switched almost instantaneously from one location to other. Duplicate data center and data mirroring is EXPENSIVE and most orgs get cheaper plans. 3. Multiple Data Center Backups: Some orgs w/ multiple data centers plan to use one data center to back up another, assuming enough capacity to process essential apps. Must decide what types of backups to perform. Types: -full backup: longest. Exact copy of entire database. Time-consuming, so most orgs do them weekly and do daily partial backups. There are 2 types of partial backups, below. -incremental backup: shortest. Copy only data items that have changed since last backup. Produces set of incremental backup files, each w/ results of 1 day's trans. Restoration involves first loading the last full backup and then installing each subsequent incremental backup in proper sequence. -differential backup: middle. Copy all changes since last full backup. Each new differential backup file has cumulative effects of all activity since last full backup. Except for first day following full backup, daily differential backups take longer than incremental. Restoration is simpler bc last full backup needs to be supplemented w/ only most recent differential backup instead of a bunch of incrementals. Most orgs make daily partial backups (either incremental or differential).
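The incremental vs differential distinction above can be sketched with last-modified timestamps (file names and times invented): incremental copies what changed since the LAST backup of any kind, differential copies everything changed since the last FULL backup.

```python
def files_to_back_up(mtimes, since):
    # Select files modified after the given backup time.
    return {f for f, t in mtimes.items() if t > since}

mtimes = {"gl.dat": 3, "ap.dat": 1, "ar.dat": 5}  # last-modified times
last_full, last_any_backup = 0, 3

differential = files_to_back_up(mtimes, last_full)        # since last full
incremental = files_to_back_up(mtimes, last_any_backup)   # since last backup
print(sorted(differential))  # ['ap.dat', 'ar.dat', 'gl.dat'] -- cumulative
print(sorted(incremental))   # ['ar.dat'] -- only the newest change
```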

Threats in a Computerized Environment

1. Virus: piece of a comp program that inserts itself into some other program, including op systems, to propagate and cause harm to files/programs. Req host program and can't run independently. 2. Worm: program (and special type of virus) that can run independently and normally propagates itself over a network. Can't attach itself to other programs. 3. Trojan horse: program that appears to have a useful function but has a hidden unintended function that represents a security risk. Doesn't usually replicate itself. 4. Denial-of-service attack: one comp/group of comps bombards another comp w/ a flood of network traffic. Attackers program comps (zombies) to send requests to access co's site all at once. Bc of high volume, web server system crashes. Takes hours-days to recover from these. 5. Phishing: sending fake emails to try to lure people to fake websites where they're asked for info that will allow the phisher to impersonate them. Can mimic co emails. Cos don't request confirmation of usernames/PWs/acct nos, etc. via email. Users should be informed to go directly to co website to determine any changes or find info related to accts.

Development and Management of Security Policies

3-level model can be used to develop comprehensive set of system policies: 1. Security objectives: step 1 is to define security objs. Should consist of series of stmts to descr meaningful actions abt specific resources. S/B based on system functionality or mission rqmts and state security actions to support rqmts. Security objs might relate to confidentiality, data integrity, auth, access, resource protection, and other issues. 2. Operational security: should define manner in which specific data op would remain secure. Op security for data integrity might consider definition of auth/unauth modification: individuals auth to make modifications by job category, org placement, name, etc. 3. Policy implementation: security enforced through combo of technical/traditional mgmt methods. Technical means likely include use of access control tech, but other automated means of enforcing/supporting security policy exist. Tech can be used to block phone users from calling some nos. Intrusion detection software can alert system admins abt suspicious activity or stop it. Personal comps can be configured to prevent booting from external drive. Policies are defined as stmts of mgmt's intent. Docs that support policies: -regulations: laws, rules, regs usually rep govt-imposed restrictions like SOX, HIPAA, etc. -stds and baselines: topic-specific and system-specific docs that descr overall rqmts for security are called, respectively, stds and baselines. -guidelines: provide hints, tips, and best practices in implementation. -procedures: step-by-step instructions on how to perform a specific security activity (config firewall, install OS, etc.).

Major Uses of a DBMS

4 main functions. 1. Database development: Procedure by which database admin uses DBMS to create new, empty database. Once created (elements/structure defined), data can be imported by DBA from other sources like traditional files (data conversion) or input by end users. 2. Database query: Process by which end users get specific data/info from database. End user sets criteria and all data that meet criteria displayed on comp screen or in printable report. End user often must have basic DBMS or database structure knowledge. Poorly written queries can be hazardous to DBMS performance. DBA should monitor database usage and try to discover poorly written queries. Database query most often provided in relational database by structured query language (SQL), which provides ability to select records from indiv tables in database that satisfy certain conditions. Also provides ability to join tables like suppliers, part nos, etc. SQL consists of data definition language (DDL) which defines database, data manipulation language (DML) which queries it, and data control language (DCL). 3. Database maintenance: Updating of DBMS software and revision of database structure to reflect new bus needs. Includes testing the effectiveness/efficiency of database, via built-in diagnostic and maintenance programs. This is called database tuning. Database is running effectively if accurately recording data. Running efficiently if op fast enough. 4. Application development: DBMS allows DBA or comp programmer to use programming language or a series of macros to turn a database into a comp software app. A macro is a series of prerecorded commands that will be executed when certain events occur. Rather than teaching end users how to create queries/reports w/ DBMS, database can be automated and converted into app of user-friendly screens/forms anyone can use.
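A small sketch of DDL vs DML via Python's stdlib `sqlite3` (tables and data invented): the CREATE statements are DDL; the SELECT with a JOIN and a condition is the DML query an end user would run.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# DDL: define the database structure.
conn.execute("CREATE TABLE suppliers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE parts (part_no TEXT, sup_id INTEGER)")
conn.executemany("INSERT INTO suppliers VALUES (?, ?)",
                 [(1, "Acme"), (2, "Globex")])
conn.executemany("INSERT INTO parts VALUES (?, ?)",
                 [("P100", 1), ("P200", 2), ("P300", 2)])

# DML: join suppliers to parts and select records that satisfy a condition.
rows = conn.execute(
    "SELECT s.name, p.part_no FROM suppliers s "
    "JOIN parts p ON p.sup_id = s.id "
    "WHERE s.name = 'Globex' ORDER BY p.part_no"
).fetchall()
print(rows)  # [('Globex', 'P200'), ('Globex', 'P300')]
```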

Types of Policies

4 types that start out at high level and become more specific at lower levels. 1. Program-level policy: used for creating mgmt-sponsored comp security program. At highest level might prescribe need for info security and delegate creation/mgmt of program to a role w/in IT dptmt. Mission stmt for IT security program. 2. Program-framework policy: establishes overall approach to comp security (framework). Adds detail to program by descr elements/org of program and deptmt that will carry out security mission. IT security strategy. 3. Issue-specific policy: address specific issues of concern to org. 4. System-specific policy: focus on policy issues that exist for a specific system.

B2B v B2C

B2C less complex bc of IT infrastructure/supply chain on only one side of trans. B2B trans have more participants in each indiv trans, more complex products, and req order fulfillment to be more certain/predictable. Pmt mechanisms for B2B more complex, and pmts may be negotiated amts and usually involve invoicing. B2C pmts made at POS. Internet trading exchanges (e-marketplaces) are aggregation points tailored to specific mkts bringing buyers/sellers together to exchange goods/services. May be managed by buyers, suppliers, distributors, or content aggregators (3rd parties that pull info together from multiple websites). Digital cash (e-cash) is electronic currency that moves outside normal money channels. Can be used by people who want to make internet purch but don't want to use credit cards (PayPal). B2C has more consumer protection.

Advantages and Disadvantages of a DBMS

Advantages: -reduction of data redundancy/inconsistency: reduced so data only entered once and stored at 1 location. -potential for data sharing: so that existing and newer apps can share same data. -data independence: definition of data/data itself are sep from programs using them, so that data storage structures or access strategies in the database can change w/o affecting the data itself and w/o affecting the programs that process it. DBMS provides interface that allows apps to access the data w/o knowing exactly where it resides. -data standardization: facilitates data interchange btw systems. -improved data security: most DBMS have own security systems, which might supplement external security systems. -expanded data fields: can be expanded w/o adverse effects on app programs. Aspect of data independence. -enhanced info timeliness/effectiveness/availability: increased. Disadvantages: -cost: purch of DBMS and conversion to it. -highly trained personnel needed: DBA technical position, req considerable amt of training/experience in specific DBMS being utilized. -increased chances of breakdowns: w/ common integrated database, incr repercussions of hardware/software breakdowns. Specific precautions must be taken to replicate part/all of the data. -possible obscuring of audit trail: as a result of data movement btw files. -specialized backup and recovery procs req: necessary, especially if databases distributed/replicated.

Wide Area Networks (WANs)

Allow national/international comms. Usually employ nondedicated public comm channels (fiber optic, terrestrial microwave, or satellite) as their comm media. WAN comm services may be provided by value-added networks, internet-based networks, or point-to-point networks (direct private network links normally using leased lines). Value Added Networks (VANs): Privately owned/managed comm networks that provide additional services beyond std data transmission. Often used for EDI. -they provide lots of additional services, incl auto error detection, protocol conversion, and message storing/forwarding. W/ VANs, the various parties trying to comm don't have to use same network protocols; VAN provides translation. -these provide good security bc they're private. -often batch transactions and send them at night when line traffic is lower. This periodic processing can delay data transfer for hours-days. -messages sep by vendor, batched together, and transmitted to their specific destinations. -normally charge a fixed fee + a fee per trans. Can be too expensive for smaller cos. Internet-Based Networks: Use internet protocols and public comm channels to establish network comms. Internet itself is an international network composed of servers around the world that comm w/ each other. There's no govt involvement/control. Internet service providers (ISPs) provide access for indivs to the internet. IBNs used to establish comm among a co's LANs (intranet) and to transmit EDI trans. Adv to submitting EDI trans over internet rather than using a VAN: -indiv trans transmitted immediately and usually reach destination w/in mins. -internet trans costs are lower, so affordable for smaller cos. -relative affordability of internet incr no of potential trading partners. -internet-based networks sometimes called VPNs. The virtual, sim to virtual memory, means networks aren't really private, just look like they are.
Intranets v Extranets: Both use internet protocols and pub comm media rather than proprietary systems (so internet browsers can be used) to create a co-wide network. -intranets: connect geographically sep LANs w/in a co. Cos can use low-cost internet software, like browsers, to build intranet sites such as HR and internal job postings. Intranet more secure than internet bc it has restricted user community and local control. -extranets: permit co suppliers/customers/bus partners to have direct access to the co's network.

Business Process Reengineering (BPR)

Analysis and redesign of bus processes and info systems to get performance improvements. Reduces co to its essential bus processes and reshapes org work practices and info flows to take adv of tech advancements. Simplifies the system, makes it more effective, and improves co's quality/service. Bus process mgmt (BPM) software has been developed to help orgs automate many BPR tasks. Reengineering involves efficient/effective use of latest IT. Obstacles to overcome to successfully complete BPR process: 1. Tradition: old habits die hard, esp w/ org culture. Need to change employee culture/beliefs. 2. Resistance: change resistance. Mgmt must continually support those affected by reassuring and persuading them that the changes will work. 3. Time/cost rqmts: BPR costly and often takes 2+ yrs to complete. 4. Lack of mgmt support: mgmt may be afraid of "big hype, few results" syndrome. Need top mgmt support or little chance BPR will succeed. 5. Skepticism: people may view BPR as traditional systems development in new wrapper w/ fancy name. 6. Retraining: employees must be retrained. Time-consuming and expensive. 7. Controls: controls ensuring system reliability/integrity can't be deleted.

Business to Business (B2B)

B2C= bus sells to public. B2B= bus sells to bus. C2C= consumers sell to consumers. Many bus engage in B2B, usually in wholesale mkts and on supply side of commercial processes. Common for this to occur electronically via internet. Internet trans can occur btw businesses w/ no preexisting relationship. Also common for B2B trans to occur electronically btw businesses w/ preexisting relationship (direct mkt transactions. Ex: trans via EDI, corp intranets/extranets). B2B makes purch decisions faster, simpler, safer, more reliable, and more cost effective bc cos can use websites to do research/transact bus w/ lots of diff vendors. Adv of B2B: 1. Speed: faster trans mean faster mfg and resale of products to public. Allows businesses to transact w/ each other more quickly than w/ phone, fax, or mail. Speed is called internet time. 2. Timing: trans don't need to be during norm bus hours. Allows for globalization since businesses in diff countries can transact regardless of diff time zones. 3. Personalization: once bus completes online profile w/ new bus partner, can be guided to website areas where it is most interested whenever it returns to site. 4. Security: trans w/ private info encrypted so that if transmission intercepted, undecipherable and useless to interceptor. 5. Reliability: bc trans from one comp to another, trans S/B precisely performed bc generally no opp for human error.

COBIT Information Criteria

Bus rqmts for info. ICE RACE. Know it cold and learn it fast. 1. *I*ntegrity: accuracy, completeness, validity. 2. *C*onfidentiality: protection of sensitive info from unauth disclosure. To ensure this, clearly define confidential material and properly train employees in how to identify/protect it. 3. *E*fficiency: delivery of info through optimal resource use. Low cost w/o compromising effectiveness. 4. *R*eliability: info represents what it purports to. Approp to operating the bus. 5. *A*vailability: current/future info provided as req. Safeguarding of info resources. 6. *C*ompliance: info must comply w/ policies, laws, regs, contractual arrangements (internal/external) by which bus process is governed. 7. *E*ffectiveness: info is relevant to bus process, delivered in timely, current, consistent, useful manner.

Functions Performed on Data

Once data is collected/entered, process it. Functions BIS allows bus to perform on data: -collect -process -store -transform -distribute. After BIS is set up/configured by hardware techs, network admins, and software developers, it's functional. Once functional, end user inputs data. After data collected, stored, and processed, info output can be shared across network w/ other end users.

Domains and Processes of COBIT

COBIT defines IT processes w/in 4 domains that direct delivery of solutions and services to ensure directions are followed. Remember PO AIDS ME (when I'm buying comp equip). 1. *PO*: *P*lan and *O*rganize: Direct. Provides direction to solution/service delivery. 2. *AI*: *A*cquire and *I*mplement: Solution. Provides solutions for IT needs. 3. *DS*: *D*eliver and *S*upport: Service. Provides IT services to users (translating solutions into services received by end users). 4. *ME*: *M*onitor and *E*valuate: Ensure direction followed. Ensure direction in PO steps are followed in the solution and service processes.

Control Objectives for Information and Related Technology (COBIT) and IT Control Objectives

COBIT provides mgmt, auditors, and users w/ a set of measures, indicators, processes, and best practices to maximize the benefit of IT. Intended to assist in development of approp IT governance/mgmt in org. Created by the Information Systems Audit and Control Association (ISACA) and IT Governance Institute (ITGI) in 1992. COBIT 5 released in 2012. Business Objectives: anticipate global rqmts. Assoc w/ bus owners/process managers and IT professionals/auditors. Some objectives: -effective decision support. -efficient trans processing. -compliance w/ reporting rqmts (tax) or info security rqmts and national stds for electronic health care trans (HIPAA). Governance Objectives: COBIT anticipates governance framed by 5 focus areas. 1. Strategic alignment: linkage btw bus and IT plans. Defining, maintaining, and validating IT value proposition, w/ focus on customer satisfaction. 2. Value delivery: provision by IT of promised benefits to org while satisfying customers and optimizing costs. 3. Resource mgmt: optimization of knowledge/infrastructure. 4. Risk mgmt: risk awareness by senior mgmt by understanding risk appetite and risk mgmt responsibilities (event identification, risk assessments, and responses). Risk mgmt begins w/ identification of risks and determines how co will respond to risks. Can avoid, mitigate, share, or ignore risks. 5. Performance measurement: tracking/monitoring strategy implementation, project completion, resource usage, process performance, and service delivery. Need to define milestones/deliverables throughout project so progress can be measured.

Centralized v Decentralized (distributed) Processing

Centralization and decentralization are a matter of degree, never 100% of either. Centralized: maintain all data and perform all data processing at central location. If end-user PCs used only to connect to LAN to allow data entry remotely and editing and processing done by programs on central processors, centralized. Exs include mainframe and large server computing apps. Decentralized: If end-user PCs have app software that does part of data validation, decentralized. Computing power, apps, and work are spread out over lots of locations (via a LAN or WAN). Often use dist processing techniques where each remote comp does some of the processing (data validation), reducing processing burden on central comp. Advantages of centralized processing (disadv of decentralized): -enhanced data security: data better secured once at central location. Only one location must be secured rather than many geographically sep ones. -consistent processing: more consistent. Decentralized may cause inconsistent processing at various locations, even if same software is used, bc of timing/probs in only certain locations. Disadvantages of centralized processing (adv of decentralized): -possible high cost: bc of large no of detailed trans. These costs are falling. -incr need for processing power/data storage -reduction in local accountability -bottlenecks: for input/output at high-traffic times. -delayed response time: hard for central location to quickly respond to info requests from remote locations. -incr vulnerability: since all processing at 1 location, probs at that location could cause delays/probs throughout org.

Electronic Data Interchange (EDI)

Comp-to-comp exchange of bus trans docs (POs, confirmations, invoices, etc.) in structured formats that allow direct processing of the data by the receiving system. Started w/ buyer/seller trans but has expanded to inv mgmt and product distr. Any std bus doc that is exchangeable btw orgs can be exchanged w/ EDI if both orgs have prepared. Compared to traditional paper processing, EDI reduces trans handling costs and speeds trans processing. EDI req all trans be submitted in a std data format. Mapping is the process of determining correspondence btw data elements in an org's terminology and data elements in std EDI terminology. Once completed, translation software can be used to convert trans btw formats. Several diff stds exist, like ANSI X.12 in US, EDIFACT in Europe, and HIPAA for health care. Extensible markup language (XML) is tech developed to transmit data in flexible formats instead of std EDI formats. Tells systems format of data and what kind of info it is. Uses both std and user-defined tags, sim to the data formatting tags in HTML for display of web pages. XML is likely to become the std for automating data exchange btw systems. XML extensions are currently being grafted onto EDI, but XML could replace EDI if XML stds are developed/adopted. This is bc XML tags can be read by a lot of software apps. EDI can be implemented using direct links btw orgs exchanging info, comm intermediaries (service bureaus), value added networks (VANs), or over the internet. VANs operate like email but w/ structured data (co A sends info to VAN, which sends it to co B. Co B can reply via the VAN). Internet based EDI is replacing VAN based EDI bc it's cheaper! Since EDI involves direct processing of data by receiving system, w/ minimal human involvement, controls designed to prevent errors are crucial. Data encryption S/B performed by physically secure hardware bc software encryption may be subj to unauth remote tampering.
Audit trails in EDI system should incl: -activity logs of failed trans. -network and sender/recipient acknowledgments. The greatest risk in an org's use of EDI is unauth access to the org's systems.
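The mapping/translation step described above can be sketched as a field-name conversion; the internal and "standard" element names here are hypothetical, not actual ANSI X12 identifiers:

```python
# Hypothetical map from an org's internal field names to std EDI element names
FIELD_MAP = {
    "po_number": "ORDER_ID",
    "vendor": "SUPPLIER_NAME",
    "qty": "QUANTITY",
}

def translate(internal_record):
    """Translation software: convert a trans from internal format to the std
    EDI format, dropping fields the standard doesn't define."""
    return {FIELD_MAP[k]: v for k, v in internal_record.items() if k in FIELD_MAP}

po = {"po_number": "4711", "vendor": "Acme", "qty": 12, "internal_note": "rush"}
print(translate(po))
# {'ORDER_ID': '4711', 'SUPPLIER_NAME': 'Acme', 'QUANTITY': 12}
```

Real translation software also handles segment ordering, delimiters, and acknowledgments; mapping the field names is just the first step.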

Supply Chain Management Systems (SCM)

Concerned w/ 4 characteristics of every sale: 1. *What*: goods received should match goods ordered. 2. *When*: goods S/B delivered on/before date promised. 3. *Where*: goods S/B delivered to requested location. 4. *How much*: cost of goods S/B as low as possible. Supply chain mgmt is integration of bus processes from original supplier to customer and incl purch, materials handling, production planning/control, logistics/warehousing, inv control, and product dist/delivery. SCM systems can do some/all of these functions. Cos must reengineer their supply chains to incr efficiency, reduce costs, and meet customer needs. SCM software can help parties to coordinate more efficiently. SCM objectives: 1. Achieve flexibility/responsiveness: in meeting the demands of the customers/bus partners. SCM might incorp the functions of (1+): -planning (demand forecasting, product pricing, inv mgmt) -sourcing (procurement, credit, collections) -making (product design, production scheduling, facility mgmt) -delivery (order mgmt and delivery scheduling) Each function has dozens of steps, w/ own specific software for lots of them. SC planning software is utilized to improve flow and efficiency of supply chain and reduce inv. SC execution software automates the steps of the SC. SCM is often called an extended ERP system that goes outside of bus and addresses entire SC. Much of info used by an SCM system resides in ERP systems. Bc system extends beyond org, more complex than ERP.

System Software

Consists of programs that run the comp and support system mgmt ops. Operating system: provides interface btw user and hardware. Defines commands that can be issued and how so (typing, clicking, verbal, etc.). Controls all input/output to main memory and may include certain utility programs that might be used stand-alone or in app software. Windows, IBM ones, UNIX, and Linux. Linux is public domain open source OS for PCs, developed in 1991. Popular alternative to Msft bc it's cheap, outside Msft control and licensing, stable, and not as vulnerable to security probs. Database Management System (DBMS): Important for orgs w/ mainframe and midrange comp systems. Controls development, use, and maintenance of org databases. Database and DBMS sometimes used interchangeably, but they're distinct: the database is the stored data; the DBMS is the software that manages it.

Managing Control Activities

Controls related to use of IT resources. Budgets S/B established for acq of equip/software, op costs, and usage. Actual costs S/B compared to budget and sig discrepancies investigated. IT control procs: 1. Plan of org including approp SOD to reduce opps for anyone to perpetrate and conceal errors/irregularities in ord course of bus. Programmers shouldn't have source code or production data. 2. Procs including design/use of adequate docs and records to ensure proper recording of trans/events. 3. Limits to asset access in accordance w/ mgmt auth. 4. Effective performance mgmt, clear definitions of performance goals and effective metrics to monitor achievement of them. 5. Info processing controls applied to check for proper auth, accuracy, and completeness of indiv trans. 6. Proper design/use of electronic and paper docs/records to ensure accurate and complete recording of relevant trans data. 7. Implementation of security measures/contingency plans. -security measures prevent/detect threats. Data security controls S/B designed to ensure auth is req to access, change, or destroy storage media. -contingency plans detail procs to be implemented when threats are encountered. One goal of this would be to minimize disruption of processing while ensuring integrity of data input and processing. -some active threats can't be prevented w/o making system so secure it's unusable.

Technologies and Security Management Features

Data/procedural controls implemented to ensure data is recorded, errors are corrected during processing, and output is properly distributed. Safeguarding records and files: data can be protected through use of internal/external file labels and file protection rings. All critical app data S/B backed up and stored in a secure off-site location. Backup files: data backups are necessary both for recovery in a disaster situation and for a recovery from processing probs. Copies of master files and records S/B stored in safe places outside of co. Copies of files kept on-site S/B stored in fireproof containers/rooms. -son-father-grandfather concept: most recent file is son, second most recent is father, and preceding is grandfather. Previous file + transactions being processed = new updated master file. Periodic trans files stored sep. If son file destroyed, new file can be created w/ trans file and father file. Always at least 2 backup files that can be used to recreate destroyed file. -backup of systems that can be shut down: pretty simple. Files/databases that have changed since last backup (or all data) can be backed up using S-F-G or sim concept. -backups of systems that can't be shut down: more difficult. Apply trans log (file on trans that had been applied to databases) and reapply those trans to get back to point right before failure. -mirroring: use backup comp to duplicate all processes/trans of primary comp. Can be expensive. Used by banks and other orgs for which downtime is unacceptable. Uninterruptible power supply (UPS) is a device that maintains a continuous supply of electrical power to connected equip. Also called battery backup. Prevents shutdown during outage. Stops data loss and can protect integrity of a backup while it's being performed. When power failure occurs, UPS switches to its own power source instantaneously so no interruption in power. This isn't a generator, so battery will run out.
UPS still critical even w/ a backup generator, so that data won't be corrupted: generator won't provide protection from a momentary power interruption. Program modification controls are controls over changes to programs used in production apps. Include both controls designed to prevent changes by unauth personnel and controls that track program changes so there's a record of what versions of what programs are running in production at any specific pt in time.
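The son-father-grandfather rotation above can be sketched in a few lines; the "files" here are simplified stand-ins (lists of transactions):

```python
from collections import deque

# Keep the 3 most recent master files: [grandfather, father, son]
backups = deque(maxlen=3)

def run_update(master_file, transactions):
    """Previous master file + transactions being processed = new master file."""
    new_master = master_file + transactions
    backups.append(new_master)  # new master becomes the son
    return new_master

master = []
master = run_update(master, ["txn1"])
master = run_update(master, ["txn2"])
master = run_update(master, ["txn3"])

# If the son is destroyed, recreate it from the father + the sep-stored trans file
father = backups[-2]
recreated_son = father + ["txn3"]
assert recreated_son == backups[-1]
```

Note there are always at least 2 older generations on hand, matching the "at least 2 backup files" point in the notes.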

Types of Databases

Database is a structure that can house info abt multiple types of entities and relationships among them. Types of databases: -operational: store detailed data needed for day-to-day ops of org. -analytical: store data/info extracted from op databases. Summarized data mostly used by mgmt. -data warehouses: store data from current/prior yrs, often from both op and analytical databases. Big use for these is data mining, where data (often large amt of diverse data) is processed to identify trends/patterns/relationships. Limited scope data warehouse is a data mart. Data warehouses often use a structure other than a relational database. Goal of data warehouse isn't trans processing, so other structures allowing redundant data more efficient for data mining. Since data is mostly imported from other databases rather than input directly, this redundancy doesn't likely lead to data inconsistencies from input errors. -distributed databases: physically distributed in some manner on diff pieces of local/remote hardware. Depending on circumstances, some data might be replicated while other data might be distinct and only stored in 1 location. -end-user databases: developed by end users at their workstations. Email, internet downloads, docs from word processing stored in these. End users might develop their own apps w/ simple databases.

Role of Information Technology in Business Strategy

Design of info architecture/acq of tech are important aspects of entity's bus strategy. Choices abt tech critical to achieving entity's objs. Tech decisions S/B an input to strategy process to help define innovations and incr rev, rather than an after-the-fact goal achievement tool. Common principles of tech-driven strategy development: -tech is a core input to strategy development (as much as customers/mkts/competitors). -bc of the speed at which tech changes, strategy development must be continual, not revisited every 3-5 yrs. -innovative emerging bus opps must be managed sep/diff from core bus. -tech has power to change long-held bus assumptions. Mgmt/execs must be open to it. -manage tech from 2 perspectives: First, its ability to create innovation in existing bus. Second, the ability of emerging tech to create new mkts/products. -focus S/B on customer priorities as well as internal efficiencies.

Disaster Recovery

Entity's plans for restoring and continuing ops in the event of the destruction of program and data files, as well as processing capability. ST probs/outages aren't disasters. If processing can be quickly reestablished at original processing location, no disaster recovery necessary. If not (prob bc orig site doesn't exist anymore), it is necessary. Major players in DRP are org itself and DR services provider (IBM, SunGard). If app software packages involved, package vendors may be involved. For distributed processing, hardware vendors. Senior mgmt support is needed for effective DRP. Steps in DRP: 1. Assess the risks. 2. Identify mission-critical apps and data. 3. Develop a plan for handling the mission-critical apps. 4. Determine the responsibilities of the personnel involved in disaster recovery. 5. Test the DRP. Depending on org, DRP may be limited to restoring IT processing or may extend to restoring functions in end-user areas (bus continuity). Paper records that might usually be maintained in end-user areas that could be lost in disaster S/B considered in bus continuity. If org doesn't have DR and bus continuity plan and disaster happens, org may be out of bus. Disadv is cost/effort req to establish/maintain a DRP. A split mirror backup method is effective and often used. Can quickly back up large amts of data at a remote location if disaster strikes.

Operational Effectiveness

Evaluating ongoing effectiveness of control policies/procs provides assurance that they're operating as prescribed and achieving intended purpose. Diagnostic control system compares actual to planned performance. Diagnostic controls: designed to achieve efficiency in ops and get the most from resources used. Control effectiveness: principles of control to be applied to systems development and maintenance: -strategic master plan: to align org's info system w/ bus strategies, multiyr strategic master plan S/B developed and updated annually. Should show projects that need to be completed to achieve LT co goals and address hardware, software, personnel, and infrastructure reqmts. -data processing schedule: all data processing tasks S/B organized according to data processing schedule. -steering committee: S/B formed to guide/oversee systems development and acq. -system performance measurements: system must be assessed through these for it to be properly evaluated. Common measures are throughput (output per unit of time), utilization (% of time system is productively used), and response time (how long it takes system to respond).
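The three performance measures above are simple ratios; the sample numbers below are made up for illustration:

```python
# Hypothetical observations for one 8-hour shift
jobs_completed = 1_200
hours_observed = 8
productive_hours = 6.8

throughput = jobs_completed / hours_observed     # output per unit of time
utilization = productive_hours / hours_observed  # % of time productively used
print(f"throughput: {throughput:.0f} jobs/hr, utilization: {utilization:.0%}")
# throughput: 150 jobs/hr, utilization: 85%

# Response time: how long the system takes to respond, per request (seconds)
response_times = [0.8, 1.1, 0.9, 2.0]
avg_response = sum(response_times) / len(response_times)
print(f"avg response time: {avg_response:.2f} s")
# avg response time: 1.20 s
```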

Factors to Consider and Components of B2B

Factors to consider when deciding to engage in e-commerce: -selection of bus model (some may not be proven). -channel conflicts (possibility of stealing bus from existing sales/channels). -legal issues (e-commerce laws still in development). -security (e-commerce and internet usage vulnerable to penetration by outsiders. Security essential). Components of a reasonably simple e-commerce B2B site (selling side): -customer connecting to site through internet -seller's site behind enterprise firewall. -seller's internet commerce center, consisting of order entry system and catalog system w/ product descr and other info on what's for sale and which acts as an interface to customer's browser. -seller's back office systems for inv mgmt, order processing, and fulfillment (shipping/transportation system). -seller's back office accting system. -seller's pmt gateway communicating through internet to validate/auth credit card trans/other pmt methods.

Features and Costs of EDI

Features: 1. allows transmission of e-docs btw comm systems in diff orgs (called trading/bus partners). 2. reduces handling costs and speeds trans processing compared to traditional paper-based processing. To reduce the costs, EDI system must be integrated w/ org's accting info systems. 3. requires all trans be submitted in a std format. Translation software req to convert trans data from internal data format in sending system to EDI format or vice versa. 4. can be implemented using direct links btw trading partners, communication intermediaries, VANs, or over the internet. VANs are privately owned comm networks that provide additional services beyond std data transmission (incl mailbox where EDI trans are left by 1 trading partner until retrieved by other). Costs: 1. Legal costs: assoc w/ modifying/negotiating trading contracts w/ trading partners and w/ comm providers. 2. Hardware costs: cost of req comm equip, improved servers, modems, routers, etc. 3. Costs of translation software: cost of acq/developing and maintaining it to translate data into specific EDI formats. Can be sig. 4. Costs of data transmission: has been decr, especially w/ EDI through internet. 5. Process reengineering and employee training costs for affected apps: EDI can reduce human effort in doc processing. Main adv of automation occurs when EDI processes are heavily integrated into other apps like inv control, shipping/receiving, and production planning. Changes may req bus process reengineering. 6. Costs assoc w/ security, monitoring, and control procs: EDI systems need monitoring/troubleshooting to make sure working ok. Security must be tight to ensure trans not received from/sent to unauth trading partners. Some controls here can be automated.

Data Capture

First step in processing bus trans is to capture data for each trans and enter it into the system. Usually triggered by event/trans. Relevant data must be captured for each. In a manual system, source doc is created (invoice, PO). In computerized, info is directly entered. Data capture techniques: 1. Manual entries: input by indiv. Data entry screen often has same name/layout as paper source doc. 2. Source data automation: captures trans data in machine-readable format at time/place of origin. ATMs, POS systems, barcode scanners. Data must be accurate/complete. Ways to ensure this: 1. Well-designed input screens: request all req data and guide person in putting in right data. Validation rules and clear error messages make sure data meets parameters. Maybe drop-down lists. Input masks. Source data automation captures data automatically so fewer errors. 2. Auto-entry fields: preprinted/prenumbered source docs. Automatic system assignment of sequential nos. These prevent duplication/skipping of doc nos. Ensures all trans recorded and no docs misplaced. Can also do date/time stamps.
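The validation rules and drop-down lists described above can be sketched as simple entry checks; the field names, formats, and state list are hypothetical:

```python
import re

def validate_order(entry):
    """Return a list of error messages for a manual-entry order screen."""
    errors = []
    # Input mask: doc no must match a fixed pattern, e.g. PO-#####
    if not re.fullmatch(r"PO-\d{5}", entry.get("po_number", "")):
        errors.append("po_number: must match PO-#####")
    # Validation rule: quantity must be a positive whole number
    if not isinstance(entry.get("qty"), int) or entry["qty"] <= 0:
        errors.append("qty: must be a positive whole number")
    # Drop-down list: only permitted values accepted
    if entry.get("state") not in {"NY", "NJ", "CT"}:
        errors.append("state: pick from the list")
    return errors

print(validate_order({"po_number": "PO-12345", "qty": 3, "state": "NY"}))  # []
print(validate_order({"po_number": "12345", "qty": 0, "state": "TX"}))     # 3 errors
```

A real input screen would also reject the trans (or force correction) before it ever reaches the master file.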

Data Encryption

Foundation for e-commerce. Use key to scramble a plaintext/readable message into ciphertext/unreadable message. Intended recipient uses another key to decrypt/decipher ciphertext message back to plaintext. The longer the length of the key, the less likely message/trans will be decrypted by wrong person or broken by brute-force attack (where attacker tries every possible key until they find right one). If encrypted content communicated by person/machine using cryptography, sending is entity that encrypts, receiver is entity that decrypts. Btw them is unsecured environment where message travels. When encrypted content is stored, auth users have ability to encrypt/decrypt it to use it for auth purposes. 1. Digital certificates: electronic docs created/digitally signed by trusted party which certify identity of owners of a particular public key. Contains that party's public key. Public key infrastructure (PKI) refers to system/processes used to issue/manage asymmetric keys and digital certificates. Org that issues public/private keys and records public key in digital certificate is a certificate authority. Digital certificates are for e-bus use and are issued by certificate authorities like Comodo and VeriSign, which hash the info on the digital certificate and encrypt that hash w/ its private key. Digital signature then appended to digital certificate, providing means for validating its authenticity. 2. Digital signatures v e-signatures: Digital signatures use asymmetric encryption to create legally-binding electronic docs. Web-based e-signatures are an alternative and are provided by vendors as a software product. Basically a cursive imprint of person's name applied to e-doc. Legally binding, as if the person had "signed" paper copy of doc.
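The hash-then-sign step a CA performs can be sketched w/ the stdlib; the certificate fields are made up, and since the stdlib has no asymmetric crypto, encrypting the hash w/ the CA's private key is represented by a placeholder function:

```python
import hashlib

# Hypothetical info on a digital certificate
cert_info = b"owner=Acme Corp;public_key=ABC123;expires=2025-12-31"

# CA hashes the certificate info...
digest = hashlib.sha256(cert_info).hexdigest()

def fake_sign(h):
    """Stand-in for encrypting the hash w/ the CA's private key (RSA etc.)."""
    return "SIGNED(" + h + ")"

signature = fake_sign(digest)  # appended to the cert as the digital signature

# Validation: recompute the hash and check it against the signed value
assert signature == fake_sign(hashlib.sha256(cert_info).hexdigest())

# Key length v brute force: an n-bit key has 2**n possible values,
# so each extra bit doubles the attacker's work
print(2**40 // 2**8)  # a 40-bit key space is 2**32 times larger than an 8-bit one
```

If even one byte of cert_info changes, the recomputed hash won't match the signature, which is how tampering is detected.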

Roles of Business Information Systems

Four primary roles in bus ops: -process detailed data (trans data)- Trans processing systems (TPS). -provide info used for making daily decisions. Tactical. Decision support systems (DSS). -provide info used for developing bus strategies. Strategic. Exec info systems (EIS). -take orders from customers. Info system S/B able to capture/process detailed trans data and provide higher level aggregated data for mgmt decisions. Integrated system means less redundancy of data entry/storage. Data entered and available to users who need it, so one system/network fills demands for lower-level users needing detailed trans info and higher-level users needing aggregated info. Diff users have diff responsibilities. From a functional perspective, can divide BIS into: -sales and mkting systems -manufacturing/production systems -finance and accting systems -HR systems Can be developed for a part of the bus or can cut across many. Customer relationship mgmt and supply chain mgmt systems may go outside bus to customers and vendors.

Information Technology (IT)

General term encompassing lots of diff computer-related components. One of the most basic/vital IT components of a bus is the set of software called the business info system, which can be divided into trans processing systems, ERP systems, decision support systems (ie bus intelligence systems), and exec info systems. Categories aren't mutually exclusive- bus info system can perform multiple functions. IT components: 1. Hardware: physical comp or peripheral device. PC, mainframe, disk/tape drive, monitor, mouse, printer, scanner, keyboard. 2. Software: systems/programs that process data and turn it into info. Can be for general use by many orgs (microsoft word) or specialized purposes (internal audit program). Can be developed internally or purch as app package from outside vendor. Categories include system software, programming languages, and app software. 3. Network: made up of comm media allowing multiple comps to share data/info. Could be through networking cables, fiber optic cable, microwave, WIFI, satellites. Internet! 4. People: job titles vary widely, but functions standard given particular hardware/software/network configuration. Functions of initial setup, maintenance/support, etc. May outsource functions (ADP). People are weakest link. 5. Data/Information: data is raw facts (quantity/name/amt). Can be production data (live/real. Results from production processing and stored in production systems) or test data (fake. Results from testing and stored in test systems). Production/test data S/B sep stored and accessed. Information is created from data that has been processed and organized. Useful for decision making, data isn't.

Networks

Groups of interconnected comps, terminals, comm channels, comm processors, and comm software. Components discussed in context of a LAN bc LANs tested a lot in past. Local Area Networks (LANs) permit shared resources (software/hardware/data) among comps w/in a limited area. Normally privately owned, meaning they don't use telephone lines or they use private lines leased from telecomm providers. Components of LANs/networks: -node: any device connected to a network. -workstation: node (usually PC) operated by end users. -server: node dedicated to providing services/resources to rest of network. Not usually accessible by indiv users but only through network software. -network interface card (NIC): circuit board installed on a node that allows it to connect w/ and communicate over the network. -transmission media: physical path btw nodes on a network. May be wired (twisted pair which is norm phone wires, coaxial cable which is like TV cable, and fiber optic cable which uses light to transmit signals) or wireless. LAN comm media are normally dedicated lines (used only by the network). Various transmission media have diff transmission capabilities (speed, etc.). -network operating system (NOS): manages comm over network. May be peer-to-peer (all nodes share comm mgmt) or client/server (central machine serves as mediator of network comm). Some common PC NOS are microsoft windows, microsoft NT, and novell netware. -communications devices/modems: provide remote access and provide network w/ ability to comm w/ others. Modems translate digital data into analog format needed to use phone lines (phones analog, comps digital). Gateways allow connection of two dissimilar networks (LAN to internet).
-comm/network protocols: to transmit info from 1 place to another, telecomm network must establish an interface btw sender/receiver, transmit the info, route messages along the various paths the info might travel (long messages divided into pieces and routed from sender to receiver w/ no assurance that the pieces will take the same route), check for transmission errors, and convert messages from one speed/transmission format to another. Various pieces of hardware/software perform these functions, all of which comm by adhering to a set of common rules called comm/network protocol. -gateways/routers: gateway is combo of hardware and software that connects diff types of networks by translating from one set of network protocols to another. Router is used to route packets of data through several interconnected LANs or to a WAN. Bridge is used to connect segments of a LAN which both use same set of network protocols (LANs often divided into segments for better performance or improved manageability). Client/server configs: Most LANs/WANs set up as client/server systems. Workstations are clients. Other processors that provide services to the workstations are servers. Typically several diff servers provide diff types of specialized services.
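The message-routing idea above (long messages divided into pieces that may arrive out of order) can be sketched in a few lines of Python. Illustrative only, not a real protocol stack: actual protocols add headers, checksums, and retransmission, and the message text here is made up.

```python
import random

def packetize(message, size=4):
    # Split the message into fixed-size pieces, tagging each with a
    # sequence number so the receiver can reassemble them even if the
    # pieces take different routes and arrive out of order.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # Sort by sequence number and rejoin (error checking omitted).
    return "".join(data for _, data in sorted(packets))

packets = packetize("PAY VENDOR 4500 INV#221")
random.shuffle(packets)   # simulate pieces arriving in arbitrary order
assert reassemble(packets) == "PAY VENDOR 4500 INV#221"
```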

Roles and Responsibilities of IT Professionals

IT professionals incl admins (database, network, web), librarians, comp operators, and developers (systems and apps). Roles/responsibilities defined individually by each org, and job titles/responsibilities can vary widely depending on org needs and preferences of IT mgmt. Roles: 1. System analyst: 2 types. -for internally developed system: works w/ end users to determine system rqmts, designs overall application system, and determines type of network needed. -for purchased system: integrates app w/ existing internal and purch apps. Provides training to end users. 2. Computer programmer: 2 types. -application programmer/software developer/software engineer: responsible for writing and maintaining app programs. Write updates. New ideas for IT industry try to minimize/facilitate program maintenance. For I/C purposes, app programmers shouldn't be given write/update access to data in production systems or unrestricted/uncontrolled access to app program change mgmt systems. -system programmer: responsible for installing, supporting, monitoring, and maintaining op system. May perform capacity planning functions. In complex computing environments, lots of time can be spent testing/applying upgrades. For I/C purposes, system programmers shouldn't be given write/update access to data in production systems or access to change mgmt systems. 3. Computer operator: responsible for scheduling/running processing jobs. Lots of this can be automated and must be in large computing environments bc of volume of info processed. 4. IT supervisor: manage functions/responsibilities of IT dptmt. 5. File librarian: store/protect programs/tapes from damage and unauth use. Control file libraries. Automated in large computing environments. 6. Data librarian: in large cos has custody of/maintains entity's data and ensures that production data is released only to auth indivs when needed. 7. 
Security admin: responsible for assignment/maintenance of initial passwords (maintenance if end users don't themselves). Responsible for overall op of various security systems and security software in general. 8. System admin: 3 types. -database admin (DBA): responsible for maintaining/supporting database software and performing certain security functions. Perform similar functions for database software as system programmers do for system overall. Diff from data admins! DBA is responsible for actual database software, data admin responsible for definition, planning, and control of data w/in database. -network admin: support comp networks through performance monitoring/troubleshooting. Also called telecommunication analysts and network operators. -web admin: responsible for info on a website. 9. Data input clerk: prepare, verify, and input data to be processed. Function is being distributed to end users. 10. Hardware technician: sets up/configures hardware and troubleshoots resulting hardware probs. 11. End user: Us. Any workers in org who enter data into a system or use info processed by system. Routinely enter much of their own data/trans.

Application Software

Includes diverse group of systems/programs that an org uses to accomplish its objectives. Can be generic (microsoft word) or custom developed. Purchasing software normally means purch license to use the software under certain prescribed terms/conditions, which may be negotiable. Maintenance may/may not be acq along w/ use of software. If app software acq from outside vendor, org acq the app may/may not get access to source code. For large commercial apps, source code may be escrowed w/ escrow agent. Escrow of source code supposedly protects purchaser if outside vendor fails to live up to contractual obligations. Terminology for some types of app software incl groupware (group working software), which allows diff people to work on same docs and coordinate work activities. Useful for less structured work req high knowledge/skill.

Business Process Design

Includes integrated systems, as well as manual and automated interfaces. Categories of BIS: 1. Transaction processing systems (TPS): process/record routine daily trans. Functions usually predefined and highly structured. For high volume, a premium may be placed on speed/efficiency. 2. Management info systems (MIS): reporting. Provides users w/ predefined reports that support effective bus decisions. Reports might provide feedback on daily ops, financial/nonfinancial info to support decision making across functions, and internal/external info. Tactical. 3. Decision support systems (DSS): daily decisions. Extension of MIS that provides interactive tools for decision making. May provide info, facilitate forecasting, or allow modeling of decision aspects. Also called expert system. 4. Executive information systems (EIS): C suite. Provide senior execs w/ immediate/easy access to internal/external info to assist in strategic decision making. Consolidates info internal/external to bus and reports in format/detail approp for execs.

Transaction Cycles

Indiv trans processed/controlled through AIS customized to specific bus rqmts. Sim economic events grouped for repetitive processing into trans cycles. Some industries have customized ones. Figure out which for flowchart Qs. 1. Rev cycle: trans from sales of goods/services producing cash/AR. Often includes customer orders/credit verification, AR, and cash receipts. 2. Exp cycle: trans from purch of goods/services using cash or producing debt. Often includes purch, inv control (WIP), AP, and cash disbursements. 3. Production cycle: trans from conversion of resources (RM or time) into products/services. Often includes product design/production planning, product mfg, and inv control (FG). 4. HR/Payroll cycle: trans from employee admin phases (hiring, determining comp, pmt of employees, benefits admin, termination). Often includes HR (hire, evaluate, benefits), time/attendance, payroll disbursements, and payroll tax reporting. 5. Financing cycle: trans from equity/debt financing incl issuance of stock/debt and pmt of divs/debt service pmts.

Reporting

Info output from BIS usually in form of doc, report, or query response. Reports are for both internal/external users. Types of reports: 1. Periodic scheduled reports: traditional reports displaying info in predefined format and made available regularly to end users. May be auto printed or made available w/ report-viewing software and printed if hard copy needed. Monthly F/S. 2. Exception reports: produced when a specific condition/exception occurs. Specific criteria established and any trans/entity that meet them reported on exception report. All customers in excess of credit limit. 3. Demand reports: Pull/response reports. Available on demand. End user can log into workstation and get response report w/o waiting for scheduled report creation. May be auto printed or available online to print if needed. 4. Ad Hoc reports: One of the most attractive features of MIS is to be able to print these. These are reports that don't currently exist but can be created on demand w/o a software developer/programmer. Also called report writer. A query is a set of criteria that the end user can send to the system to extract all trans/other info that meet the criteria. Some are very structured and don't need much knowledge to use effectively. Others need more knowledge and programming skills to use. 5. Push reports: info is pushed and sent to comp screen. Filed as email attachments or sent through web-based report channel. If a report window displays current reports every time users logs in to report network, push reports. Can be specific internal reports of org, industry reports, or info downloaded/aggregated from internet. End user creates template/profile specifying desired info. Program then searches for content that meets those rqmts and sends it to the user's desktop. 6. Dashboard reports: used by orgs to present summary info needed for mgmt action. If the info indicates activities aren't w/in mgmt risk tolerances/plans, mgmt can take corrective action. 
These are visual quick references (charts, graphs, etc). Excel can create these, some accting software has a type w/ cash position, exp breakdown, and op data on home page. 7. Extensible Business Reporting Language (XBRL): derived from Extensible Markup Language (XML). Tags define data (meta-data). Indicate taxonomy used, currency, time pd, definition of element. XBRL is open, royalty-free, internet-based info standard for bus reporting of financial data. Tags define data so users can create macros/other programs that can automate analysis of it. Could have macro to download XBRL F/S, identify CA/CL, and calc current ratio. Since tags are standardized, can be confident that current ratios btw cos are comparable.
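The XBRL macro idea above (tags define data, so a program can locate CA/CL and calc the current ratio automatically) can be sketched in Python. The tag names here are simplified/hypothetical -- a real XBRL filing references a standard taxonomy w/ namespaces, contexts, and units.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified tagged filing (not real XBRL taxonomy tags).
filing = """
<report>
  <CurrentAssets>150000</CurrentAssets>
  <CurrentLiabilities>60000</CurrentLiabilities>
</report>
"""

root = ET.fromstring(filing)
ca = float(root.findtext("CurrentAssets"))       # tag identifies the data
cl = float(root.findtext("CurrentLiabilities"))
current_ratio = ca / cl                          # automated ratio analysis
```

Because the tags are standardized, the same program could run against any co's filing and the resulting ratios would be comparable.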

Programming Languages

Java, COBOL, Pascal, Basic, Visual Basic, C, C++. Allow programmers to write programs in source code. Source code translated/compiled into object code (machine language) which consists of binary digits that make up instructions that the processor recogs and interprets. Programs, if not compiled, may be interpreted. Each line of program code converted into executable code immediately before executed. Programs that are interpreted execute more slowly than the same ones that are compiled bc the compiler is normally able to optimize the compiled code for execution speed. Fourth generation languages enable end users to develop apps w/ little/no tech assistance. Tend to be less procedural than conventional programming languages, so less specification of sequence of steps program must follow. Traditional programming has treated actual software instr and data being processed by program as diff things. Object-oriented programming combines data and specific procs that operate on that data into one thing called an object. Intended to produce reusable code. Java and C++ are obj-oriented. Debugging: process of finding errors in comp programs and fixing them. Commercial products available to help w/ this.
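A small Python sketch of the object-oriented idea above -- data and the specific procedures that operate on it combined into one object. The Account class and amounts are made up for illustration.

```python
class Account:
    # The data (balance) and the procedures that operate on it
    # (post_debit/post_credit) live together in one object.
    def __init__(self, name, balance=0.0):
        self.name = name
        self.balance = balance

    def post_debit(self, amount):
        self.balance += amount

    def post_credit(self, amount):
        self.balance -= amount

cash = Account("Cash", 1000.0)
cash.post_debit(250.0)
cash.post_credit(100.0)
# cash.balance is now 1150.0
```

The class can be reused anywhere an account is needed, which is the reusable-code goal noted above.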

Access Controls

Limit access to program documentation, data files, programs, and hardware to those who require it in performance of their job. Include multilevel security, user ID (usernames), user authorization (PWs), limited access rooms, callbacks on dial-up systems, and the use of file-level access attributes and firewalls. 1. Physical access: physical access to comp rooms S/B limited to comp operators and IT dptmt personnel. Can accomplish through locked doors and ID cards/keys for entry. Manual key locks on equip as well. ID cards/keys can be stolen. 2. Electronic access: unauth access to data/app programs is a big issue. Bc of many comp-based fraud schemes, more attention given to data access. Data access controls: -user ID cards: these + reg changed PWs are common. App systems also usually have vendor-set master PWs, which S/B changed when system is installed. Programmers sometimes make backdoors so that they can access program/system and bypass security mechanisms for easy troubleshooting, etc. S/B eliminated. Other security steps are: (1) disconnect hardware device and deactivate user ID when small no of consecutive failed attempts to access system occurs. (2) req all hardware devices be logged off when not in use or auto log them off when they're inactive for a time pd. (3) utilize PW scanning programs looking for weak/easily guessed PWs. (4) req dual authentication. When log in w/ right PW, send text to phone w/ code that must be entered to access system. Stops people who got username and PW but aren't actual user. -file-level access attributes: control privileges a user has to a file. Read-only means data can be read but not changed. Write means data can be read or changed. Execute grants ability to execute program. Username used to access file determines access granted. -assignment/maintenance of security levels: restrict functions and program accessibility.
-callbacks on dial-up systems: for systems allowing users to access files on remote terminals, system security might req system to auto look up phone no of auth user and call that user back before access allowed. Initial caller would enter user ID and PW, system would call back auth caller at phone no from which they were auth to call. Less common now bc dial-up rarely used. -file attributes: set to restrict writing, reading, or directory privileges for a file. Basic security mechanisms. Same for external/internal labels on magnetic tape volumes and file protection rings (physical rings inserted into magnetic tapes before anything can be written on them). -firewalls: see next card.
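Security step (1) above -- deactivating a user ID after a small no of consecutive failed access attempts -- can be sketched as follows. The threshold and PW are hypothetical; a real system would also log and alert.

```python
MAX_ATTEMPTS = 3   # illustrative lockout threshold

class LoginGuard:
    """Deactivates a user ID after consecutive failed attempts."""
    def __init__(self, password):
        self._password = password
        self._failures = 0
        self.locked = False

    def attempt(self, guess):
        if self.locked:
            return False            # ID deactivated; deny regardless
        if guess == self._password:
            self._failures = 0      # success resets the counter
            return True
        self._failures += 1
        if self._failures >= MAX_ATTEMPTS:
            self.locked = True      # deactivate the user ID
        return False

guard = LoginGuard("correct-horse")
for bad in ("a", "b", "c"):
    guard.attempt(bad)
assert guard.locked
assert guard.attempt("correct-horse") is False  # locked out even w/ right PW
```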

Policies

Most crucial element in corp info security infrastructure. Must be considered long before security tech acq/deployed. Entity's info security policy is a doc that states how org plans to protect tangible/intangible info/assets. Include: -mgmt instructions indicating course of action, guiding principle, or approp procedure. -high-level stmts providing guidance to workers who must make present/future decisions. -generalized rqmts that must be written/comm to certain groups of people inside/outside org. Goal of good info security policy is to req people to protect info, which in turn protects org, employees, and customers. Security policy should seek to secure info in 3 states: stored info, processed info, and transmitted info. Info generally resides in IT systems, on paper, or in the human brain.

OLRT Processing Methodology

Master files updated as trans are entered. Req RASD. Also called online processing. Online data capture doesn't req RT processing. OLRT processing is fast- no delay, immediate processing method where each trans goes through processing steps (data entry/validation and master file update) before next is processed. Files always current and error detection immediate. OLRT is used whenever it's critical to have current info or indiv accts need to be accessed in random order. OLRT systems often req the use of a comp network to permit data entered in many locations to update common set of master files. POS systems use scanners to capture data encoded on bar codes and transmit it over LAN or other network to central database. POS system generally connected to electronic cash register that feeds data to central database. Online analytical processing (OLAP) allows end users to retrieve data from a system and perform analysis w/ statistical/graphical tools. Infrared and barcode scanners now handheld and mobile so inv and POS data collected RT. Flatbed scanners w/ doc feeders allow data to be entered into databases more quickly/efficiently than retyping. Optical character recognition (OCR) and handwriting recognition software can convert scanned docs into searchable text. Programs can also now share data by exporting it from one and importing it into another that can interpret it.
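A minimal Python sketch of OLRT processing -- each trans is validated and the master file updated before the next trans is accepted, so files are always current and errors are caught immediately. SKUs and quantities are made up.

```python
inventory = {"SKU-100": 40, "SKU-200": 12}   # simplified master file

def process_sale(sku, qty):
    # OLRT: validate, then update the master file immediately --
    # the next trans sees the current balance.
    if inventory.get(sku, 0) < qty:
        raise ValueError("insufficient stock")  # immediate error detection
    inventory[sku] -= qty

process_sale("SKU-100", 5)
# inventory["SKU-100"] is now 35
```

Contrast w/ batch processing, where the update would wait until the end of the cycle.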

Types of Off-Site Locations

Matters how fast! 1. Cold Site: 1-3 days. Off-site location that has electrical connections and physical rqmts for data processing, but no equip. Needs 1-3 days to be operational bc equip has to be acq. Orgs w/ this approach usually utilize generic hardware that can be quickly/readily obtained from hardware vendors. Cheapest. 2. Hot Site: few hours (4 or less). Off-site location fully equipped to take over co's data processing. Backup copies of essential data files/programs may also be maintained at location or at nearby data storage facility. If disaster, org's personnel need to be shipped to DR facility to load backup data onto standby equip. Most expensive. -most difficult aspect of recovery is often telecomm network. -DR service providers usually have lots of floor space and equip, but not enough if all customers (or just a lot) have a disaster at same time. Amt needed determined on probabilistic basis, so to DR services provider, geographic/industry diversification of customers important. -effective recovery, especially if rapid, often comes from having knowledgeable personnel. 3. Warm site: 1/2 day to 1 day. Compromise. Facility already stocked w/ hardware that it takes to create reasonable facsimile of primary data center. In order to restore org's service, latest backups must be retrieved and delivered to backup site. Then, bare-metal restoration of underlying OS and network must be completed before recovery work can be done. Adv of warm site is restoration can be accomplished in reasonable time. Disadv is continued cost assoc w/ warm site bc contract must be maintained w/ facility to keep it up-to-date.

Data Processing Cycle Step 2: Data Storage

Methods/media to keep data available for retrieval. Methods are: 1. Journals/ledgers: data first entered into journals (indiv trans) and summarized (acct groups) into ledgers. Audit trails allow summary ledger data to be traced to journals, trans, and source docs. 2. Coding: makes data more accessible and useful. Types: -sequence codes: ensure all trans/docs accted for. No duplicates/gaps in # sequence. Check nos, invoice nos, trans nos. -block codes: use blocks of nos to group sim items. Chart of accts (assets 100-199, etc.). Makes finding relevant accts easier and minimizes data entry errors. -group codes: more info than block codes. W/in acct/item no, diff no groups have diff meanings. FASB codification. 3. Chart of accounts: form of coding. summarizes accting data by ledger classification (A/L, etc.) for financial analysis/presentation. Allows customization of data classification to best meet info rqmts of bus. May allow reports to be created by division/segment/geographic area. Important to consider reporting when designing this bc desired reporting segments need to be identified in coding scheme or might be hard to create reports. Computer storage of data should follow a logical sequence. Definitions: -entity: subj of stored info (employee/customer). -attributes: specific items of int for each entity (pay rate, credit rating, etc.). -field: contains single attribute of the entity. First name, last name, city, etc. Contains single piece of data for efficient searching/sorting. Columns. -record: all attributes abt a single instance of an entity. Row. -data value: contents of fields. Cells. -file: records grouped into these. -master file: sim ledger. Stores cumulative info and relatively perm info (customers, vendors, inv). -trans file: sim journal. Stores indiv trans. -database: interrelated/coordinated files.
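Block codes can be sketched as a simple range lookup. The ranges mirror the chart-of-accounts example above (assets 100-199, etc.); the specific account nos are made up.

```python
# Block codes: ranges of account numbers identify ledger classification.
BLOCKS = [
    (100, 199, "Assets"),
    (200, 299, "Liabilities"),
    (300, 399, "Equity"),
    (400, 499, "Revenue"),
    (500, 599, "Expenses"),
]

def classify(account_no):
    # Find the block whose range contains the account number.
    for low, high, name in BLOCKS:
        if low <= account_no <= high:
            return name
    return "Unassigned"

assert classify(105) == "Assets"        # e.g., 105 = Cash
assert classify(201) == "Liabilities"   # e.g., 201 = AP
```

Grouping sim accts into blocks like this is what makes finding relevant accts easier and reduces data entry errors.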

Accounting Information Systems (AIS)

Most important bus info system to acctant. Type of MIS (partly TPS partly knowledge system). May be sep systems (modules) for diff accting functions (AR, AP, etc.) or one integrated system doing all functions and culminating in GL and reports. Well designed AIS creates audit trail for accting trans, so can trace trans from source doc to ledger and vouch ledger to source doc. Objectives of AIS: -record valid trans -properly classify trans -record trans at proper value -record trans in proper accting pd (cutoff) -properly present trans/related info in F/S Sequence of events in an AIS: 1. trans data from source doc entered by end user, entered online by customer, or obtained from source data automation. 2. orig paper docs (if any) filed. 3. trans recorded in journal. 4. trans posted to GL and subsidiary ledgers. 5. TBs prepped. 6. adjustmts, accruals, corrections entered. 7. F/S and reports generated.
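Steps 3-4 in the sequence above (trans recorded in journal, then posted to GL) can be sketched in Python. The journal entries are made up; the final check mirrors a TB: total debits = total credits.

```python
from collections import defaultdict

# Journal holds indiv trans; ledger accumulates balances by account.
journal = [
    {"account": "Cash",  "debit": 500, "credit": 0},
    {"account": "Sales", "debit": 0,   "credit": 500},
]

ledger = defaultdict(float)
for entry in journal:    # posting step: journal -> general ledger
    ledger[entry["account"]] += entry["debit"] - entry["credit"]

# TB-style check: net of all accounts is zero (debits = credits).
assert sum(ledger.values()) == 0
```

Keeping the journal intact after posting is what preserves the audit trail from ledger back to trans.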

Risk Assessment and Control Activities

Risk= possibility of harm/loss. Threat= eventuality that reps a danger to an asset or a capability linked to hostile intent. Vulnerability= characteristic of a design, implementation, or op that renders system susceptible to a threat. Safeguards/controls= policies/procs that reduce vulnerabilities when effectively applied. Before risks can be managed, they must be assessed. Risk assessment involves identifying threats, evaluating the probability that they'll occur, evaluating the exposure in terms of potential loss from each threat, identifying controls that could guard against threats, evaluating the costs/benefits of implementing controls, and implementing controls that are cost effective. Controls are always evaled on cost/benefit basis. Access controls, procedural controls, and disaster recovery are important risk mgmt tools.
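The cost/benefit evaluation above reduces to simple expected-value arithmetic: a control is cost-effective when its cost is less than the reduction in expected loss it produces. All figures below are hypothetical.

```python
# Hypothetical threat: 5% annual probability, $200,000 exposure.
# Proposed control: costs $6,000/yr, cuts probability to 1%.
expected_loss_before = 0.05 * 200_000          # 10,000
expected_loss_after  = 0.01 * 200_000          #  2,000
benefit = expected_loss_before - expected_loss_after   # 8,000
control_cost = 6_000

# Implement only if cost-effective.
assert benefit > control_cost   # 8,000 > 6,000 -> implement the control
```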

Wireless Networks

Security risks: many orgs also provide wireless access to their info systems. It's convenient/easy. Ease of access provides a venue for attack and extends perimeter that must be protected. Ex: many cos have experienced security incidents where intruders used a laptop equipped w/ a wireless network interface controller (NIC) to access the corp network while sitting in a car outside the building. Wireless signals can be picked up from miles away. Industry stds provide security approaches based on character of wireless connections. -Wi-Fi: set of stds for wireless LANs. Equip using this std can experience interference from microwaves, cordless phones, and other equip using same frequency. Transmission is radio waves through the air. Wi-Fi is a wireless form of Ethernet. -Wi-Fi Protected Access: industry std specifying security mechanisms for Wi-Fi. Supersedes previous WEP (wired equivalent privacy) std, which was optional so some orgs were unprotected. Used a 40-bit encryption key and the same one was shared by all users. Wi-Fi protected access std provides for a longer encryption key that is changed periodically. -Bluetooth: popular name for networking std for small personal area networks. Can be used to connect up to 8 devices w/in roughly a 10-meter radius (range depends on transmission power) using low-power radio-based comm. Acronym for personal area networks is PAN. -Wireless networking devices: can operate in 2 modes, infrastructure mode and ad hoc mode. Infrastructure mode anticipates network devices comm through an access point. Ad hoc mode anticipates that networking devices are physically close enough to comm w/o access point. -Wireless Application Protocol (WAP): protocol enabling cellphones and other wireless devices to access web-based info/services. -4G: fourth-gen cell networks. 1G networks were in the early 80s and could only be used for voice comm. 2G were in early 90s and were digital w/ better voice quality and global roaming. Could be used for voice/data comm.
3G utilizes packet switching tech for higher speeds and can be used for broadband digital services. 4G is faster than 3G. -Access log: file w/ info about each access to a file/website. Provide some security but only if logs periodically reviewed and unusual activity investigated. -Access point: device that connects wireless comm devices together to form a wireless network. Often called wireless access point (WAP, not to be confused w/ wireless app protocol). Access point normally connects to a wired network but could be as simple as a smart phone serving as a hot spot. W/ the capacity/speed of 4G, users can use the smartphone to connect several wireless devices to single hot spot. Several WAPs can link to form a larger network allowing roaming. WAPs have IP addresses for config/mgmt of network.

Segregation of Duties within IT

Since many trans in IT environment are performed by app software, SOD normally revolves around granting/restricting access to production programs/data. 1. System analysts (system hardware designers) v comp programmers (software designers): system analysts design info system to meet user needs, comp programmers use design to create info system by writing comp programs. Analysts are often in charge of hardware, programmers in charge of app software. If same person in charge of hardware/software, could bypass security systems w/o anyone knowing and steal org info/assets. 2. Comp operators v comp programmers: must be segregated bc person doing both could make unauth and undetected program changes. 3. Security admins v comp operators and comp programmers: security admins responsible for restricting access to systems/apps/databases to approp personnel. If security admin were also a programmer/operator of system, could give himself/herself/someone else access to areas they aren't auth to enter. Would allow them to steal org info/assets.

Firewalls

Systems of user ID and authentication, implemented in hardware and/or software, that prevent unauth users from accessing network resources. Gatekeeper. Isolates private network from public network. Can also be network node used to improve network traffic and set up a boundary that prevents traffic from one network segment from crossing over to another. Single co may have multiple firewalls. Access rules for firewalls must be established/maintained. Firewalls can deter but not completely prevent intrusion from outsiders. They do not prevent/protect against viruses. Traditionally, they have been network firewalls that protected the network as a whole. An application firewall is designed to protect specific app services from attack. Firewall methodologies can be subdivided into several diff categories, and used individually or combined. -packet filtering: examines packets of data as they pass through firewall according to rules that have been established for the source of the data, its destination, and the network ports it was sent from. Simplest firewall config, but can be circumvented by intruder who forges acceptable address (IP spoofing). -circuit level gateways: allow data into a network only when comps inside network request it. -application level gateways: proxies. Examine data coming into gateway in a more sophisticated way. More secure, but can be slow.
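Packet filtering can be sketched as a rule-list lookup w/ default deny. The addresses, ports, and rule format below are made up for illustration; real firewalls match on many more header fields.

```python
# Each rule matches on source-address prefix and destination port.
RULES = [
    {"src_prefix": "10.0.", "dst_port": 443, "action": "allow"},  # HTTPS
    {"src_prefix": "10.0.", "dst_port": 80,  "action": "allow"},  # HTTP
]

def filter_packet(src_ip, dst_port):
    for rule in RULES:
        if src_ip.startswith(rule["src_prefix"]) and dst_port == rule["dst_port"]:
            return rule["action"]
    return "deny"   # default deny: anything not explicitly allowed is dropped

assert filter_packet("10.0.3.7", 443) == "allow"
assert filter_packet("192.168.1.5", 443) == "deny"   # outside address blocked
# Note: an intruder forging a 10.0.x.x source address (IP spoofing)
# would slip past this check -- the weakness noted above.
```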

Effects of Internet Evolution on Business Operations and Organizational Cultures

Tech advances have reshaped the relationship btw IT and bus ops. 1. Web 2.0: Emerging tech has impacted e-commerce. Internet used to just be doc repository, now users interact w/ websites. -led to wiki, collaborative website where users can browse content as well as add/modify it. Wikipedia. Facebook and Blackboard. -more webpages w/ dynamic content, which links to databases like price lists/product lists. Can be dynamically embedded on web pages w/ XML, w/ data in database sep from web page. Dynamic content is any that changes frequently. 2. Mash-ups: web pages that are collages of other pages and other info. Google Maps. Allows user to view various sources of info. 3. Web Stores: -stand-alone: not integrated w/ larger accting systems. Typically hosted w/ shopping cart software that manages product catalog, user registrations, orders, email confirmations, etc. Financial reports like order summaries are generated as needed by the software and are then imported into general accting software. -integrated: integrate all major accting functions and web store into single software system. Process web order then automatically update cash/rev accts, handle inv reordering, etc. Treat web store sales like retail store sales. 4. Cloud computing: virtual servers available over internet. Any subscription-based or pay-per-use service extending entity's existing IT capabilities on RT basis over internet. Public cloud sells services to anyone on internet. Private cloud is a private network/data center that provides services to limited no of customers. Cloud computing offers adv of professional mgmt of hardware/software. Cloud providers usually have sophisticated backup procs and high security for customer data. Categories of cloud computing services: (1) Infrastructure-as-a-Service (IaaS): hardware-as-a-service (HaaS). Outsources storage, hardware, services, networking components to customers, usually on a per-use basis.
(2) Platform-as-a-Service (PaaS): allows customers to rent virtual servers and related services to develop/test new software apps. (3) Software-as-a-Service (SaaS): method of software distribution where apps are hosted by vendor/service provider and made available to customers over internet. Same as ASP.

The Role of Technology in Information and Communications

Tech plays important role in enabling flow of info in an org, including info relevant to ERM across strategy setting and whole org. Selection of tech to support ERM for an org usually reflects: -entity's approach to ERM and degree of sophistication -types of events affecting entity -entity's overall IT architecture -degree of centralization of supporting tech ERM has several key components that enable an org to identify, assess, and respond to risk. Monitoring risk through control activities and info systems is critical to achievement of objs. In some orgs, info is managed sep by unit/function, while others have integrated systems. As enterprise resource planning (ERP) systems have matured, more cos are moving towards integrated systems that can share info throughout the org. Some orgs have enhanced their tech architectures to allow greater connectivity/usability of data, w/ incr use of the internet. Web services-based info strategies allow RT info capture/maintenance/dist across units and functions. Enhances info mgmt. Event identification is important for ERM. Events could be adverse (risks) or positive (opps). Selection of IT systems/procs is driven by strategies that capitalize on opps and mitigate risks. Various methods of organizing tech resources may be used to accomplish org obj. This is tech architecture. -some cos have customized systems for handling info systems rqmts. Often char by data warehouses to support entity's mgmt. -open architectures utilize tech like XBRL, XML, and web services to facilitate data aggregation, transfer, and connectivity among multiple systems.

Participants in Business Process Design

These people form the project team. 1. Mgmt: one effective way to generate systems development support is to send a clear signal from top mgmt that user involvement is important. Mgmt's most important roles are providing support and encouragement for projects and aligning info systems w/ corp strategies. Mgmt should also ensure team members are given enough time to support/work on the project. 2. Accountants: play 3 roles. (1) as AIS users, determine their info needs and system rqmts and communicate those to system developers. (2) as members of the proj development team or info systems steering committee, help manage system development. (3) as accountants, take an active role in designing system controls and periodically monitoring/testing the system to verify controls are implemented and functioning properly (internal audit). 3. Information systems steering committee: exec level; also called proj steering committee. Plans/oversees the info systems function and addresses complexities caused by functional/divisional boundaries. Usually includes high-level mgmt like the controller and systems/user dptmt mgmt. Functions: -setting governing policies for the AIS. -ensuring top mgmt participation, guidance, and control. -facilitating coordination and integration of IS activities to incr goal congruence and reduce conflict. 4. Project development team: responsible for successful design/implementation of the bus system. Works to ensure technical implementation and user acceptance. Tasks: -monitoring the project to ensure timely/cost-effective completion. -managing the human element (resistance to change). -frequently communicating w/ users and holding reg meetings to consider ideas and discuss progress so there are no surprises at project completion. -risk mgmt and escalating issues that can't be resolved w/in the team. 5. External parties: major customers or suppliers; also auditors and governmental entities.

Data Processing Cycle Step 3: Data Processing

Trans are processed to keep info current. Functions are what is done to the database. Include: 1. addition: add new records (new employee or inv item). 2. updating: revise the master file (change cumulative YTD payroll, change employee status/pay rate). Changes to fields in existing records. 3. deletion: remove records (delete terminated employees or inv items no longer carried). Must be careful when deleting records bc there may be history assoc w/ the items that would be deleted alongside the record. Methods are how this is done to the database. 2 types: 1. Batch processing: trans are recorded as they occur, but master files are only updated periodically (e.g., daily). Data entry may be online, but processing of the trans is done in batches. 2. Online Real-Time Processing (OLRT): trans are recorded and master files immediately updated in RT.
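The three file-maintenance functions above can be sketched on a simple dict-based master file. The record layout, the history table, and the "refuse to delete if history exists" rule are illustrative assumptions:

```python
# Sketch of the three file-maintenance functions (add/update/delete)
# on an employee master file keyed by employee id. Illustrative only.

master = {}                           # employee master file: id -> record
history = {101: ["2023 payroll"]}     # prior activity tied to records

def add_record(emp_id, record):
    master[emp_id] = record                      # addition: new record

def update_record(emp_id, **changes):
    master[emp_id].update(changes)               # updating: revise fields

def delete_record(emp_id):
    # deletion: refuse if history is associated with the record,
    # since deleting would orphan that history.
    if history.get(emp_id):
        raise ValueError("record has associated history; archive instead")
    del master[emp_id]

add_record(101, {"name": "A. Smith", "ytd_pay": 0.0})
update_record(101, ytd_pay=5200.0)    # revise cumulative YTD payroll
print(master[101]["ytd_pay"])         # 5200.0
try:
    delete_record(101)                # blocked: payroll history exists
except ValueError as e:
    print("blocked:", e)
```

In batch mode these same functions would run once per period against the sorted transaction file; in OLRT they run the moment each trans is entered.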

Data Processing Cycle Step 1: Data Input

Trans processing is divided into 4 functional areas, the steps in the data processing cycle; data input is the first. Trans must be captured/gathered and entered into the system. Issues include ensuring all trans of interest are accted for, all trans are completely recorded in the correct accts, and all people originating trans are identified. Input verification is tracing data to approp supporting evidence, contributing to validation of the accuracy/authorization of a trans. Done through: -source docs: POs, customer orders, time sheets. Manual or computer generated. -turnaround docs: preprint data in machine-readable form. Doc sent to customer w/ invoice/stmt; when the customer remits pmt, the turnaround doc (remittance advice) is included. Helps ensure the correct acct is credited w/ the pmt. Features of desirable data input procs: -prenumbered source docs (verify completeness of input). -source docs efficiently designed to capture relevant info. -data input verified before the system accepts it (reasonableness of hrs worked, availability of inv to ship, etc.).
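The input-verification idea above (check a trans before the system accepts it) can be sketched as a simple edit routine. The field names, the 80-hour reasonableness limit, and the doc-number check are illustrative assumptions:

```python
# Sketch of data input edit checks run before a time-sheet trans is
# accepted. Field names and the 80-hour limit are illustrative.

def validate_time_sheet(entry):
    errors = []
    if not entry.get("doc_no"):                  # prenumbered source doc present?
        errors.append("missing document number")
    hours = entry.get("hours", 0)
    if not (0 < hours <= 80):                    # reasonableness of hrs worked
        errors.append(f"unreasonable hours: {hours}")
    return errors

print(validate_time_sheet({"doc_no": "TS-0042", "hours": 40}))   # []
print(validate_time_sheet({"doc_no": "", "hours": 200}))
# ['missing document number', 'unreasonable hours: 200']
```

A trans that returns a non-empty error list would be rejected and corrected before entering the transaction file.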

Batch Processing Methodology

Two methods exist for performing file maintenance on master files and databases in a computerized environment: batch and OLRT. Batch processing: input docs/trans are collected and grouped by type of trans (batches). Batches are processed periodically (daily/weekly/monthly, etc.). Can use either sequential storage devices (magnetic tape) or random access storage devices (disks, hard drives). RASD allow records to be accessed directly in any order; sequential devices must be read in order. Input may be done online, but processing of trans/updating of master files is done in batches. There's *always* a time delay btw initiation of a trans and the time it's processed w/ this method, so accting and master files aren't always current and error detection could be delayed. 2 steps for batch processing: 1. Create transaction file: the batch file. Created by entering the necessary data, editing it for completeness/accuracy, and making corrections (the latter two are the edit process or data validation). 2. Update master file: sort the trans file in the same order as the master file and update the relevant records in the master file. If master files are stored on RASD, sorting is unnecessary, though sorted input still results in more efficient processing. A batch total for the trans file is manually calced w/ batch processing and then compared (manually or auto) to the comp-generated batch control total. A diff indicates an error in accuracy/completeness. -batch total: total of a dollar field. -hash total: total of another field (like customer nos). Meaningless other than to check that documents weren't switched, etc. Document (record) counts are similar. Batch processing is common in traditional systems (payroll/GL) where data doesn't need to be constantly current. System-to-system transfers of data, or extracts and transfers of data to update sep data warehouses, could be done in batch.
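The batch/hash/record-count control totals above can be shown concretely. The sales batch and its manually calced totals below are made-up illustrative figures:

```python
# Sketch of batch control totals for a batch of sales transactions.
# Batch total sums a dollar field; hash total sums a non-dollar field
# (customer numbers) and is meaningless except as a control.

batch = [
    {"customer_no": 1001, "amount": 250.00},
    {"customer_no": 1002, "amount": 100.00},
    {"customer_no": 1003, "amount": 75.50},
]

manual_batch_total = 425.50     # calced by hand before data entry
manual_hash_total = 3006        # 1001 + 1002 + 1003
manual_record_count = 3         # document count

# Computer-generated control totals after the batch is entered:
batch_total = sum(t["amount"] for t in batch)
hash_total = sum(t["customer_no"] for t in batch)
record_count = len(batch)

# Any difference indicates an error in accuracy or completeness.
assert batch_total == manual_batch_total
assert hash_total == manual_hash_total
assert record_count == manual_record_count
print("batch accepted")
```

If, say, a record were keyed twice, the record count and batch total would both disagree with the manual totals and the batch would be rejected for correction.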

URL

Uniform resource locator: technical name for a web address. Consistently directs the user to a specific location on the web. Components: -transfer protocol: http:// or ftp:// (file transfer protocol). -server: www. -domain name: becker is the subdomain name; becker.com is the full domain name. -top-level domain: .com, .net, .edu (generic top-level domains). -country: .us, .de, .fr, .it (country code top-level domains).
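The components above can be pulled apart with the Python standard library's `urllib.parse`. The example address is illustrative:

```python
# Decomposing a URL into the components described above using the
# standard library. The example web address is illustrative.
from urllib.parse import urlparse

url = "http://www.becker.com/cpa/bec"
parts = urlparse(url)
print(parts.scheme)                    # http  (transfer protocol)
print(parts.hostname)                  # www.becker.com (server + domain)
print(parts.hostname.split(".")[-1])   # com   (top-level domain)
print(parts.path)                      # /cpa/bec (location on the server)
```

An `https://` scheme would indicate the secure (SSL) version of HTTP described earlier.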

User Access

User accts are a first target of hackers, so care must be used when designing procs for creating accts and granting access to info. 1. Initial PWs and auth for system access: HR should generate the request for a user acct and system access rights for a new employee. The info security officer may also need to approve the acct depending on the level of access being granted. 2. Changes in position: require coordination btw HR and IT. It's important to have procs to address changes in jobs/roles and remove access no longer needed. There must be a mechanism to disable accts when an employee leaves the org; HR should alert IT before termination or ASAP.

Other Definitions

Web server: comp that delivers a web page on request. Each has an IP address. Any comp can be turned into a web server by installing the right software and connecting it to the internet. Web hosting service: org that maintains a no of web servers and provides fee-paying customers w/ the space to maintain their websites. Wi-Fi: set of stds for wireless local area networks (LANs). The Wi-Fi Alliance is a global nonprofit created in 1999 w/ the goal of driving adoption of a single worldwide-accepted std for high-speed wireless LANs. Web services: internet protocols for transporting data btw diff apps w/in a co's boundaries or across cos. XML can be used w/ web services to produce automated info exchange btw comps and software and to automate bus reporting processes.
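The XML-based info exchange mentioned above can be sketched with the standard library's `xml.etree.ElementTree`: one system emits a doc as XML, another parses it without sharing any internal code. The purchase-order tag names are illustrative, not a real web services schema:

```python
# Sketch of XML data exchange between applications: one system emits a
# purchase order as XML, another parses it. Tag names are illustrative.
import xml.etree.ElementTree as ET

xml_doc = """<purchaseOrder>
  <number>4501</number>
  <item sku="W-100">
    <qty>12</qty>
    <unitPrice>9.95</unitPrice>
  </item>
</purchaseOrder>"""

root = ET.fromstring(xml_doc)
po_number = root.findtext("number")
item = root.find("item")
total = int(item.findtext("qty")) * float(item.findtext("unitPrice"))
print(po_number)           # 4501
print(item.get("sku"))     # W-100
print(round(total, 2))     # 119.4
```

Because the data travels as self-describing tags, either side can change its internal software w/o breaking the exchange, which is what makes automated cross-co reporting practical.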
