Software Development Security


Which testing involves testing the functionality, performance, and protection of an application after a change takes place?

regression testing

What are tuples?

rows or records in a relational database

What is a trusted front-end as it relates to a database?

a front-end client software that provides security to the database by incorporating security features

Who develops a knowledge-based system (KBS) or expert systems?

a knowledge engineer and domain expert

In which attack does the attacker send spoofed SYN packets in which the source IP address and port are set to match the target machine's own address and an open port?

a land attack

Which virus creates many variants by modifying its code to deceive antivirus scanners?

a polymorphic virus

What is aggregation?

a process in which a user collects and combines information from various sources to obtain complete information about an object

What is a smurf attack?

a type of denial-of-service (DoS) attack that uses spoofed broadcast ping messages to flood a target system

What is the final step in authorizing a system for use in an environment?

accreditation

What is the primary function of the Constructive Cost Model (COCOMO)?

cost estimation

What does the acronym DDL denote?

data definition language

What does the acronym DBMS denote?

database management system

Which database feature limits user and group access to certain information based on the user privileges and the need to know?

database views
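The restriction that views provide can be sketched with Python's built-in sqlite3 module. The deck names no specific DBMS, so the table, column, and view names below are hypothetical; the point is that users queried through the view never see the sensitive column.

```python
import sqlite3

# A view exposing only non-sensitive columns: users granted access to
# the view can list employees but cannot see salaries.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")
con.execute("INSERT INTO employees VALUES (1, 'Alice', 90000), (2, 'Bob', 80000)")
con.execute("CREATE VIEW employee_directory AS SELECT id, name FROM employees")

rows = con.execute("SELECT * FROM employee_directory").fetchall()
print(rows)  # [(1, 'Alice'), (2, 'Bob')] -- the salary column is not exposed
```

In a full DBMS, access to the base table would be revoked and only the view granted, which is how views implement need-to-know restrictions.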

What is the process of removing flaws from a software program during its development?

debugging

What are the four phases of NIACAP?

definition, verification, validation, and post accreditation

What does the acronym DoS denote?

denial of service

What does the acronym DDoS denote?

distributed denial of service

Which communication mechanism allows direct communications between two applications using interprocess communication (IPC)?

dynamic data exchange (DDE)

Which type of attack enables an intruder to capture and modify data traffic by rerouting the traffic from a network device to the intruder's computer?

network address hijacking

Which technique is used to prevent repetitive information from appearing in a database?

normalization

Which database is designed to handle images, audio, documents, and video?

object-oriented database (OODB)

What does the acronym OLTP denote?

online transaction processing

What does the acronym SDLC denote?

system development life cycle

Which three SQL commands are used to implement access control on database objects?

the GRANT, DENY, and REVOKE commands

Which SQL command is used to retrieve data from a database table?

the SELECT command

How is assurance achieved?

using verification and validation

What is accreditation?

a process that involves a formal acceptance of the product and its responsibility by management

Which type of attack involves flooding a recipient email address with identical e-mails?

a spamming attack

Which attack uses clients, handlers, agents, and targets?

a distributed denial of service (DDoS) attack

Which attack is an extension of the denial-of-service (DoS) attack and uses multiple computers?

a distributed denial-of-service (DDoS) attack

What does the acronym CMM denote?

Capability Maturity Model

Which control ensures that valid transactions are processed accurately and only once?

an application control

Which test design typically focuses on testing functional requirements?

black-box testing

Which technique is an expert system processing technique that uses if-then-else rules to obtain more data than is currently available?

forward-chaining

In which two modes does an expert system operate?

forward-chaining and backward-chaining
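The two modes can be sketched with a toy rule engine in Python. The rule base and facts here are invented for illustration; real expert systems use much richer knowledge representations.

```python
# Hypothetical rule base: (premises, conclusion) pairs of if-then rules.
RULES = [({"has_fur", "gives_milk"}, "mammal"),
         ({"mammal", "eats_meat"}, "carnivore")]

def forward_chain(facts):
    """Data-driven: fire rules repeatedly until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: justify the goal from raw facts via the rule base."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts) for p in premises)
               for premises, conclusion in RULES if conclusion == goal)

derived = forward_chain({"has_fur", "gives_milk", "eats_meat"})
print("carnivore" in derived)  # True: forward chaining reached the conclusion
print(backward_chain("carnivore", {"has_fur", "gives_milk", "eats_meat"}))  # True
```

Forward chaining starts from the data and works toward conclusions; backward chaining starts from a hypothesis and backtracks to see whether the facts support it.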

Which term describes a module's ability to perform its job without using other modules?

high cohesion

In which type of attack is a user connected to a different Web server than the one they intended to connect to?

hyperlink spoofing attack

Is it more difficult to detect malicious code segments in compiled code or in interpreted code?

in compiled code

In which step of a change control process is the change reported to the management?

in the last step

What are the five maturity levels defined by the Capability Maturity Model (CMM)?

initial, repeatable, defined, managed, and optimized

Which computer-aided software engineering (CASE) tool spans the complete life cycle of a software product?

integrated computer-aided software engineering (ICASE)

Which viruses are written in a macro language and typically infect documents created with applications such as Microsoft Office?

macro viruses

What is a Trojan horse?

malware that is disguised as a useful utility, but is embedded with malicious code to infect computer systems

Which control step in data warehousing ensures that the data is timely and valid?

monitoring the data purging plan

In which phase of the software development cycle is a blueprint of the software product developed on the basis of customer requirements?

prototyping

Which process in the system development life cycle (SDLC) can improve development time and save money by providing a proof of concept?

prototyping

What does the acronym RPC denote?

remote procedure call

Which database operation cancels any database changes from the current transaction and returns the database to its previous state?

rollback
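A rollback can be demonstrated with Python's built-in sqlite3 module; the table and values are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None          # manage transactions explicitly
con.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
con.execute("INSERT INTO accounts VALUES ('Alice', 100)")

con.execute("BEGIN")
con.execute("UPDATE accounts SET balance = 0 WHERE name = 'Alice'")
con.execute("ROLLBACK")             # cancel the current transaction

balance = con.execute("SELECT balance FROM accounts").fetchone()[0]
print(balance)  # 100 -- the update was undone
```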

Which activity is considered an integral part of all the phases of the system development life cycle?

security

Which type of integrity ensures that data types and rules are enforced?

semantic integrity

What are the three types of NIACAP accreditation?

site, type, and system

What must a router examine to protect against a mail bomb or spam attack?

the data portion of the packet

What is backward chaining?

the process of beginning with a possible solution and using the knowledge in the knowledge base to justify the solution based on the raw input data

What is a data warehouse?

a large repository created by combining data from multiple databases into a single database for analysis and reporting

What is used by a payroll application program to ensure integrity while recording transactions for an accounting period?

time and date stamps

What is the primary function of data definition language (DDL) in a structured query language (SQL)?

to define the schema of the database
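The DDL/DML split can be shown with Python's built-in sqlite3 module; the schema below is hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# DDL: define the schema -- tables, columns, and keys.
con.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    customer TEXT NOT NULL,
    total REAL)""")
# DML, by contrast, works with the data inside that schema.
con.execute("INSERT INTO orders (customer, total) VALUES ('Alice', 19.99)")
row = con.execute("SELECT customer, total FROM orders").fetchone()
print(row)  # ('Alice', 19.99)
```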

What is the purpose of the middle computer-aided software engineering (CASE) tool?

to develop detailed designs

What is the purpose of Authenticode used by the ActiveX technology of Microsoft?

to enforce security

What is the purpose of sandboxes in Java applets?

to enforce security

What is the purpose of database save points?

to allow a database to return to a known point before a system crash, making available the data recorded prior to the failure

Which database feature ensures that the entire transaction is executed to ensure data integrity?

two-phase commit

Which type of software testing examines the software's internal logical structure?

white-box testing

What is a data dictionary?

a database for system developers

What is an agent in a distributed computing environment?

a program that performs services in one environment on behalf of a principal in another environment

What is a check digit?

a digit, calculated from the other digits in a value, that serves as a single point of verification in a computerized application
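One widely used scheme is the Luhn mod-10 check digit; the sketch below is only one example of the idea, since many check-digit algorithms exist.

```python
def luhn_check_digit(number: str) -> int:
    """Compute the Luhn (mod-10) check digit for a string of digits."""
    total = 0
    # Walk right to left; double every second digit, subtracting 9
    # from any doubled value greater than 9.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 0:   # these positions get doubled once the check
            d *= 2       # digit is appended on the right
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

print(luhn_check_digit("7992739871"))  # 3 -- the classic Luhn example
```

A data-entry system recomputes the digit on input; a mismatch flags a transposition or typing error.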

Which component of Dynamic Data Exchange (DDE) enables two applications to share data?

interprocess communications (IPC)

Which database feature involves splitting the database into many parts?

partitioning

What are the seven phases of the system development life cycle?

project initiation, analysis and planning, system design specification, software development, installation and implementation, operations and maintenance, and disposal

What is the primary purpose of Trinoo and Tribal Flood Network (TFN)?

to launch a distributed denial of service (DDoS) attack

What are the two elements of assurance procedures?

verification and validation

What is CAPI?

Cryptographic application programming interface (CAPI) is an application programming interface that enables applications to access cryptographic (encryption) services

What does the acronym ODBC denote?

Open Database Connectivity

What are the steps in the change control process?

1. Make a formal request.
2. Analyze the request. This step includes developing the implementation strategy, calculating the costs of the implementation, and reviewing the security implications of implementing the change.
3. Record the change request.
4. Submit the change request for approval. This step involves getting approval of the actual change once all the work necessary to complete the change has been analyzed.
5. Make changes. The changes are implemented and the version is updated in this step.
6. Submit results to management. In this step, the change results are reported to management for review.

In which cost-estimating technique is each member in the group asked to provide their opinion on a piece of paper in confidence?

Delphi technique

What does the acronym XML denote?

Extensible Markup Language

Which functionality does forward chaining mode provide in an expert system?

It acquires data and comes to a conclusion based on that data.

Which functionality does backward chaining mode provide in an expert system?

It backtracks to determine if a given hypothesis is valid.

What is the purpose of atomicity in an online transaction processing (OLTP) environment?

It ensures that only complete transactions take place.

What is the purpose of normalization?

It ensures that the attributes in a table depend only on the primary key.
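The benefit can be sketched with Python's built-in sqlite3 module, using hypothetical tables: the customer's city depends on the customer, not on the order's primary key, so normalization moves it into its own table and an update then happens in exactly one place.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Normalized design: city lives once in customers; orders reference it by key.
con.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Alice', 'Boston');
INSERT INTO orders VALUES (10, 1, 25.0), (11, 1, 40.0);
-- Updating the city in one place updates it for every order:
UPDATE customers SET city = 'Denver' WHERE id = 1;
""")
rows = con.execute("""SELECT o.order_id, c.city FROM orders o
                      JOIN customers c ON c.id = o.customer_id
                      ORDER BY o.order_id""").fetchall()
print(rows)  # [(10, 'Denver'), (11, 'Denver')]
```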

What is the purpose of risk analysis?

It identifies the potential threats and vulnerabilities based on the environment in which the product will perform data processing, the sensitivity of the data required, and the mechanisms that should be a part of the product as a countermeasure.

What does a tunneling virus do?

It installs itself under the anti-virus system and intercepts any calls that the anti-virus system makes to the operating system.

What should you do to ensure the stability of the test environment?

Separate the test and development environments.

Which variable is used to enhance database performance by allowing a single statement to execute with multiple different values?

a bind variable
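The `?` placeholders in Python's built-in sqlite3 module behave like bind variables: one prepared statement is reused with different values supplied at run time. The table and names are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# One statement, many value sets -- the database can reuse the parsed
# statement instead of re-parsing a new SQL string for each row.
stmt = "INSERT INTO users (name) VALUES (?)"
con.executemany(stmt, [("Alice",), ("Bob",), ("Carol",)])

count = con.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 3
```

As a side benefit, binding values instead of concatenating them into SQL strings also prevents SQL injection.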

Which error occurs when the length of the input data is more than the length that processor buffers can handle?

a buffer overflow

What enables a database to start processing at a designated place when a database detects an error?

a checkpoint

What is a botnet?

a network of compromised computers that is created by a hacker when malware is copied to the computers, allowing the hacker to take control of them

Which category of application must remain operational for the organization to survive?

a critical application

Which attack requires that the hacker compromise as many computers as possible to initiate the attack?

a distributed denial of service (DDoS) attack

What is a piece of software code embedded intentionally in the software to trap intruders?

a pseudo flaw

What is a foreign key in a relational database?

a value that exists in a table that matches the value of the primary key on another table
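Foreign-key enforcement can be demonstrated with Python's built-in sqlite3 module (which requires a pragma to turn enforcement on); the tables are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite leaves enforcement off by default
con.executescript("""
CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE employees (
    id INTEGER PRIMARY KEY,
    dept_id INTEGER REFERENCES departments(id));
INSERT INTO departments VALUES (1, 'Security');
""")
con.execute("INSERT INTO employees VALUES (1, 1)")       # OK: department 1 exists

err = None
try:
    con.execute("INSERT INTO employees VALUES (2, 99)")  # no such department
except sqlite3.IntegrityError as e:
    err = str(e)
print(err)  # FOREIGN KEY constraint failed
```

Rejecting the second insert is referential integrity in action: every foreign key must reference a primary key that actually exists.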

Which malicious software infects the system without relying upon other applications for its execution?

a worm

Which testing ensures that the code meets customer requirements?

acceptance testing

What is a hostile applet?

an active content module used to exploit system resources

What is a spoofing attack?

an attack in which the source IP address in an IP datagram is modified to imitate the IP address of a packet originating from an authorized source

Which procedures ensure that the control mechanisms implement the security policy of an information system by determining the extent to which security policy will be applied to an information system?

assurance procedures

Which property of online transaction processing (OLTP) ensures that the entire transaction is cancelled if one part of the transaction fails?

atomicity

Which technique works backwards by analyzing the list of the goals identified and verifying the availability of data to reach a conclusion on any goal?

backward-chaining

What are two examples of input validation errors?

buffer overflow and boundary condition errors

What error condition arises because data is not checked before input to ensure that it has an appropriate length?

buffer overflow errors
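Python itself bounds-checks memory, so the fixed-length buffer below is only a model of the C-style problem; the point is the length check that vulnerable code omits.

```python
# Model a C-style fixed buffer with a fixed-length bytearray.
BUF_SIZE = 8
buffer = bytearray(BUF_SIZE)

def copy_input(data: bytes) -> None:
    # The check missing from vulnerable code: validate length BEFORE copying.
    if len(data) > BUF_SIZE:
        raise ValueError("input exceeds buffer length")
    buffer[:len(data)] = data

copy_input(b"ok")                # fits within the buffer
error = None
try:
    copy_input(b"A" * 1000)      # would overflow a fixed C buffer
except ValueError as e:
    error = str(e)
print(error)  # input exceeds buffer length
```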

Which database operation saves data that is stored in memory to the database?

checkpoint

Which database operation finalizes any database changes from the current transaction, making the changes available to other users?

commit

Which programs translate programming language into instructions that can be executed by computers?

compilers and interpreters

What does the acronym CASE denote?

computer-aided software engineering

Which type of management is about tracking the actual change?

configuration management

Of which access control mechanism are database views examples?

content-dependent access control

Which file or files could be used to violate user privacy by creating a map of where the user has been on the Internet?

cookies

Which text file contains information regarding the previous HTTP connections and is stored by the Web server on the client's computer hard disk?

cookies

Of what are the Delphi technique, expert judgment, and function points examples?

cost-estimating techniques that are used during the project planning stage

Which database component is responsible for creating and deleting table relationships?

data definition language (DDL)

Which type of software program maintains and provides controlled access to data components stored in rows and columns on a table?

database management system (DBMS)

Which method is used to prevent users with a lower level of access from inferring information of a higher level from the databases?

polyinstantiation

Which type of integrity ensures that each foreign key references a primary key that actually exists?

referential integrity

Which database operation creates a logged point to which the database can be restored?

savepoint
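Savepoints can be demonstrated with Python's built-in sqlite3 module; the table and savepoint name are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None          # manage transactions explicitly
con.execute("CREATE TABLE log (msg TEXT)")

con.execute("BEGIN")
con.execute("INSERT INTO log VALUES ('step 1')")
con.execute("SAVEPOINT before_step2")       # logged point to restore to
con.execute("INSERT INTO log VALUES ('step 2')")
con.execute("ROLLBACK TO before_step2")     # undo only the work after the savepoint
con.execute("COMMIT")

rows = [r[0] for r in con.execute("SELECT msg FROM log")]
print(rows)  # ['step 1']
```

Unlike a full rollback, rolling back to a savepoint keeps the earlier part of the transaction intact.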

What does the acronym SQL denote?

structured query language

What is unit testing?

the debugging performed by the programmer while coding instructions

What is prototyping?

the process of putting together a working model

What is certification?

the process of technically evaluating and reviewing a product to ensure that it meets the stated security requirements

What is forward chaining?

the reasoning approach that can be used when there is a small number of solutions relative to the number of inputs

Which testing technique focuses only on testing the design and internal logical structure of the software product rather than its functionality?

white-box testing

What is XML?

an interface language used to arrange data so that it can be shared by Web technologies

Your organization has several diskless computer kiosks that boot via optical media located in the office lobby. Recently, users reported that the diskless computers have been infected with a virus. What should you do to ensure the virus is removed? Reboot the server to which the diskless computers connect. Reboot the diskless computers. Remotely launch an anti-virus program on the diskless computers. Launch an anti-virus program on the diskless computers via a USB flash drive.

Answer: Reboot the diskless computers. Explanation: To ensure that a virus is removed from a diskless computer, you should simply reboot the computer. Because the computer has no hard drive or full operating system, the virus exists only in the computer's memory, so rebooting is the easiest way to remove it. None of the other options is correct.

Which statement correctly defines the multipart virus? A multipart virus is coded in macro language. A multipart virus can change some of its characteristics while it replicates. A multipart virus can hide itself from antivirus software by distorting its code. A multipart virus can infect both executable files and boot sectors of hard disk drives.

Answer: A multipart virus can infect both executable files and boot sectors of hard disk drives. Explanation: A multipart virus can infect both executable files and boot sectors of hard disk drives. The multipart virus resides in the memory and then infects boot sectors and executable files of the computer system. Macro viruses are platform independent and are typically used with Microsoft Office products. Macro viruses are programs written in Word Basic, Visual Basic, and VBScript. Macro viruses pose a major threat because the simplicity of the underlying language makes them easy to develop. A stealth virus hides the changes it makes to system files and boot records, making it difficult to detect its presence. A stealth virus maintains a copy of a file before infecting it and presents the original copy to the monitoring software. Therefore, a stealth virus modifies the actual file and makes it difficult to detect the presence of the virus. A self-garbling virus can hide itself from antivirus software by distorting its own code. When a self-garbling virus spreads, it jumbles and garbles its own code to prevent the antivirus software from detecting its presence. A small part of the virus code later decodes the jumbled part to obtain and subsequently execute the rest of the virus code. The ability of the self-garbling virus to garble its own code makes it difficult for an antivirus program to detect its presence. At some point during the patch application process, a file may become infected with a virus. When this is discovered, you will need to recover the file by replacing the existing, infected file with an uninfected backup copy. This may possibly result in an older version of the file being restored that does not have all the patches applied.

During the recent development of a new application, the customer requested a change. You must implement this change according to the change control process. What is the first step you should implement? Record the change request. Analyze the change request. Acquire management approval. Submit the change results to the management.

Answer: Analyze the change request. Explanation: You should analyze the change request. The change control procedures ensure that all modifications are authorized, tested, and recorded. Therefore, these procedures serve the primary aim of auditing and review by the management. The necessary steps in a change control process are as follows:

1. Make a formal request.
2. Analyze the request. This step includes developing the implementation strategy, calculating the costs of the implementation, and reviewing the security implications of implementing the change.
3. Record the change request.
4. Submit the change request for approval. This step involves getting approval of the actual change once all the work necessary to complete the change has been analyzed.
5. Make changes. The changes are implemented and the version is updated in this step.
6. Submit results to management. In this step, the change results are reported to management for review.

A stringent change management process ensures that all changes to production systems are implemented and recorded, and enforces separation of duties. For instance, in a software development environment, changes made to production software programs are performed by operational staff rather than by the software programmers who are responsible for coding the applications for clients. Such a process ensures that changes are implemented in the proper manner and that the process is documented. Change management is about the decision to make the change. Configuration management is not the same as change management. Configuration management is about tracking the actual change. It is the discipline of identifying the components of a continually evolving system for the purposes of controlling changes to those components and maintaining integrity and traceability throughout the life cycle.
Configuration management controls the changes that take place in hardware, software, and operating systems by assuring that only the proposed and approved system changes are implemented. In configuration management, a configuration item is a component whose state is to be recorded and against which changes are to be progressed. In configuration management, a software library is a controlled area accessible only to approved users who are restricted to the use of an approved procedure. Configuration control is controlling changes to the configuration items and issuing versions of configuration items from the software library. Configuration management includes configuration control, configuration status accounting, and configuration audit.

Which statement is true of programming languages? The compiler translates one command at a time. Assemblers translate assembly language into machine language. High cohesion and high coupling represent the best programming. A high-level programming language requires more time to code instructions.

Answer: Assemblers translate assembly language into machine language. Explanation: Assemblers translate assembly language into machine language. Interpreters translate one command at a time, whereas compilers translate large sections of program instructions. A cohesive module is a piece of software code that does not depend, or depends less, on other software modules to execute. High cohesiveness of a software program represents the best programming due to reduced dependency levels. Coupling refers to the level of interconnection required between various software modules in a program to perform a specific task. Lower coupling indicates less dependence on other programs and higher performance. High-level languages require less time to code a program compared to low-level programming languages. This is because high-level languages use objects that act as independent functional modules with specific functionality, reducing the number of programmers involved in coding application instructions.

You need to view events that are generated based on your auditing settings. Which log in Event Viewer should you view? Application Security System DNS

Answer: Security Explanation: You should view the Security log in Event Viewer to view events that are generated based on your auditing settings. None of the other logs records this information. The Application log contains events logged by applications. The System log contains events logged by computer system components. The DNS log contains events on host name registrations.

Which statement correctly describes Bind variables in structured query language (SQL)? Bind variables implement database security. Bind variables are used to normalize a database. Bind variables are used to replace values in SQL commands. Bind variables are used to enhance the performance of the database.

Answer: Bind variables are used to enhance the performance of the database. Explanation: A bind variable is used to enhance database performance by allowing a single statement to execute with multiple different values. Bind variables are placeholders for values sent to a database server in a SQL query. Bind variables permit the reuse of previously issued SQL statements by executing a prepared set of instructions with the parameters provided at runtime. Bind variables do not implement security in a database management system; database security is implemented by data control language (DCL). Bind variables are not used to normalize a database. Database normalization is a process of deleting duplicate data from a database and ensures that the attributes in a table depend only on the primary key. Bind variables are not used to replace the values in SQL commands; substitute variables are used to replace values in SQL*Plus commands.

Which tool assists in application development design layout as a part of application development life cycle? Aggregation CASE Delphi Spiral

Answer: CASE Explanation: Computer-aided software engineering (CASE) refers to the use of software tools to assist in the development and maintenance of application software. CASE tools support business and functional analysis, system design, code storage, compilers, translation, and testing software. Middle CASE products are used for developing detailed designs, such as screen and report layouts. Aggregation is a database security concern that arises when a user does not have access to sensitive data, but can access portions of it. This loophole enables a user to aggregate the data and use it to deduce a fact. Delphi is a technique of expert judgment that ensures each member in a group decision-making process provides an honest opinion on the subject matter in question. Group members are asked to provide their views on the subject in writing. The papers are collected, and a final decision is made based on the majority. The Delphi technique is generally used either during the risk assessment process or to estimate the cost of a software development project. Spiral is a model based on analyzing risk, building prototypes, and simulating application tasks during the various phases of the development cycle. The Spiral model is typically a meta model that incorporates a number of software development models; its basic concept is based on the Waterfall model. Software engineering is defined as the science and art of specifying, designing, implementing, and evolving programs, documentation, and operating procedures whereby computers can be made useful to man.

Which statement correctly defines dynamic data exchange (DDE)? DDE is an interface to link information between various databases. DDE allows multiple applications to share and exchange the same set of data. DDE is a graphical technique that is used to track the progress of a project over a period of time. DDE is a software interface that enables communication between an application and a database.

Answer: DDE allows multiple applications to share and exchange the same set of data. Explanation: The dynamic data exchange (DDE) process enables direct communication between two applications using interprocess communications (IPC). Based on the client/server model, DDE allows two programs to exchange commands between themselves. The source of the data is referred to as the server, and the system accessing the data is referred to as the client. In the client/server model, the server is the data storage resource and is responsible for data backups and protection/maintenance of the database. The client in the client/server model provides an extensive user interface.

Which spyware technique inserts a dynamic link library into a running process's memory? SMTP open relay DLL injection buffer overflow cookies

Answer: DLL injection Explanation: DLL injection is a spyware technique that inserts a dynamic link library (DLL) into a running process's memory. Windows was designed to use DLL injection to make programming easier for developers. Some of the standard defenses against DLL injection include application and operating system patches, firewalls, and intrusion detection systems. SMTP open relay is an e-mail feature that allows any Internet user to send e-mail messages through the SMTP server. SMTP relay often results in an increased amount of spam. SMTP relay is designed into many e-mail servers to allow them to forward e-mail to other e-mail servers. Buffer overflow occurs when the length of the input data is longer than the length processor buffers can handle. Buffer overflow is caused when input data is not verified for appropriate length at the time of the input. Insufficient bounds checking causes buffer overflows. Buffer overflow and boundary condition errors are examples of input validation errors. Cookies store information on a Web client for future sessions with a Web server. It is used to provide a persistent, customized Web experience for each visit and to track a user's browser habits. The information stored in a cookie is not typically encrypted and might be vulnerable to hacker attacks.

You need to view events on host name registrations. Which log in Event Viewer should you view? Application Security System DNS

Answer: DNS Explanation: You should use the DNS log in Event Viewer to view events on host name registrations. You should log DNS entries so that you can watch for unauthorized DNS clients or servers. Without a DNS log, you would be unable to discover how long an entry was being used. None of the other logs will contain this type of information. The Application log contains events logged by applications. The Security log contains events based on the auditing configuration. Only administrators can configure and view auditing. The System log contains events logged by computer system components. Auditing deters perpetrators' attempts to bypass the system protection mechanisms, reviews patterns of access to individual objects, and discovers when a user assumes a functionality with privileges greater than his own.

Which type of virus installs itself under the anti-virus system and intercepts any calls that the anti-virus system makes to the operating system? script virus meme virus boot sector virus tunneling virus

Answer: tunneling virus Explanation: A tunneling virus installs itself under the anti-virus system and intercepts any calls that the anti-virus system makes to the operating system. A script virus includes lines of instructions that are written in a scripting language, like VBScript or JScript. The code in the script carries out malicious activities, such as copying itself to everyone in your contact list. A meme virus is not really a virus. Any e-mail message that is continually forwarded around the Internet is considered a meme virus. While meme viruses do not truly "infect" a system, they cause performance degradation just from the number of times they are forwarded. A boot sector virus infects the boot sector of a hard drive, usually moving data within the boot sector or overwriting the sector with new information.

Your company implements several databases. You are concerned with the security of the data in the databases. Which statement is correct for database security? Data identification language implements security on data components. Data manipulation language (DML) implements access control through authorization. Bind variables provide access control through implementing granular restrictions. Data control language (DCL) implements security through access control and granular restrictions.

Answer: Data control language (DCL) implements security through access control and granular restrictions. Explanation: Data control language (DCL) manages the access control to records in a database. DCL defines the granular user permissions to various data objects and implements database security. Examples of data control language commands are GRANT, DENY, and REVOKE, which implement granular permissions for different users and groups. For example, John might have read access to file A, and Matt might have read and update access to the same file. Granular security is based on the need to know and least privilege. Typically, the job responsibilities and roles of users determine their access to various data components. Data identification language is an invalid option because it is not a valid database management system language. Data manipulation language (DML) does not provide access control through authorization. DML refers to a suite of computer languages used by database users to retrieve, insert, delete, and update data in a database. DML provides users the ability to store, retrieve, and manipulate data according to the instructions issued. Bind variables do not provide access control. Bind variables are used to enhance database performance by allowing a single statement to execute with multiple different values. Bind variables are placeholders for values sent to a database server in a SQL query. They permit the reuse of previously issued SQL statements by executing a prepared set of instructions with the parameters provided at runtime. Bind variables do not implement security in a database management system.

Which statement best describes data normalization? Data normalization assists in implementing polyinstantiation. Data normalization improves the efficiency and performance of a database. Data normalization implements data fragmentation and provides faster access. Data normalization ensures that attributes in a database table depend on the primary key.

Answer: Data normalization ensures that attributes in a database table depend on the primary key. Explanation: Data normalization ensures that attributes in a database table depend only on the primary key. Normalization is required to prevent repetitive information from appearing in a database. This makes the database consistent and easy to maintain. Normalization is the process of eliminating redundant data from a relational database management system and storing the data in a single location. Normalization provides links to the data components whenever needed. The tasks involved in normalizing a database are as follows: - Segregating related groups into separate tables. - Deleting redundant data from all the tables in a database. - Ensuring that there is only one primary key per table and that all the attributes can be referenced by using this primary key. The process of normalization should be carried out carefully because dividing the data into multiple tables can break the consistency of retrieving information from the database and result in performance degradation. Database denormalization is the process of adding redundant information back into tables to improve the read performance of the database. Data denormalization is the opposite of the data normalization process. The purpose of data denormalization is to increase processing efficiency. Polyinstantiation is a method used to ensure that users with a lower access level are not able to access and modify data categorized for a higher level of access in a multi-level database. When polyinstantiation is implemented, two objects are created by using the same primary key. One object is filled with incorrect information and is deemed unclassified, and the other object contains the original classified information. When a user with lower-level privileges attempts to access the object, the user is directed to the object containing the incorrect information. 
Polyinstantiation is used to conceal classified information that exists in a database and to fool intruders.
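The normalization idea above can be sketched with a small illustrative schema (the tables, names, and values are assumptions for demonstration only): customer details live in one table, and each order refers to the customer by primary key instead of repeating the address in every order row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT,
    city        TEXT
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(customer_id),
    amount      REAL
);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Ann', 'Boston')")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(100, 1, 25.0), (101, 1, 40.0)])

# Changing the customer's city touches a single row; a denormalized table
# would require updating every order row that repeats the address.
conn.execute("UPDATE customers SET city = 'Denver' WHERE customer_id = 1")
rows = conn.execute("""
    SELECT o.order_id, c.city FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    ORDER BY o.order_id
""").fetchall()
print(rows)  # [(100, 'Denver'), (101, 'Denver')]
```

The single UPDATE keeping both orders consistent is exactly the maintenance benefit the explanation describes; the JOIN is the cost that denormalization trades against it.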

You have configured auditing for several security events on your Windows Server 2003 network. The Event Viewer logs are backed up on a daily basis. You need to ensure that security events are only cleared manually by an administrator. What should you do? Enable the Do not overwrite events option for the Security log. Enable the Do not overwrite events option for the Application log. Configure the Maximum event log size option for the Security log. Configure the Maximum event log size option for the Application log.

Answer: Enable the Do not overwrite events option for the Security log. Explanation: You should enable the Do not overwrite events option for the Security log. The Security log of Event Viewer contains all security events based on your auditing configuration. The Do not overwrite events option configures the log so that events are only cleared manually by an administrator. You should not enable the Do not overwrite events option for the Application log. The Application log does not contain auditing-related events. You should not configure the Maximum event log size option for the Security log. The Maximum event log size option configures the maximum size of the event log. It does not ensure that security events are only cleared manually by an administrator. You should not configure the Maximum event log size option for the Application log. The Application log does not contain auditing-related events.

As a security administrator, you have recently learned of an issue with the Web-based administrative interface of your Web server. You want to provide a countermeasure to prevent attacks via the administrative interface. All of the following are countermeasures to use in this scenario, EXCEPT: Remove the administrative interfaces from the Web server. Use a stronger authentication technique on the Web server. Control which systems are allowed to connect to and administer the Web server. Hard-code the authentication credentials into the administrative interface links.

Answer: Hard-code the authentication credentials into the administrative interface links. Explanation: You should NOT hard-code the authentication credentials into the administrative interface links. This would make it much easier for an attacker to access your system because the administrative credentials are included. You should also disable the "remember password" option. To provide a countermeasure to prevent attacks via the Web-based administrative interface, you could implement any of the following measures: - Remove the administrative interface from the Web server. - Use a stronger authentication technique on the Web server. - Control which systems are allowed to connect to and administer the Web server.

All of the following are countermeasures for session management attacks, EXCEPT: Implement randomized session IDs. Implement time stamps or time-based validation. Implement pre- and post-validation controls. Encrypt cookies that include information about the state of the connection.

Answer: Implement pre- and post-validation controls. Explanation: You should not implement pre- and post-validation controls as a countermeasure for session management attacks. Pre- and post-validation controls are countermeasures to use in parameter validation attacks. Countermeasures for session management attacks include the following: - Implement randomized session IDs. - Implement time stamps or time-based validation. - Encrypt cookies that include information about the state of the connection.
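The first countermeasure listed above, randomized session IDs, can be sketched with Python's standard secrets module; the ID length is an illustrative choice:

```python
import secrets

# secrets draws from the operating system's CSPRNG, so session IDs cannot
# be guessed or incremented by an attacker the way sequential IDs can.
def new_session_id(nbytes: int = 32) -> str:
    return secrets.token_hex(nbytes)

sid = new_session_id()
print(len(sid))  # 64 hex characters for 32 random bytes
```

Pairing such an ID with a server-side timestamp would also cover the time-based-validation countermeasure from the list.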

Which statement is true of network address hijacking? It is used for identifying the topology of the target network. It uses ICMP echo messages to identify the systems and services that are up and running. It allows the attacker to reroute data traffic from a network device to a personal computer. It involves flooding the target system with malformed fragmented packets to disrupt operations.

Answer: It allows the attacker to reroute data traffic from a network device to a personal computer. Explanation: Network address hijacking allows an attacker to reroute data traffic from a network device to a personal computer. Also referred to as session hijacking, network address hijacking enables an attacker to capture and analyze the data addressed to a target system. This allows an attacker to gain access to critical resources and user credentials, such as passwords, and to critical systems of an organization. Session hijacking involves assuming control of an existing connection after the user has successfully created an authenticated session. A scanning attack is used to identify the topology of the target network. Also referred to as network reconnaissance, scanning involves identifying the systems that are up and running on the target network and verifying the ports that are open, the services that a system is hosting, the type of operating system, and the applications running on a target host. Scanning is the initial process of gathering information about a network to identify vulnerabilities and possible exploits before an actual attempt to commit a security breach takes place. A smurf attack uses ICMP echo messages to identify the systems and services that are up and running. It is a denial-of-service (DoS) attack that uses spoofed broadcast ping messages to flood a target system. In a smurf attack, the attacker sends a large number of ICMP echo packets, with a spoofed source IP address set to that of the target host, to IP broadcast addresses. This results in the target host being flooded with echo replies from the entire network, causing the system to either freeze or crash. Ping of death, bonk, and fraggle are other examples of DoS attacks. In a teardrop attack, the attacker uses a series of fragmented IP packets, causing the system to either freeze or crash while the target host is reassembling the packets. 
A teardrop attack is primarily based on the fragmentation implementation of IP. To reassemble the fragments into the original packet at the destination, the host inspects incoming packets to ensure that they belong to the same original packet. Because the fragments are malformed, the process of reassembling them causes the system to either freeze or crash.

Which function is provided by remote procedure call (RPC)? It identifies components within a distributed computing environment (DCE). It provides code that can be transmitted across a network and executed remotely. It provides an integrated file system that all users in the distributed environment can share. It allows the execution of individual routines on remote computers across a network.

Answer: It allows the execution of individual routines on remote computers across a network. Explanation: Remote procedure call (RPC) allows the execution of individual routines on remote computers across a network. It is used in a distributed computing environment (DCE). Globally unique identifiers (GUIDs) and universal unique identifiers (UUIDs) are used to identify components within a DCE. They uniquely identify users, resources, and other components in the environment. A UUID is used in a Distributed Computing Environment. Mobile code is code that can be transmitted across a network and executed remotely. Java and ActiveX code downloaded into a Web browser from the World Wide Web (WWW) are examples of mobile code. A distributed file service (DFS) provides an integrated file system that all users in the distributed environment can share. A directory service ensures that services are made available only to properly designated entities.
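The RPC idea, invoking a routine that actually executes on a remote machine, can be sketched with Python's standard-library XML-RPC modules. The host, port selection, and the add() routine are illustrative assumptions; real DCE RPC uses different protocols but the same calling model:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Start a small RPC server in a background thread; port 0 asks the OS
# for any free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client calls add() as if it were a local routine, but the body
# executes in the server process and the result is marshalled back.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
print(result)  # 5
server.shutdown()
```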

Which statement correctly defines the object-oriented database model? It logically interconnects remotely located databases. It is a hybrid between relational and object-based databases. The relationship between data elements is in the form of a logical tree. It can store data that includes multimedia clips, images, video, and graphics.

Answer: It can store data that includes multimedia clips, images, video, and graphics. Explanation: An object-oriented database is used to store multiple types of data, such as images, audio, video, and documents. The data elements and the different components are referred to as objects. These objects are used to create dynamic data components. In an object-oriented database, the objects can be dynamically created according to the requirements and the instructions executed. The object-oriented model provides ease of code reuse, easier analysis, and reduced maintenance. The distributed database model implies multiple databases that are situated at remote locations and are logically connected. In a distributed database model, databases are logically connected to each other to ensure that the transition from one database to another is transparent to the users. The logically connected databases appear as a single database to the users. The distributed database model allows different databases situated at remote locations to be managed individually by different database administrators. This database model provides scalability features, such as load balancing and fault tolerance. A hybrid between an object-oriented database and a relational database is known as an object-relational database. This type of database inherits properties from both relational and object-oriented databases. An object-relational database allows developers to integrate the database with their own custom data types and methods. In a hierarchical database, the data is organized in a logical tree structure rather than by using rows and columns. Records and fields are related to each other in a parent-child tree structure. A hierarchical database tree structure can have branches and leaves where leaves are the data fields and the data is accessed through well-defined access paths by using record groups that act as branches. A hierarchical database is used where one-to-many relationships exist.

Which statement correctly describes a Trojan horse? It is a social engineering technique. To be executed, it depends upon other programs. It embeds malicious code within useful utilities. It modifies IP addresses in an IP packet to imitate an authorized source.

Answer: It embeds malicious code within useful utilities. Explanation: A Trojan horse is malware that is disguised as a useful utility but is embedded with malicious code. When the disguised utility is run, the Trojan horse carries out malicious operations in the background and provides the useful utility on the front end. Trojan horses use covert channels to carry out malicious operations. Malicious activities may include deleting system files or planting a backdoor into a system for later access. Trojan horses are typically installed as a rogue application in the background to avoid suspicion by the user. A Trojan horse is not a social engineering technique. Social engineering involves tricking another person into sharing confidential information by posing as an authorized individual. Social engineering is a non-technical intrusion that relies heavily on human interaction and typically involves tricking other people into breaking normal security procedures. To be executed, a Trojan horse does not depend on other application programs. A virus is malicious software (malware) that relies upon other application programs to execute and infect a system. The main criterion for classifying a piece of executable code as a virus is that it spreads itself through applications running on a host system. A virus infects an application by replicating itself. A Trojan horse does not modify the IP address in an IP packet to imitate an authorized source. IP spoofing refers to modification of a source IP address in an IP datagram to imitate the IP address of a packet originating from an authorized source. This results in the target computer setting up communication with the attacker's computer. This process provides access to restricted resources in the target computer. In a spoofing attack, which is also referred to as a masquerading attack, a person or program is able to masquerade successfully as another person or program. 
A man-in-the-middle attack is an example of a spoofing as well as a session hijacking attack. Other types of spoofing attacks are e-mail spoofing and Web spoofing.

Which statement correctly defines an application control? It is a mechanism to control user access to resources. It determines controls that are functioning within an operating system. It ensures that valid transactions are processed accurately and only once. It ensures that a system performs with a high throughput without any time lag.

Answer: It ensures that valid transactions are processed accurately and only once. Explanation: Application controls ensure that valid transactions are processed accurately and only once. If there is any problem during a transaction, the whole transaction is rolled back. Application controls define the procedures used for user data input, processing, and resultant data output. They operate on the input to a computing system, on the data being processed, and on the output of the system. An example of an input control is a customer providing bank account credentials before making transactions. Output controls can be printed copies of sensitive documents assigned to users after the authorization process. Output controls are implemented to protect the output's confidentiality. Incorrect values can cause mistakes in data processing and be evidence of fraud. Application controls are transparent to front-end applications. An application control involves adding security features during the development of the application. The security features of an operating system might be of little use in the event of existing vulnerabilities in the application. Implementing security as a part of application controls results in cheaper and more effective security because you do not need to make many changes to the code later. Detective application controls include cyclic redundancy checks, structured walkthroughs, and hash totals. Access controls define the mechanism and the need for limiting access to resources only to authorized users. Review of software controls determines how the controls are functioning within an operating system. Application controls implement and monitor the flow of information at each stage of data processing and do not affect application throughput and performance. They limit end users of applications in such a way that only particular screens are visible. Particular uses of the application can be recorded for audit purposes.
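The "processed accurately and only once" requirement can be sketched as a small input control plus a duplicate check; the transaction IDs, the validation rule, and the in-memory ledger are illustrative assumptions, not a specific product's API:

```python
# Record of transaction IDs already applied, so a replayed transaction
# is recognized and ignored rather than posted twice.
processed: dict[str, float] = {}

def apply_transaction(txn_id: str, amount: float, balance: float) -> float:
    if amount <= 0:                 # input control: reject invalid data
        raise ValueError("amount must be positive")
    if txn_id in processed:         # duplicate: process only once
        return balance
    balance += amount
    processed[txn_id] = amount
    return balance

bal = apply_transaction("t1", 50.0, 0.0)
bal = apply_transaction("t1", 50.0, bal)  # replayed: no double posting
print(bal)  # 50.0
```

A production system would persist the duplicate-check state transactionally alongside the balance, so the check and the update roll back together.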

You are the security administrator for your organization. A user in the IT department informs you that a print server was recently the victim of a teardrop attack. Which statement correctly defines the attack that has occurred? It involves the use of invalid packets that have the same source and destination addresses. It floods the target host with spoofed SYN packets and causes the host to either freeze or crash. It involves the use of malformed fragmented packets and causes the target system to either freeze or crash. It involves taking advantage of the oversized ICMP packets and causing the system to either freeze or crash.

Answer: It involves the use of malformed fragmented packets and causes the target system to either freeze or crash. Explanation: In a teardrop attack, the attacker uses a series of malformed fragmented Internet Protocol (IP) packets and causes the system to either freeze or crash while the packets are being reassembled by the target host. A teardrop attack is primarily based on the fragmentation implementation of IP. To reassemble the fragments into the original packet at the destination, the host inspects incoming packets to ensure that they belong to the same original packet. Because the fragments are malformed, the process of reassembling the packets causes the system to either freeze or crash. In a land attack, invalid packets having the same source and destination addresses are used. A land attack involves sending a spoofed TCP SYN packet to an open port on the target host, with that host's IP address and port as both the source and the destination. The land attack causes the system to freeze or crash because the machine continuously replies to itself. In a SYN flood attack, the attacker floods the target with spoofed IP packets and causes it to either freeze or crash. The Transmission Control Protocol (TCP) uses synchronize (SYN) and acknowledgment (ACK) packets to establish communication between two host computers. The exchange of the SYN, SYN-ACK, and ACK packets between two host computers is referred to as handshaking. The attacker floods the target computer with a series of SYN packets, to each of which the target host computer replies and then allocates resources to establish a connection. Because the source IP address is spoofed, the target host computer never receives a valid response in the form of ACK packets from the attacking computer. When the target computer receives many such SYN packets, it runs out of resources to establish connections with legitimate users and becomes unreachable for processing of valid requests. 
In a denial-of-service (DoS) attack, the target computer is flooded with numerous oversized Internet Control Message Protocol (ICMP) or User Datagram Protocol (UDP) packets. These packets, which either consume the bandwidth of the target network or overload the computational resources of the target system, cause loss of network connectivity and services. Ping of death, smurf, bonk, and fraggle are examples of DoS attacks.

Which statement correctly defines the Capability Maturity Model in the context of software development? It is a formal model based on the capacity of an organization to cater to projects. It is a model based on conducting reviews and documenting the reviews in each phase of the software development cycle. It is a model that describes the principles, procedures, and practices that should be followed in the software development cycle. It is a model based on analyzing the risk and building prototypes and simulations during the various phases of the software development cycle.

Answer: It is a model that describes the principles, procedures, and practices that should be followed in the software development cycle. Explanation: The Capability Maturity Model (CMM) describes the principles, procedures, and practices that should be followed by an organization in a software development life cycle. The capability maturity model defines guidelines and best practices to implement a standardized approach for developing applications and software programs. It is based on the premise that the quality of a software product is a direct function of the quality of its associated software development and maintenance processes. This model allows a software development team to follow standard and controlled procedures, ensuring better quality and reducing the effort and expense of a software development life cycle. The CMM builds a framework for the analysis of gaps and enables a software development organization to constantly improve their processes. A software process is a set of activities, methods, and practices that are used to develop and maintain software and associated products. Software process capability is a means of predicting the outcome of the next software project conducted by an organization. Based on the level of formalization of the life cycle process, the five maturity levels defined by the CMM are as follows: - Initial: The development procedures are not organized, and the quality of the product is not assured at this level. - Repeatable: The development process involves formal management control, proper change control, and quality assurance implemented while developing applications. - Defined: Formal procedures for software development are defined and implemented at this level. This category also provides the ability to improve the process. - Managed: This procedure involves gathering data and performing an analysis. 
Formal procedures are established, and a qualitative analysis is conducted to analyze gaps by using the metrics at this level. - Optimized: The organization implements process improvement plans and lays out procedures and budgets. Other software development models are as follows: - The Cleanroom model follows well-defined formal procedures for development and testing of software. The Cleanroom model calls for strict testing procedures and is often used for critical applications that should be certified. - The Waterfall model is based on proper reviews and the documenting of reviews at each phase of the software development cycle. This model divides the software development cycle into phases. Proper review and documentation must be completed before moving on to the next phase. - The Spiral model is based on analyzing the risk, building prototypes, and simulating the application tasks during the various phases of development cycle. The Spiral model is typically a metamodel that incorporates a number of software development models. For example, the basic concept of the Spiral model is based on the Waterfall model. The Spiral model depicts a spiral that incorporates various phases of software development. In the Spiral model, the radial dimension represents cumulative cost.

Which statement is true of a check digit? It is a binary digit. It consists of multiple decimal digits. It has a fixed size for all data packets. It is a point of verification in a computerized application.

Answer: It is a point of verification in a computerized application. Explanation: A check digit is a single point of verification in a computerized application. A check digit is a decimal value, not a binary digit. A check digit consists of a single decimal digit calculated from the other digits of the data packet and is used for error detection. A check digit calculation is based on the content of the data packet; therefore, its value varies from packet to packet rather than being fixed.
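One widely used check-digit calculation is the Luhn algorithm (used on payment card numbers); the sketch below shows how a single decimal digit is derived from the other digits so that a mistyped digit becomes detectable:

```python
def luhn_check_digit(digits: str) -> int:
    """Compute the Luhn check digit for a string of decimal digits."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 0:        # double every second digit, rightmost first
            d *= 2
            if d > 9:         # casting out: 16 -> 7, 18 -> 9, etc.
                d -= 9
        total += d
    return (10 - total % 10) % 10

print(luhn_check_digit("7992739871"))  # 3, giving full number 79927398713
```

Verification then re-runs the same calculation over the full number and checks that the sum is a multiple of 10.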

Which interface language is an application programming interface (API) that can be configured to allow any application to query databases? JDBC XML OLE DB ODBC

Answer: ODBC Explanation: Open Database Connectivity (ODBC) is an application programming interface (API) that can be configured to allow any application to query databases. The application communicates with ODBC, which translates the application's request into database commands and retrieves the appropriate database driver. Java Database Connectivity (JDBC) is an API that allows a Java application to communicate with a database. Extensible Markup Language (XML) is a standard for arranging data so that it can be shared by Web technologies. Object Linking and Embedding Database (OLE DB) is a method of linking data from different databases together.

Which database interface language is a replacement for Open Database Connectivity (ODBC) and can only be used by Microsoft Windows clients? OLE DB ADO JDBC XML

Answer: OLE DB Explanation: Object Linking and Embedding Database (OLE DB) is the database interface language that is a replacement for ODBC and can only be used by Microsoft Windows clients. OLE DB is built on the Component Object Model (COM), which supports the exchange of objects among programs. COM allows two software components to communicate with each other independent of their operating systems and languages of implementation. ActiveX Data Objects (ADO) is a set of interfaces that allow client applications to access back-end database systems through OLE DB. A developer will use ADO to access OLE DB servers. ADO can be used by many different types of clients. Java Database Connectivity (JDBC) allows a Java application to communicate with a database through ODBC or directly through a native database driver. Extensible Markup Language (XML) structures data so that it can be shared easily over the Internet. Web browsers are designed to interpret XML tags.

Your organization has a fault-tolerant, clustered database that maintains sales records. Which transactional technique is used in this environment? OLTP OLE DB ODBC data warehousing

Answer: OLTP Explanation: Online transaction processing (OLTP) is used in this environment. OLTP is a transactional technique used when a fault-tolerant, clustered database exists. OLTP balances transactional requests and distributes them among the different servers based on transaction load. OLTP uses a two-phase commit to ensure that all the databases in the cluster contain the same data. Object Linking and Embedding Database (OLE DB) is a method of linking data from different databases together. Open Database Connectivity (ODBC) is an application programming interface (API) that can be configured to allow any application to query databases. Data warehousing is a technique whereby data from several databases is combined into a large database for retrieval and analysis.

Your company has an online transaction processing (OLTP) environment for customers. Management is concerned with the atomicity of the OLTP environment in a 24/7 environment. Which statement correctly defines management's concern? Transactions occur in isolation and do not interact with other transactions until a transaction is over. Only complete transactions take place. If any part of a transaction fails, the changes made to a database are rolled back. The changes are committed only if the transaction is verified on all the systems, and the database cannot be rolled back after committing the changes. Transactions are consistent throughout the different databases.

Answer: Only complete transactions take place. If any part of a transaction fails, the changes made to a database are rolled back. Explanation: With respect to online transaction processing (OLTP), atomicity ensures that only complete transactions take place. If any part of a transaction fails, the changes made to a database are rolled back. Management is not concerned that transactions take place in isolation. Isolation ensures that transactions do not interact with other transactions until a transaction is over. Management is not concerned that changes are committed only if the transaction is verified on all the systems. Durability ensures that the changes are committed only if the transaction is verified on all the systems. The database cannot be rolled back after the changes are committed. Management is not concerned that transactions are consistent throughout the different databases. Consistency ensures the accuracy and synchronization of transactions in databases throughout the organization's network. Consistency is especially important for financial data because the information regarding a bank account should be consistent across databases. OLTP is typically used when multiple database systems are clustered. OLTP transactions are recorded and committed in real time. The primary purpose of OLTP is to provide resiliency and a high level of performance. An OLTP environment monitors for and resolves transaction-related problems by applying the ACID test. The characteristics of ACID are atomicity, consistency, isolation, and durability. In a database applying the ACID test, all modifications must be completed for a single transaction to be recorded. If a subtask fails, the entire transaction fails, the database is rolled back, and the transaction is written to a log report and reviewed for analysis.
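Atomicity can be demonstrated with a small sketch using Python's sqlite3 module; the accounts table, balances, and simulated failure are illustrative assumptions. The debit and credit of a transfer either both commit or both roll back:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("checking", 100.0), ("savings", 0.0)])
conn.commit()

try:
    with conn:  # opens a transaction; rolls back if the block raises
        conn.execute("UPDATE accounts SET balance = balance - 100 "
                     "WHERE name = 'checking'")
        # simulated failure before the matching credit is applied
        raise RuntimeError("crash mid-transfer")
except RuntimeError:
    pass

# The partial debit was rolled back, so the balances are unchanged.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'checking': 100.0, 'savings': 0.0}
```

This is exactly the behavior management wants from the OLTP environment: a half-finished transfer never becomes visible in the database.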

Which statements are true regarding software process assessments? (Choose all that apply.) They determine the state of an organization's current software process and are used to gain support from within the organization for a software process improvement program. They identify contractors who are qualified to develop software or to monitor the state of the software process in a current software project. They develop an action plan for continuous process improvement. They develop a risk profile for source selection.

Answer: They determine the state of an organization's current software process and are used to gain support from within the organization for a software process improvement program. They develop an action plan for continuous process improvement. Explanation: Software process assessments determine the state of an organization's current software process and are used to gain support from within the organization for a software process improvement program. In addition, they develop an action plan for continuous process improvement. Software capability evaluations identify contractors who are qualified to develop software or to monitor the state of the software process in a current software project. In addition, they develop a risk profile for source selection.

Which statement is true of a software development life cycle? Unit testing should be performed by the developer and the quality assurance team. A software programmer should be the only person to develop the software, test it, and submit it to production. Parallel testing verifies whether more than one system is available for redundancy. Workload testing should be performed while designing the functional requirements.

Answer: Unit testing should be performed by the developer and the quality assurance team. Explanation: Unit testing should be performed by the developer and by the quality assurance team. Unit testing refers to the debugging performed by the programmer while coding instructions. Unit testing should check the validity of the data format, length, and values. After writing the instructions, the developer might run tools to detect errors. A software programmer should not be the only person to develop the software, test it, and submit it to production. Separation of duties ensures independent checks by using formal procedures adopted by the quality assurance team. After the software program is submitted, it is again verified by the quality assurance team by using formal procedures and practices before it is sent to the program library. Parallel testing is the process of feeding test data into two systems, the altered system and an alternative system, and comparing the results. The original system can serve as the alternative system. It is important to perform testing by using live workloads to observe the performance and the bottlenecks present in the actual production environment. Parallel testing ensures that the system fulfills the defined business requirements. It does not involve testing for redundancy. Designing the functional requirements is a part of the system design specifications stage and does not involve workload testing.

You are deploying anti-virus software on your organization's network. All of the following are guidelines regarding anti-virus software, EXCEPT: Install anti-virus software on all server computers, client computers, network entry points, and mobile devices. Configure anti-virus scans to occur automatically on a defined schedule. Configure the anti-virus software to automatically scan external disks. Update anti-virus signatures via a local server.

Answer: Update anti-virus signatures via a local server. Explanation: You should not update anti-virus signatures via a local server. Anti-virus signatures can be updated either via the software vendor's server or a local server. You would need to determine which configuration will be best for your organization. It is important to ensure that updates take place automatically. Some guidelines regarding anti-virus software include the following: - Install anti-virus software on all server computers, client computers, network entry points, and mobile devices. - Configure anti-virus scans to occur automatically on a defined schedule. - Configure the anti-virus software to automatically scan external disks. - Configure anti-virus updates to occur automatically. - Develop a virus eradication process in case of infection. - Scan all backup files for viruses. - Develop and periodically update an anti-virus policy.

You need to format data from your database so that it can be easily displayed using Web technologies. Which interface language should you use? JDBC OLE DB ADO XML

Answer: XML Explanation: You should use extensible markup language (XML). XML is an interface language used to arrange data so that it can be shared by Web technologies. This flexible language can be used to arrange the data into a variety of formats using tags. Java Database Connectivity (JDBC) is an application programming interface (API) that allows a Java application to communicate with a database. Object Linking and Embedding Database (OLE DB) is a method of linking data from different databases. ActiveX Data Objects (ADO) is an API that allows ActiveX programs to query databases.
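A short Python sketch of arranging database rows as XML with the standard library; the table and column names here are invented for illustration:

```python
# Arrange rows pulled from a database as XML tags for a Web front end.
import xml.etree.ElementTree as ET

rows = [{"id": "1", "name": "Alice"}, {"id": "2", "name": "Bob"}]

root = ET.Element("employees")
for row in rows:
    emp = ET.SubElement(root, "employee", id=row["id"])
    ET.SubElement(emp, "name").text = row["name"]

xml_text = ET.tostring(root, encoding="unicode")
# xml_text now holds markup such as <employee id="1"><name>Alice</name>...
```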

Which type of malicious code is hidden inside an otherwise benign program when the program is written? a Trojan horse a virus a worm a logic bomb

Answer: a Trojan horse Explanation: A Trojan horse is a type of malicious code that is embedded in an otherwise benign program when the program is written. A Trojan horse is typically designed to do something destructive when the infected program is started. Trojan horses, viruses, worms, and logic bombs are all examples of digital pests. Software development companies should consider reviewing code to ensure that malicious code is not included in their products. A virus is added to a program file after a program is written. A virus is often associated with malicious programs that are distributed in e-mail messages. A worm creates copies of itself on other computers through network connections. A logic bomb is designed to initiate destructive behavior in response to a particular event. For example, a logic bomb might be programmed to erase a hard disk after 12 days.

Which type of malicious attack uses Visual Basic scripting? a social engineering attack a dumpster diving attack a Trojan horse attack a denial of service attack

Answer: a Trojan horse attack Explanation: Visual Basic scripting (VBS) is typically used to code Trojan horses. Trojan horses are digital pests hidden in seemingly benign programs. Trojan horses are designed to perform malicious actions, such as retrieve passwords or erase files. VBS programs typically have .vbs file name extensions, and are often transmitted through e-mail messages. An administrator can protect a network from VBS scripts by preventing e-mail clients from either downloading or running attachments with .vbs file name extensions. A social engineering attack occurs when a hacker poses as a company employee or contractor to gain information about a network from legitimate company employees. A hacker typically uses social engineering to gain user names and passwords or sensitive documents by non-technical means, such as posing as an employee or dumpster diving. A denial of service (DoS) attack occurs when a hacker floods a network with requests so that legitimate users cannot gain access to resources on a computer or a network.

Your organization includes an Active Directory domain with three domain controllers. Users are members of organizational units (OUs) that are based on departmental membership. Which type of database model is used in the domain? a relational database model a hierarchical database model an object-oriented database model an object-relational database model

Answer: a hierarchical database model Explanation: An Active Directory domain, which uses the Lightweight Directory Access Protocol (LDAP), is a hierarchical database model. A hierarchical database model uses a logical tree structure. LDAP is the most common implementation of a hierarchical database model. A relational database model is not used in the scenario. A relational database model uses rows and columns to arrange data and presents data in tables. The fundamental entity in a relational database is the relation. Relational databases are the most popular. Microsoft's SQL Server is a relational database. An object-oriented database model is not used in this scenario. An object-oriented database (OODB) model can store graphical, audio, and video data. A popular object-oriented database is db4objects from Versant Corporation. An object-relational database model is not used in this scenario. An object-relational database is a relational database with a software front end written in an object-oriented programming language. Oracle 11g is an object-relational database. Another type of database model is the network database model. This database model expands the hierarchical database model. A network database model allows a child record to have more than one parent, while a hierarchical database model allows each child to have only one parent.
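A toy Python sketch contrasting the two models (the domain, OU, and user names are invented): a hierarchical store is a tree in which each child has exactly one parent, while a relational store keeps the same facts as rows in a table:

```python
# Hierarchical (LDAP-like) model: a tree, navigated from root to leaf.
directory = {
    "example.com": {
        "Sales": {"users": ["alice"]},
        "Engineering": {"users": ["bob", "carol"]},
    }
}

def users_in_ou(tree, domain, ou):
    """Follow the tree path domain -> OU -> users."""
    return tree[domain][ou]["users"]

# Relational model: the same facts as (user, ou) rows in a table.
user_table = [
    ("alice", "Sales"), ("bob", "Engineering"), ("carol", "Engineering"),
]

def users_in_ou_relational(rows, ou):
    """Select rows whose ou column matches."""
    return [u for (u, o) in rows if o == ou]
```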

What is an agent in a distributed computing environment? a program that performs services in one environment on behalf of a principal in another environment an identifier used to uniquely identify users, resources, and components within an environment a protocol that encodes messages in a Web service setup the middleware that establishes the relationship between objects in a client/server environment

Answer: a program that performs services in one environment on behalf of a principal in another environment Explanation: In a distributed computing environment, an agent is a program that performs services in one environment on behalf of a principal in another environment. A globally unique identifier (GUID) and a universal unique identifier (UUID) uniquely identify users, resources, and components within a Distributed Component Object Model (DCOM) or Distributed Computing Environment (DCE) environment, respectively. Simple Object Access Protocol (SOAP) is an XML-based protocol that encodes messages in a Web service setup. Object request brokers (ORBs) are the middleware that establishes the relationship between objects in a client/server environment. A standard that uses ORBs to implement exchanges among objects in a heterogeneous, distributed environment is the Common Object Request Broker Architecture (CORBA). A distributed object model that has similarities to CORBA is DCOM. The Object Request Architecture (ORA) is a high-level framework for a distributed environment. It consists of ORBs, object services, application objects, and common facilities. The following are characteristics of a distributed data processing (DDP) approach: - It consists of multiple processing locations that can provide alternatives for computing in the event that a site becomes inoperative. - Distances from a user to a processing resource are transparent to the user. - Data stored at multiple, geographically separate locations is easily available to the user.

Which statement most correctly defines a database management system (DBMS)? a software program that enables database design and implementation a central repository of data elements and their respective relationships a suite of software programs providing access to data and implementing permissions on data components an application programming interface used to provide connectivity between database and applications

Answer: a suite of software programs providing access to data and implementing permissions on data components Explanation: A database management system (DBMS) is a suite of software programs that maintains and provides controlled access to data components stored in rows and columns of a table. The collection of interrelated tables is known as a database. A database administrator manages a database and monitors the limitations applied to the data components by using database security mechanisms, such as database views and access control lists. Users access the database through client software by using a structured query language to obtain the data output. A database can use data definition language (DDL), data manipulation language (DML), and query language (QL) to extract data according to user instructions. A DBMS is not itself a database design and implementation tool; design and implementation take place during the database development life cycle. A DBMS enables a database administrator to effectively manage information in a database. A database administrator can grant granular permissions to different users, of whom some may be able to read the data components while others have the privileges to modify the data and implement access control. A central repository of the data components and their interrelationships is referred to as a data dictionary, not a DBMS. A data dictionary is a database for system developers. An application programming interface (API) provides remote or local connectivity between database systems and Web applications. An API, such as Open Database Connectivity (ODBC), can act as an interface between the database and the data components stored in other applications.
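The DDL/DML/QL distinction and the view mechanism can be sketched with Python's built-in sqlite3 module. SQLite has no GRANT/REVOKE, so only the column-limiting effect of a view is shown; the table and data are invented:

```python
# A view exposes only chosen columns of a table to querying users.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staff (name TEXT, dept TEXT, salary REAL)")  # DDL
con.execute("INSERT INTO staff VALUES ('Alice', 'HR', 70000)")         # DML
con.execute("CREATE VIEW staff_public AS SELECT name, dept FROM staff")

rows = con.execute("SELECT * FROM staff_public").fetchall()  # query language
# rows -> the salary column is not exposed through the view
```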

Which malicious software relies upon other applications to execute and infect the system? a virus a worm a logic bomb a Trojan horse

Answer: a virus Explanation: A virus is malicious software (malware) that relies upon other application programs to execute and infect a system. The main criterion for classifying a piece of executable code as a virus is that it spreads itself by means of hosts. The hosts could be any application on the system. A virus infects a system by replicating itself through application hosts. The different types of viruses are as follows: - Stealth virus: It hides the changes it makes as it replicates. - Self-garbling virus: It formats its own code to prevent antivirus software from detecting it. - Polymorphic virus: It modifies its own code to create many variants and deceive antivirus scanners. - Multipart virus: It can infect system files and boot sectors of a computer system. - Macro virus: It generally infects the system by attaching itself to MS-Office applications. - Boot sector virus: It infects the master boot record of the system and is spread via infected floppy disks. - Compression virus: It decompresses itself upon execution but otherwise resides normally in a system. The standard security best practices for mitigating risks from malicious programs, such as viruses, worms, and Trojans, include implementation of antivirus software, use of host-based intrusion detection systems, and imposition of limits on the sharing and execution of programs. A worm does not require the support of application programs to be executed; it is a self-contained program capable of executing and replicating on its own without the help of system applications or resources. Typically, a worm is spread by e-mails, Transmission Control Protocol (TCP) connections, and disk drives. A logic bomb is malware similar to a time bomb that is executed at a specific time on a specific date. A logic bomb implies a dormant program that is triggered following a specific action by the user or after a certain interval of time.
The primary difference between logic bombs, viruses, and worms is that a logic bomb is triggered when specific conditions are met. A Trojan horse is malware that is disguised as a useful utility but embeds malicious code within itself. When the disguised utility is run, the Trojan horse performs malicious activities in the background, such as deleting system files or planting a backdoor into a system, while still presenting the useful utility at the front end. Trojan horses use covert channels to perform malicious activities.

You have just discovered that an application that your company purchased is intentionally embedded with software code that allows a developer to bypass the regular access and authentication mechanisms. Which software code is being described? logic bomb pseudo-flaw multipart virus debugging hooks

Answer: debugging hooks Explanation: A debugging or maintenance hook is software code that is intentionally embedded in the software during its development process to allow the developer to bypass the regular access and authentication mechanisms. These hooks can pose a threat to the security of the software and can be exploited if any maintenance hook is not removed before the software goes into production and an intruder is able to find the maintenance hook. A logic bomb implies a malicious program that remains dormant and is triggered following a specific action by the user or after a certain time interval. The primary difference between logic bombs, viruses, and worms is that a logic bomb is triggered when specific conditions are met. A pseudo-flaw refers to vulnerability code embedded intentionally in the software to trap intruders. A multipart virus can infect both executable files and boot sectors of hard disk drives. The virus first resides in the memory and then infects the boot sector and the executable files of the computer.
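A hypothetical Python sketch of why maintenance hooks are dangerous; the hard-coded bypass string is invented for illustration and is not taken from any real product:

```python
# Hypothetical maintenance hook: a hard-coded bypass left in by a developer.
# Such hooks must be removed before the software goes into production.
def authenticate(user, password, credentials):
    if password == "letmein-debug":   # the hook: skips all normal checks
        return True
    return credentials.get(user) == password

creds = {"alice": "S3cret!"}
ok = authenticate("alice", "S3cret!", creds)          # normal path works
bypass = authenticate("mallory", "letmein-debug", creds)
# bypass is True: anyone who discovers the hook gets in without credentials
```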

Your organization has recently implemented an artificial neural network (ANN). The ANN enabled the network to make decisions based on the experience provided to them. Which characteristic of the ANN is described? adaptability fault tolerance neural integrity retention capability

Answer: adaptability Explanation: Adaptability is the artificial neural network (ANN) characteristic that is described. Adaptability refers to the ability of an ANN to arrive at decisions based on the learning process that uses the inputs provided. It is important to note that the ability of an ANN to learn is limited to the experience provided to it. An ANN is an adaptive system that changes its structure based on either external or internal information that flows through the network by applying if-then-else rules. ANNs are computer systems that simulate the working of a human brain. A human brain can contain billions of neurons performing complex operations. An ANN can also contain a large number of small computational units that are called upon to perform a required task. A neural network learns by using various algorithms to adjust the weights applied to the data. A neural network is described mathematically by the equation Z = f(Σ wn·in), where Z is the output, wn are the weighting functions, and in is the set of inputs. Fault tolerance refers to the ability to keep operating despite failures, addressing design reliability and continuous availability. ANNs do not provide fault tolerance. Retention capability and neural integrity are generic terms and are invalid options.
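The neuron equation Z = f(Σ wn·in) can be sketched in Python, here with a sigmoid chosen as the activation function f:

```python
# One artificial neuron: weighted sum of inputs passed through activation f.
import math

def neuron(inputs, weights, f=lambda s: 1 / (1 + math.exp(-s))):
    """Z = f(sum of wn * in)."""
    s = sum(w * i for w, i in zip(weights, inputs))
    return f(s)

z = neuron([1.0, 0.5], [0.4, -0.2])   # sigmoid of the weighted sum 0.3
```

Learning then consists of adjusting the weights until the outputs match the training experience.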

A user in your organization has been disseminating payroll information on several coworkers. Although she has not been given direct access to this data, she was able to determine this information based on some database views to which she has access. Which term is used for the condition that has occurred? save point aggregation polyinstantiation data scavenging

Answer: aggregation Explanation: The condition that has occurred is aggregation. Aggregation is a process in which a user collects and combines information from various sources to obtain complete information. The individual parts of information are at the correct sensitivity, but the combined information is not. A user can combine information available at a lower privilege, thereby deducing the information at a higher privilege level. A similar threat arises in inference attacks, where the subject deduces the complete information about an object from the bits of information collected through aggregation. Therefore, inference is the ability of a subject to derive implicit information. A protection mechanism to limit inferencing of information in statistical database queries is specifying a minimum query set size, but prohibiting the querying of all but one of the records in the database. The condition that has occurred is not a save point. A save point is not a database security feature but a data integrity and availability feature. Save points are used to ensure that a database can return to a point before the system crashed and make available the data prior to the database failure. Save points can be initiated either by a scheduled time interval or on the activity performed by a user while processing data. The condition that has occurred is not polyinstantiation. Polyinstantiation, also known as data contamination, is used to conceal classified information that exists in a database and to fool intruders. Polyinstantiation ensures that users with lower access level are not able to access and modify data categorized for a higher level of access in a multi-level database. Polyinstantiation can be used to reduce data inference violations. When polyinstantiation is implemented, two objects are created by using the same primary keys. 
One object is filled with incorrect information and is deemed unclassified, and the other object contains the original classified information. When a user with lower level privileges attempts to access the object, the user is directed to the object containing incorrect information. Polyinstantiation is concerned with the same primary key existing at different classification levels in the same database. The condition that has occurred is not scavenging. Scavenging, also referred to as browsing, involves looking for information without knowing its format. Scavenging is searching the data residue in a system to gain unauthorized knowledge of sensitive data.
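The minimum-query-set-size control mentioned above can be sketched in Python. The names and salaries are invented; the classic rule allows a statistical query only when k <= |query set| <= N - k, which also prohibits querying all but a few records:

```python
# Inference/aggregation control: refuse statistical queries whose result
# set is too small (or too close to the whole database).
MIN_QUERY_SET = 2

salaries = {"alice": 70000, "bob": 65000, "carol": 80000,
            "dave": 72000, "erin": 68000, "frank": 75000}

def average_salary(names):
    matched = [salaries[n] for n in names if n in salaries]
    n, k = len(salaries), MIN_QUERY_SET
    if not (k <= len(matched) <= n - k):   # classic k <= |C| <= N - k rule
        raise PermissionError("query refused to limit inference")
    return sum(matched) / len(matched)

avg = average_salary(["alice", "bob", "carol"])   # allowed: 3 records
```

A query for a single person's "average" would be refused, since it would reveal an individual salary directly.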

What is the best description of CAPI? an application programming interface that uses Kerberos an application programming interface that provides accountability an application programming interface that uses two-factor authentication an application programming interface that provides encryption

Answer: an application programming interface that provides encryption Explanation: Cryptographic application programming interface (CAPI) is an application programming interface that provides encryption. None of the other options is a description of CAPI.

Which program translates one line of a code at a time instead of an entire section of a code? a compiler an interpreter an assembler an abstractor

Answer: an interpreter Explanation: Interpreters translate one line of code at a time, whereas compilers translate an entire section of code at a time. Compilers and interpreters are programs that translate programming language into instructions that can be executed by computers. Compilers and interpreters are specific to the processor platform used by the computer. Therefore, code compiled for one vendor's processor platform may not run on another vendor's processor platform. Compared to interpreted code, compiled code is more prone to harboring viruses and malicious code: malicious instructions can be inserted within the compiled code, where they are difficult to detect by inspection. An assembler is a program that takes a program written in assembly language as input and translates it into machine code. Abstractor is a generic term and is an invalid option.
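Python's own machinery can illustrate the distinction as a sketch: translating a whole section of source at once (compiler-like) versus handling it one line at a time (interpreter-like). Both reach the same result here:

```python
# Compiler-like: translate the entire program, then run it.
source = "x = 2\ny = x * 3"

whole = {}
exec(compile(source, "<whole>", "exec"), whole)

# Interpreter-like: translate and execute one line at a time.
line_by_line = {}
for line in source.splitlines():
    exec(compile(line, "<line>", "exec"), line_by_line)
```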

What is the process of ensuring the corporate security policies are carried out consistently? social engineering auditing footprinting scanning

Answer: auditing Explanation: Auditing is the process of ensuring the corporate security policies are carried out consistently. Social engineering is an attack that deceives others to obtain legitimate information about networks and computer systems. Footprinting is the process of identifying the network and its security configuration. Scanning is the process that hackers use to identify how a network is configured.

You have discovered that 25% of your organization's computers have been attacked. As a result, these computers were used as part of a distributed denial of service (DDoS) attack. To what classification or area do the compromised computers belong? DMZ VPN botnet honeypot

Answer: botnet Explanation: The compromised computers are members of a botnet. A botnet is created by a hacker when malware is copied to a computer in your network that allows the hacker to take over the computer. Botnets are often used to carry out distributed denial of service (DDoS) attacks. A demilitarized zone (DMZ) is a protected area of a local network that contains publicly accessible computers. A virtual private network (VPN) is a secure, private connection through a public network or the Internet. Botnets can be located anywhere on your network, not just in a DMZ or VPN. A honeypot is a computer that is set up on an organization's network to act as a diversion for attackers. Often, honeypots are left open in such a way to ensure that they are attacked instead of the more important systems.

Recently, your company's file server was the victim of a hacker attack. After researching the attack, you discover that multiple computers were used to implement the attack, which eventually caused the file server to overload. Which attack occurred? land attack ping of death attack denial-of-service (DoS) attack distributed denial-of-service (DDoS) attack

Answer: distributed denial-of-service (DDoS) attack Explanation: A distributed denial-of-service (DDoS) attack occurred. A DDoS attack is an extension of the denial-of-service (DoS) attack. In DDoS, the attacker uses multiple computers to target a critical server and deny access to legitimate users. The primary components of a DDoS attack are the client, the masters or handlers, the slaves, and the target system. The initial phase of the DDoS attack involves compromising numerous computers and planting backdoors, turning those computers into slaves controlled by master controllers. Handlers are the systems that instruct the slaves to launch an attack against a target host. Slaves are typically systems that have been compromised through backdoors, such as Trojans, and are not aware of their participation in the attack. Masters or handlers are systems on which the attacker has been able to gain administrative access. The primary problem with DDoS is that it attacks the availability of critical resources rather than their confidentiality or integrity. Therefore, it is difficult to address with security technologies such as SSL and PKI. Launching a traditional DoS attack might not disrupt a critical server operation. Launching a DDoS attack can bring down the critical server because the server is overwhelmed with the processing of multiple requests until it ceases to be functional. Stacheldraht, trinoo, and Tribe Flood Network (TFN) are examples of DDoS tools. A land attack involves sending a spoofed TCP SYN packet with the target host's IP address and an open port as both the source and the destination to the target host on an open port. The land attack causes the system to either freeze or crash because the computer continuously replies to itself.
A ping of death is another type of DoS attack that involves flooding target computers with oversized packets that exceed the acceptable size during reassembly, causing the target computer to either freeze or crash. Other denial-of-service attacks, such as smurf and fraggle, deny access to legitimate users by causing a system to either freeze or crash. A DoS attack is an attack on a computer system or network that causes loss of service to users. The DoS attack floods the target system with unwanted requests. It causes the loss of network connectivity and services by consuming the bandwidth of the target network or overloading the computational resources of the target system. The primary difference between DoS and DDoS is that in DoS, a particular port or service is targeted by a single system, and in DDoS, the same process is accomplished by multiple computers. There are other types of denial-of-service attacks, such as buffer overflows, where a process attempts to store more data in a buffer than the amount of memory allocated for it, causing the system to freeze or crash.

How does an ActiveX component enforce security? by using sandboxes by using object codes by using macro languages by using Authenticode

Answer: by using Authenticode Explanation: Authenticode is used by the ActiveX technology of Microsoft to enforce security. ActiveX refers to a set of controls that users can download in the form of a plug-in to enhance a feature of an application. The primary difference between Java applets and ActiveX controls is that the ActiveX controls are downloaded subject to acceptance by a user. The ActiveX trust certificate also states the source of the plug-in signatures of the ActiveX modules. Java applets use sandboxes to enforce security. A sandbox is a security scheme that prevents Java applets from accessing unauthorized areas on a user's computer. This mechanism protects the system from malicious software, such as hostile applets, by enforcing the execution of the application within the sandbox and preventing access to the system resources outside the sandbox. The Java Security Model (JSM) protects users from hostile mobile code. When a user accesses a Web page through a browser, class files for an applet are downloaded automatically, even from untrusted sources. To counter this possible threat, Java provides a customizable sandbox to which the applet's execution is confined. This sandbox provides such protections as preventing reading and writing to a local disk, prohibiting the creation of a new process, preventing the establishment of a network connection to a new host, and preventing the loading of a new dynamic library and directly calling a native method. The sandbox security features are designed into the Java Virtual Machine (JVM). These features are implemented through array bounds checking, structured memory access, type-safe reference cast checking, checking for null references, and automatic garbage collection. These checks are designed to limit memory accesses to safe, structured operations. A hostile applet is an active content module used to exploit system resources.
Hostile applets coded in Java can pose a security threat to computer systems if the executables are downloaded from unauthorized sources. Hostile applets may disrupt the computer system operation either through resource consumption or through the use of covert channels. Object code refers to a version of a computer program that is compiled before it is ready to run in a computer. The application software on a system is typically in the form of compiled object codes and does not include the source code. Object codes are not related to the security aspects of Java. They represent an application program after the compilation process. Macro programs use macro language for the automation of common user tasks. Macro languages, such as Visual Basic, are typically used to automate the tasks and activities of users. Macro programs have their own set of security vulnerabilities, such as macro viruses, but are not related to Java security. Java applets are short programs that use the technique of a sandbox to limit the applet's access to specific resources stored in the system.

An organization's Web site includes several Java applets. The Java applets include a security feature that limits the applet's access to certain areas of the Web user's system. How does it do this? by using sandboxes by using object codes by using macro languages by using digital and trusted certificates

Answer: by using sandboxes Explanation: Java applets use sandboxes to enforce security. A sandbox is a security scheme that prevents Java applets from accessing unauthorized areas on a user's computer. This mechanism protects the system from malicious software, such as hostile applets, by enforcing the execution of the application within the sandbox and preventing access to the system resources outside the sandbox. A hostile applet is an active content module used to exploit system resources. Hostile applets coded in Java can pose a security threat to computer systems if the executables are downloaded from unauthorized sources. Hostile applets may disrupt the computer system operation either through resource consumption or through the use of covert channels. Object code refers to a version of a computer program that is compiled before it is ready to run in a computer. The application software on a system is typically in the form of compiled object codes and does not include the source code. Object codes are not related to the security aspects of Java. They represent an application program after the compilation process. Macro programs use macro language for the automation of common user tasks. Macro languages, such as Visual Basic, are typically used to automate the tasks and activities of users. Macro programs have their own set of security vulnerabilities, such as macro viruses, but are not related to Java security. Digital and trust certificates are used by the ActiveX technology of Microsoft to enforce security. ActiveX refers to a set of controls that users can download in the form of a plug-in to enhance a feature of an application. The primary difference between Java applets and ActiveX controls is that the ActiveX controls are downloaded subject to acceptance by a user. The ActiveX trust certificate also states the source of the plug-in signatures of the ActiveX modules.
Java applets are short programs that use the technique of a sandbox to limit the applet's access to specific resources stored in the system.

Which type of virus is specifically designed to take advantage of the extension search order of an operating system? boot sector replication companion nonresident resident

Answer: companion Explanation: A companion virus is specifically designed to take advantage of the extension search order of an operating system. In Microsoft Windows, the extension search order is .com, .exe, then .bat. For example, when a user starts a program named calc on a Windows operating system, Windows first looks for a program named calc.com in the current folder. If a virus is named calc.com, and the actual program file is named calc.exe, then the virus will be started instead of the calc.exe program because Windows will stop searching after it finds calc.com. A resident virus is loaded into memory and infects other programs as they in turn are loaded into memory. A nonresident virus is part of an executable program file on a disk and infects other programs when the infected program file is started. A boot sector replicating virus is written to the boot sector of a hard disk on a computer and is loaded into memory each time a computer is started.
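The extension search order can be sketched in Python; this is a simulation of the lookup logic described above, not real Windows behavior:

```python
# Simulated Windows extension search order abused by a companion virus.
SEARCH_ORDER = [".com", ".exe", ".bat"]

def resolve(command, files_in_folder):
    """Return the first matching file name, following the search order."""
    for ext in SEARCH_ORDER:
        candidate = command + ext
        if candidate in files_in_folder:
            return candidate
    return None

# With only the real program present, calc.exe is what runs:
clean = resolve("calc", {"calc.exe"})
# If a companion virus drops calc.com alongside it, the virus runs instead:
infected = resolve("calc", {"calc.exe", "calc.com"})
```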

What is the primary function of COCOMO? time estimation risk estimation cost estimation threat analysis

Answer: cost estimation Explanation: The primary function of the Construction Cost Model (COCOMO) is cost estimation. The basic version of COCOMO estimates software development effort and cost as a function of the size of the software product in source instructions. COCOMO is all about estimating the costs associated with software development. It does not primarily provide time estimation, risk estimation, or threat analysis.
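A sketch of Basic COCOMO in Python, using the standard organic-mode coefficients (a = 2.4, b = 1.05); effort is in person-months and size is in thousands of delivered source instructions (KLOC):

```python
# Basic COCOMO: effort as a function of product size in source instructions.
def cocomo_effort(kloc, a=2.4, b=1.05):
    """Estimated effort in person-months: E = a * (KLOC ** b)."""
    return a * (kloc ** b)

effort = cocomo_effort(32)   # roughly 91 person-months for 32 KLOC
```

The intermediate and detailed COCOMO versions refine this estimate with cost drivers, but the core idea is the same size-to-cost function.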

Recently, an attacker injected malicious code into a Web application on your organization's Web site. Which type of attack did your organization experience? cross-site scripting buffer overflow SQL injection path traversal

Answer: cross-site scripting Explanation: Your organization experienced a cross-site scripting (XSS) attack. An XSS attack occurs when an attacker locates a vulnerability on a Web site that allows the attacker to inject malicious code into a Web application. A buffer overflow occurs when an invalid amount of input is written to the buffer area. A SQL injection occurs when an attacker inputs actual database commands into the database input fields instead of the valid input. Path traversal occurs when the ../ characters are entered into the URL to traverse directories that are not supposed to be available from the Web. Some possible countermeasures to input validation attacks include the following: - Filter out all known malicious requests. - Validate all information coming from the client, both at the client level and at the server level. - Implement a security policy that includes parameter checking in all Web applications.

Which statements are NOT valid regarding SQL commands? a. An ADD statement is used to add new rows to a table. b. A DELETE statement is used to delete rows from a table. c. A REPLACE statement is used to replace rows to a table. d. A SELECT statement is used to retrieve rows from a table. e. A GRANT statement is used to grant permissions to a user. option a option b option c option d option e options a and c only options b, d, and e only all of the options

Answer: options a and c only Explanation: The statements regarding an ADD statement and a REPLACE statement are NOT valid regarding SQL commands. SELECT, DELETE, and GRANT are valid SQL commands. The SELECT statement is used to retrieve rows from a table, the DELETE statement is used to delete rows from a table, and the GRANT statement is used to grant permissions to a user. The REPLACE and ADD statements are not standard SQL statements. The UPDATE statement is used to modify existing rows in a table, and the INSERT statement is used to add rows to a table.
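The valid statements can be demonstrated with the standard library's sqlite3 module: INSERT adds rows (there is no ADD), SELECT retrieves them, and DELETE removes them. The table and data are illustrative.

```python
# INSERT / SELECT / DELETE are the standard SQL row operations.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
con.execute("INSERT INTO employees VALUES (1, 'Avery')")   # not ADD
rows = con.execute("SELECT name FROM employees").fetchall()
print(rows)  # [('Avery',)]
con.execute("DELETE FROM employees WHERE id = 1")
print(con.execute("SELECT COUNT(*) FROM employees").fetchone())  # (0,)
```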

Your organization uses a relational database to store customer contact information. You need to modify the schema of the relational database. Which component identifies this information? query language (QL) data control language (DCL) data definition language (DDL) data manipulation language (DML)

Answer: data definition language (DDL) Explanation: The data definition language (DDL) identifies the schema of the database. The schema of a database defines the type of data that the database can store and manipulate. The schema is the description of a relational database. The schema also defines the properties of the type of data that a database can store as valid data objects. DDL is also used to create and delete views and relations between tables. The query language (QL) is used to generate a request for information from a user in the form of query statements to obtain relevant output. The data control language (DCL) manages access control to records in a database. DCL defines the granular user permissions to various data objects and implements database security. Examples of DCL commands are GRANT, DENY, and REVOKE. The data manipulation language (DML) refers to a suite of computer languages used by database users to retrieve, insert, delete, and update data in a database. DML provides users the ability to store, retrieve, and manipulate the data according to the relevant instructions issued.
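The language families can be separated in a short sketch: DDL statements change the schema, while DML statements manipulate the data. SQLite does not implement DCL commands such as GRANT and REVOKE, so those are omitted here; the table and values are illustrative.

```python
# DDL modifies the schema; DML modifies the rows.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, name TEXT)")       # DDL
con.execute("ALTER TABLE customers ADD COLUMN phone TEXT")          # DDL
con.execute("INSERT INTO customers VALUES (1, 'Kim', '555-0100')")  # DML
print(con.execute("SELECT phone FROM customers").fetchone())  # ('555-0100',)
```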

What is the process of combining multiple databases to form a single database? metadata data mine data warehouse knowledge base

Answer: data warehouse Explanation: A data warehouse is the result of combining multiple databases to form a single large database. A data warehouse is a subject-oriented, integrated, time-variant, non-volatile collection of data in support of management's decision-making process. Merged data can be used for information retrieval and data analysis. This mechanism allows users to query a single repository of information instead of multiple databases. Data warehousing is not limited to archiving information; it also focuses on presenting the information in a useful and understandable way to database users. This is done by merging related data components to provide a broad picture of the information. Data warehouses include mechanisms that ensure appropriate collection, management, and use of data. A data warehouse usually implements controls to prevent the metadata from being used interactively by unauthorized users. Data is reconciled as it is moved between the operational environment and the data warehouse. Any data purging operations that occur are monitored. Data storage information architecture manages the collection of data but does not manage the archiving of data. Metadata is the useful information extracted from the existing database by using data mining techniques. Metadata provides insight into data relationships and is the result of new correlations between data components, based on user instructions. Data mining techniques are used to extract new information from existing information and are useful in areas such as credit bureaus, where the credit history of an individual is reviewed prior to the approval of a loan. Note that a data warehouse combines multiple databases but does not itself support the interrelationship of data components. A knowledge base refers to a collection of facts, rules, and procedures and is not related to a data warehouse.

Your organization has several databases. Each database is used for a specific purpose within your organization. Management has decided to combine the databases into a single large database for data analysis. What is this process called? partitioning metadata data mining data warehousing

Answer: data warehousing Explanation: Data warehousing is the process of combining databases into a single large database for analysis. Partitioning is the process of dividing a database into parts to provide higher security. Metadata is the data that you obtain when analyzing a data warehouse. Basically, data goes into a data warehouse and metadata (data about the data) comes out. Data mining is the process of using tools to analyze data warehouse data to discover trends and relationships.
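The warehousing idea reduces to a toy sketch: rows from several departmental databases are merged into one repository that a single query can answer. The department names and figures are illustrative.

```python
# Combine per-department "databases" into one repository for analysis.
sales_db = [{"dept": "sales", "expense": 1200}]
hr_db = [{"dept": "hr", "expense": 800}]
it_db = [{"dept": "it", "expense": 2500}]

warehouse = sales_db + hr_db + it_db              # combine the databases
total = sum(row["expense"] for row in warehouse)  # one query, all data
print(total)  # 4500
```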

You need to ensure that a set of users can access information regarding departmental expenses. However, each user should only be able to view the expenses for the department in which they work. Senior managers should be able to view the expenses for all departments. Which database security feature provides this granular access control? save point partitioning database view noise and perturbation

Answer: database view Explanation: The database security feature that provides this granular access control is database views. Database views are used to limit user and group access to certain information based on user privileges and the need to know. Views can be used to restrict information based on group membership, user rights, and security labels. Views implement least privilege and need-to-know and provide content-dependent access restrictions. Views do not provide referential integrity, which is provided by constraints or rules. A save point does not provide granular access control. Save points ensure data integrity and availability but are not a database security feature. Save points are used to ensure that a database can return to a known point if the system crashes. This further ensures the availability of the data prior to the database failure. Save points can be initiated either at a scheduled time or by a user action during data processing. Database integrity can also be provided through the implementation of referential integrity, where all the foreign keys reference existing primary keys to identify the records in a table. Referential integrity requires that for any foreign key attribute, the referenced relation must have a tuple with the same value for its primary key. Partitioning does not provide granular access control. Partitioning is another protection technique for ensuring database security. Partitioning involves splitting the database into many parts, making it difficult for an intruder to collect and combine confidential information and deduce relevant facts. Noise and perturbation does not provide granular access control. The noise and perturbation technique involves inserting bogus data to mislead attackers and protect database confidentiality and integrity. The technique inserts randomized bogus information along with the valid records of the database.
This technique alters the data but allows users to access relevant information from the database. It also creates enough confusion to prevent an attacker from extracting confidential information. Database views are an example of content-dependent access control, in which access is based on the sensitivity of the information and the user privileges granted. This leads to higher processing overhead because the data is granularly controlled by its content and the privileges of users. Database views can limit user access to portions of data instead of to the entire database. For example, during database processing in an organization, a department manager might have access only to the data of employees belonging to that department.
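A database view restricting rows to a single department can be sketched with sqlite3: a user granted access only to the view never sees other departments' expenses. The schema and figures are illustrative.

```python
# A view as a content-dependent access control: only sales rows are visible.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE expenses (dept TEXT, amount REAL)")
con.executemany("INSERT INTO expenses VALUES (?, ?)",
                [("sales", 100.0), ("hr", 50.0), ("sales", 25.0)])
con.execute("CREATE VIEW sales_expenses AS "
            "SELECT amount FROM expenses WHERE dept = 'sales'")
print(con.execute("SELECT SUM(amount) FROM sales_expenses").fetchone())
# (125.0,) -- a sales user sees only the sales rows, not the 175.0 total
```

In a real deployment, permissions on the base table would be withheld and granted only on the view.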

A team has been hired to develop a new software package for your company. Which process ensures error-free software? debugging abstraction compilation polymorphism

Answer: debugging Explanation: Debugging ensures error-free software. Debugging is the process of removing bugs from software code. A software bug is defined as a flaw in the initial coding process. Debugging starts after coding is complete and before testing is done, and it continues throughout the testing of the software. After the software program is submitted, it is again verified by the quality assurance team using formal procedures and practices. Debugging ensures that program coding flaws are detected and corrected. Abstraction is an object-oriented programming (OOP) concept that involves concealing unnecessary information to highlight important information or properties for analysis. Abstraction highlights the conceptual aspects and properties of an application to aid understanding of the information flow, and it hides small, redundant pieces of information to provide a broader picture. Compilation is the process of converting software code written in a high-level language into machine language. Compilers, interpreters, and assemblers are software used to translate software code. Polymorphism is an object-oriented programming (OOP) concept implying that different objects can provide different output based on the same input. This is achieved through differences in the functional properties of objects, where each object performs a specific subtask in the overall process. Polymorphism denotes that objects of many different classes are related by some common superclass; thus, any object denoted by this name can respond to some common set of operations in a different way.

As a security measure, you implement time and date stamps in a payroll application program while transactions are being recorded. What is the role of this security measure? ensuring integrity ensuring signature ensuring consistency maintaining application checkpoints

Answer: ensuring integrity Explanation: The role of this security measure is to ensure integrity. A payroll application program can ensure integrity by using time and date stamps while recording transactions occurring in the appropriate accounting period. Timestamps prove useful in logging events occurring in a particular time period and enable synchronization of database transactions throughout an organization's network. The primary purpose of using time and date stamps is to ensure that transactions are recorded in real time and in the appropriate accounting period. In a distributed database environment, time and date stamps ensure that transactions are recorded in a synchronized manner and are consistent across the entire database cluster throughout the network. For example, if a user is accessing database A to check the balance of an account for which the transaction was made on database B, A should reflect the time period in which the transaction actually took place. Logging events by using time and date stamps in a database ensures data integrity for transactions that occur over a period of time by providing audit trails. Logging of timestamps supports the consistency aspect of ACID as a part of Online Transaction Processing (OLTP). All the other options are invalid for a payroll application program.
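A minimal sketch of timestamping transactions as they are recorded, so the log preserves the order and accounting period of each entry; the field names and amounts are illustrative.

```python
# Each ledger entry carries a UTC timestamp at the moment it is recorded,
# providing an audit trail of when each transaction occurred.
from datetime import datetime, timezone

ledger = []

def record(account, amount):
    ledger.append({
        "account": account,
        "amount": amount,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })

record("payroll", 2500.00)
record("payroll", -300.00)
# Timestamps make the recording order verifiable after the fact.
assert ledger[0]["recorded_at"] <= ledger[1]["recorded_at"]
```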

Which malware component ensures that the malware removes itself after the application has been executed? insertion avoidance eradication replication trigger payload

Answer: eradication Explanation: The eradication component ensures that the malware removes itself after the application has been executed. The insertion component installs the malware on the victim's system. An avoidance component enables the malware to avoid detection using different methods. A replication component enables the malware to replicate itself and spread to other victims. A trigger component waits for a particular event to execute the malware's payload. A payload component carries out the actual malware's actions.

A hacker has used a design flaw in an application to obtain unauthorized access to the application. Which type of attack has occurred? backdoor escalation of privileges buffer overflow maintenance hook

Answer: escalation of privileges Explanation: An escalation of privileges attack occurs when an attacker uses a design flaw in an application to obtain unauthorized access to the application. There are two types of privilege escalation: vertical and horizontal. With vertical privilege escalation, the attacker obtains higher privileges by performing operations that allow the attacker to run unauthorized code. With horizontal privilege escalation, the attacker obtains the same level of permissions as he already has but uses a different user account to do so. A backdoor is a term for lines of code that are inserted into an application to allow developers to enter the application and bypass the security mechanisms. Backdoors are also referred to as maintenance hooks. A buffer overflow occurs when an application erroneously allows an invalid amount of input in the buffer.

Your company has purchased an expert system that uses if-then-else reasoning to obtain more data than is currently available. Which expert system processing technique is being implemented? Spiral model Waterfall model forward-chaining technique backward-chaining technique

Answer: forward-chaining technique Explanation: The expert system processing technique being implemented is the forward-chaining technique. The forward-chaining technique is an expert system processing technique that uses if-then-else rules to obtain more data than is currently available. An expert system consists of a knowledge base and adaptive algorithms that are used to solve complex problems and to provide flexibility in decision-making approaches. An expert system exhibits reasoning similar to that of humans knowledgeable in a particular field to solve a problem in that field. The Spiral model is a software model based on analyzing risk and building prototypes and simulations during the various phases of the development cycle. The Waterfall model is a software model based on proper reviews and on documenting the reviews at each phase of the software development cycle. This model divides the software development cycle into phases; proper review and documentation must be completed before moving on to the next phase. The modified Waterfall model was reinterpreted to have phases end at project milestones. Incremental development is a refinement to the basic Waterfall model that states that software should be developed in increments of functional capability. Backward chaining works backwards by analyzing the list of identified goals and verifying the availability of data to reach a conclusion on any goal. Backward chaining starts with the goals and looks for the data that justifies the goal by applying if-then-else rules. Expert systems or decision support systems use artificial intelligence to extract new information from a set of information. An expert system operates in two modes: forward chaining and backward chaining. Backward chaining is the process of beginning with a possible solution and using the knowledge in the knowledge base to justify the solution based on the raw input data.
Forward chaining is the reasoning approach that can be used when there are a small number of solutions relative to the number of inputs. The input data is used to reason forward to prove that one of the possible solutions in a small solution set is the correct one. Knowledge-based system (KBS) or expert systems include the knowledge base, inference engine, and interface between the user and the system. A knowledge engineer and domain expert develops a KBS or expert system. Expert systems are used to automate security log review to detect intrusion. A fuzzy expert system is an expert system that uses fuzzy membership functions and rules, instead of Boolean logic, to reason about data. Thus, fuzzy variables can have an approximate range of values instead of the binary True or False used in conventional expert systems. An example of this is an expert system that has rules of the form "If w is low and x is high then y is intermediate," where w and x are input variables and y is the output variable.
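The forward-chaining mode described above can be sketched as a minimal rule engine: if-then rules fire whenever their conditions are satisfied by the known facts, deriving new facts until nothing more can be added. The facts and rules here are illustrative, loosely themed on the log-review use case mentioned above.

```python
# Minimal forward chaining: (conditions, conclusion) rules fire repeatedly
# until no rule derives a new fact.
rules = [
    ({"failed_logins_high"}, "possible_brute_force"),
    ({"possible_brute_force", "admin_account"}, "raise_alert"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # derive new data from the rule
                changed = True
    return facts

print(forward_chain({"failed_logins_high", "admin_account"}))
```

Note how the second rule can only fire after the first has derived "possible_brute_force": the input data reasons forward toward a conclusion.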

You have discovered that your organization's file server has been overwhelmed by UDP broadcast packets, resulting in a server crash. Which attack has occurred? smurf fraggle teardrop ping of death

Answer: fraggle Explanation: A fraggle attack has occurred. A fraggle attack chokes the processing resources of the victim host by flooding the network with spoofed UDP packets. Fraggle is a denial-of-service (DoS) attack that sends large amounts of spoofed broadcast UDP packets to IP broadcast addresses. This results in the target host being flooded with echo replies from the entire network, causing the system to either freeze or crash. A smurf attack is similar to a fraggle attack, but it spoofs the source IP address in ICMP ECHO broadcast packets instead of UDP packets. Smurf is a DoS attack that uses spoofed broadcast ping messages to flood a target system. In such an attack, the attacker sends a large amount of ICMP echo packets with a spoofed source IP address matching that of the target host to IP broadcast addresses. This results in the target host being flooded with echo replies from the entire network, causing the system to freeze or crash. Other examples of DoS attacks are SYN flood, Bonk, and ping of death attacks. In a teardrop attack, the attacker uses a series of IP fragmented packets, causing the system to either freeze or crash while the packets are being reassembled by the victim host. A teardrop attack is primarily based on the fragmentation implementation of IP. To reassemble the fragments into the original packet at the destination, the host checks the incoming packets to ensure that they belong to the same original packet. The packets are malformed; therefore, the process of reassembling the packets causes the system to either freeze or crash. A ping of death is another type of DoS attack that involves flooding the target computer with oversized packets, exceeding the acceptable size during the process of reassembly and causing the target computer to either freeze or crash.

What is used in evolutionary computing? characteristics of living organisms genetic algorithms knowledge from an expert mathematical or computational models

Answer: genetic algorithms Explanation: Genetic algorithms are used in evolutionary computing. Evolutionary computing is a type of artificial intelligence. Biological computing uses the characteristics of living organisms. Knowledge-based or expert systems use knowledge from an expert. Artificial neural networks (ANNs) use mathematical or computational models.
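A toy genetic algorithm makes the concept concrete: bit-string individuals evolve toward higher fitness (here, simply the count of 1 bits) through selection, crossover, and mutation. The population size, rates, and fitness function are arbitrary choices for illustration.

```python
# Minimal genetic algorithm: evolve 16-bit strings toward all ones.
import random

random.seed(0)  # fixed seed for reproducibility

def fitness(ind):
    return sum(ind)

def evolve(pop_size=20, length=16, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection: keep top half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # single-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(length)          # mutation: maybe flip a bit
            child[i] ^= random.random() < 0.1
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # fitness of the best individual found
```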

You have implemented a new network for a customer. Management has requested that you implement anti-virus software that is capable of detecting all types of malicious code, including unknown malware. Which type of anti-virus software should you implement? signature-based detection heuristic detection immunization behavior blocking

Answer: heuristic detection Explanation: You should implement heuristic detection anti-virus software. This type of anti-virus software is capable of detecting all types of malicious code, including unknown malware. Signature-based detection anti-virus software detects viruses based on the virus signatures located in its database. If the virus signature is not in the database, the virus will not be detected, meaning this type of anti-virus software cannot detect unknown malware. Immunization anti-virus software attaches code to files or applications to make it appear as if the file or application was already infected. An immunizer is virus-specific, meaning that a new immunizer is needed for every virus; this method is considered obsolete. Behavior-blocking anti-virus software examines what is occurring on a system, looking for suspicious activity. If potential malware is detected, the offending software is terminated. Heuristic detection and behavior blocking are considered proactive. Signature-based detection cannot detect new malware.

During a recent security audit, you discover that a few users have been redirected to a fake Web site while browsing the Internet. Which type of attack has occurred? land attack hyperlink spoofing ICMP packet spoofing network address hijacking

Answer: hyperlink spoofing Explanation: Hyperlink spoofing, also referred to as Web spoofing, has occurred. Hyperlink spoofing is used by an attacker to persuade the Internet browser to connect to a fake server that appears as a valid session. The primary purpose of hyperlink spoofing is to gain access to confidential information, such as PIN numbers, credit card numbers, and bank details of users. Hyperlink spoofing takes advantage of people using hyperlinks instead of DNS addresses. In most scenarios, the DNS addresses are not visible, and the user is redirected to another fake Web site after clicking a hyperlink. A land attack involves sending a spoofed TCP SYN packet with the target host's IP address and an open port acting both as a source and a destination to the target host on an open port. The land attack causes the system to either freeze or crash because the machine continuously replies to itself. ICMP packet spoofing is used by a smurf attack to conduct a denial-of-service (DoS) attack. A smurf is a DoS attack that uses spoofed broadcast ping messages to flood a target host. In such an attack, the attacker sends a large amount of ICMP echo packets with spoofed source IP address similar to that of the target host to IP broadcast addresses. This results in the target host being flooded with echo replies from the entire network. This also causes the system to either freeze or crash. Network address hijacking allows the attacker to reroute data traffic from a network device to a personal computer. Network address hijacking, which is also referred to as session hijacking, enables an attacker to capture and analyze the data addressed to a target system. The attacker can gain access to critical resources and user credentials, such as passwords, and unauthorized access to the critical systems of an organization. Session hijacking involves taking control of an existing connection after the user has successfully created an authenticated session.

You have been tasked with the development of a new application for your organization. You are engaged in the project initiation phase. Which activity should you implement during this phase? certification and accreditation defining formal functional baseline functionality and performance tests identification of threats and vulnerabilities

Answer: identification of threats and vulnerabilities Explanation: Identification of threats and vulnerabilities takes place during the project initiation phase of an application development life cycle. The project initiation phase involves obtaining management approval and performing an initial risk analysis. Risk analysis identifies the potential threats and vulnerabilities based on the environment in which the product will perform data processing, the sensitivity of the data required, and the mechanisms that should be a part of the product as countermeasures. Certification and accreditation are processes implemented during the implementation of the product. Certification is the process of technically evaluating and reviewing a product to ensure that it meets the stated security requirements. Accreditation is a process that involves formal acceptance of the product and its responsibility by management. Accreditation is the final step in authorizing a system for use in an environment. Defining a formal functional baseline is included in the functional design analysis stage, not in the project initiation stage. A formal functional baseline can include security tasks and development, as well as testing plans to ensure that the security requirements are defined properly. Functionality and performance tests are conducted in an environment during software development to assess a product against a set of requirements. In a product development life cycle, it is important that security be a part of the overall design and be integrated at each stage of product development. The security of an application is most effective and economical when security is designed into the original application.

Which term describes a module's ability to perform its job without using other modules? high coupling low coupling high cohesion low cohesion

Answer: low coupling Explanation: Low coupling describes a module's ability to perform its job without using other modules. High coupling implies that a module must interact with other modules to perform its job. Cohesion reflects the different types of tasks that a module carries out. High cohesion means a module performs one clearly defined task, making it easier to update without affecting other modules. Low cohesion means a module carries out many different tasks, making it harder to maintain and reuse.
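The contrast can be sketched in code: the low-coupling function does its job with only its own inputs, while the high-coupling variant reaches into another module's internals. The names and tax rate are illustrative.

```python
# Low coupling, high cohesion: one task, no dependence on other modules.
def sales_tax(amount, rate):
    return round(amount * rate, 2)

# Higher coupling: the function depends on another object's internal
# attribute layout, so a change to Config can break it.
class Config:
    rate = 0.07

def sales_tax_coupled(amount, config):
    return round(amount * config.rate, 2)

print(sales_tax(100.0, 0.07))  # 7.0
```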

Which platform-independent virus is written in Visual Basic (VB) and is capable of infecting operating systems? macro virus stealth virus self-garbling virus polymorphic virus

Answer: macro virus Explanation: Macro viruses are programs written in Word Basic, Visual Basic, or VBScript. Macro viruses are platform independent and pose a major threat because their underlying language is simple and they are easy to develop. Macro viruses can infect operating systems and applications, but most often affect Microsoft Office files. The ability of macro viruses to move from one operating system to another allows them to spread more effectively than other types of viruses. Macro viruses are typically used with Microsoft Office products. A stealth virus hides the changes it makes to system files and boot records, making it difficult for antivirus software to detect its presence. A stealth virus keeps a copy of a file before infecting it and presents the original copy to the monitoring software. The stealth virus modifies the actual file, making it difficult to detect the presence of the virus. A self-garbling virus can hide itself from antivirus software by manipulating its own code. When a self-garbling virus spreads, it jumbles and garbles its own code to prevent the antivirus software from detecting its presence. A small part of the virus code later decodes the jumbled part to obtain the rest of the virus code and infect the system. The ability of the self-garbling virus to reformat its own code makes it difficult for antivirus software to detect its presence. A polymorphic virus produces different operational copies of itself to evade detection by antivirus software. There are usually multiple operational copies to ensure that, in the event of antivirus detection, only a few copies are caught. A polymorphic virus is also capable of implementing encryption routines that require different decryption routines to avoid detection. Macro viruses written in Visual Basic for Applications almost exclusively affect Microsoft Office documents rather than the operating system itself.

In object-oriented programming (OOP), what defines the functions that an object can carry out? message method class attribute

Answer: method Explanation: In object-oriented programming (OOP), the method defines the functions that an object can carry out. In OOP, each object belongs to a class. Each class contains several attributes. Each object within a class can take on the attributes of the class to which it belongs. Objects communicate with each other using messages. These messages tell objects to carry out certain operations. In an object-oriented system, the situation wherein objects with a common name respond differently to a common set of operations is called polymorphism. Inheritance occurs when all the methods of one class are passed on to a subclass. Smalltalk, Simula 67, and C++ are object-oriented languages. The OOP phase of the object-oriented software development life cycle emphasizes the employment of objects and methods, rather than types or transformations as in other software approaches.
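The terms above can be sketched together: a class defines methods, calling a method is sending the object a message, and two subclasses of a common superclass responding differently to the same message is polymorphism. The class names are illustrative.

```python
# Class, method, message, inheritance, and polymorphism in one sketch.
class Scanner:                     # superclass
    def scan(self, data):          # method: a function the object carries out
        raise NotImplementedError

class SignatureScanner(Scanner):   # inheritance from Scanner
    def scan(self, data):
        return "signature scan of " + data

class HeuristicScanner(Scanner):
    def scan(self, data):
        return "heuristic scan of " + data

for obj in (SignatureScanner(), HeuristicScanner()):
    print(obj.scan("file.exe"))    # same message, different behavior
```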

Management is concerned that attackers will attempt to access information in the database. They have asked you to implement database protection using bogus data in hopes that the bogus data will mislead attackers. Which technique is being requested? partitioning cell suppression trusted front-end noise and perturbation

Answer: noise and perturbation Explanation: The noise and perturbation technique is being requested. This technique involves inserting randomized bogus information along with valid records of the database to mislead attackers and protect database confidentiality and integrity. This alters the data but allows users to access relevant information from the database. This technique also creates enough confusion to prevent the attacker from telling the difference between valid and invalid information. Partitioning is not being requested. Partitioning is another protection technique for database security. Partitioning involves splitting the database into many parts, making it difficult for an intruder to collect and combine confidential information and deduce relevant facts. Cell suppression is not being requested. Cell suppression is the technique used to protect confidential information stored in databases by hiding the database cells that could be used to disclose confidential information. A trusted front-end is not being requested. A trusted front-end provides security to the database by incorporating security features into the front-end client software that is used to issue instructions to the back-end server using a structured query language. The trusted front-end client software acts as an interface to the back-end database system and provides the resultant output based on the input instructions issued by the user.
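A toy perturbation sketch: random noise is added to each stored value, so an attacker harvesting individual rows gets misleading figures, while the bounded noise keeps aggregate queries approximately correct. The data and the noise range are illustrative assumptions.

```python
# Perturb each record with bounded random noise.
import random

random.seed(1)  # fixed seed for reproducibility

salaries = [52000, 61000, 58000]
perturbed = [s + random.randint(-500, 500) for s in salaries]

print(perturbed)                        # altered per-row values
print(sum(perturbed) - sum(salaries))   # total error bounded by +/-1500
```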

After the completion of a software development project, management decides to reassign physical resources, after first ensuring that there is no residual data left on the medium. Which term is used to describe this practice? metadata object reuse polymorphism dynamic data exchange

Answer: object reuse Explanation: Object reuse refers to the allocation or reallocation of system resources after ensuring that there is no residual data left on the medium. All confidential data should be removed from the memory location or storage media before another subject can access the object; if a system allows simultaneous execution of multiple objects for different users, you should ensure that there is no disclosure of residual information that could be used for malicious purposes. Metadata provides an insight into obscure data relationships. Metadata is the result of a new correlation between data components based on user instructions. Metadata is extracted from data mining techniques. Polymorphism is an object-oriented programming (OOP) concept and implies that different objects can provide different output based on the same input. This is achieved due to the difference in the functional properties of objects, where each object performs a specific subtask. The Dynamic Data Exchange (DDE) mechanism enables direct communication between two applications by using interprocess communication (IPC). Based on the client/server model, DDE allows two programs to directly exchange commands with each other. The source of the data is referred to as the server, and the system accessing the data is referred to as the client.
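The idea behind object reuse can be sketched in a few lines: before a buffer is handed to another subject, it is overwritten so no residual data survives. This is only an illustration of the principle; real sanitization of storage media requires media-specific procedures.

```python
def zeroize(buf):
    """Overwrite a reusable buffer in place so that no residual data
    remains when the memory is reallocated to another subject."""
    for i in range(len(buf)):
        buf[i] = 0
    return buf

secret = bytearray(b"s3cr3t-password")
zeroize(secret)          # the medium now holds no residual data
```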

You have configured auditing for several security events on your Windows Server 2003 network. The Event Viewer logs are backed up on a daily basis. You configure the following settings for the Security log: - The Maximum event log size setting is set to 70,400 KB. - The Audit: Shut down system immediately if unable to log security events setting is enabled. - The Do not overwrite events setting is enabled. A few weeks later, the computer mysteriously shuts down. You discover that the Security event log settings are causing the problem. What could you do? (Each answer represents a unique possible solution.) a. Configure automatic log rotation. b. Disable the Audit: Shut down system immediately if unable to log security events setting. c. Enable the Overwrite events as needed setting. d. Decrease the size of the Security log. option a option b option c option d option a, b, or c option b, c, or d

Answer: option a, b, or c Explanation: To prevent the computer from shutting down due to the Security event log settings, you could: - Configure automatic log rotation. - Disable the Audit: Shut down system immediately if unable to log security events setting. - Enable the Overwrite events as needed setting. Any of these three steps will allow the Security log to continue recording security events. The problem occurs because the Security log has reached the 70,400 KB limit and overwriting events is not allowed. Another option would be to increase the size of the Security log. However, increasing the size would merely delay the problem. You should not decrease the size of the Security log. Decreasing the size of the Security log would cause the log to fill even sooner and the problem to recur.

Which extensions are used for naming batch files in a Microsoft environment? a. bat b. cmd c. dll d. exe option a option b option c option d options a and b only options b and c only options c and d only

Answer: options a and b only Explanation: The .bat and .cmd extensions are used for naming batch files in a Microsoft environment. The .dll file extension indicates a dynamic link library file. These are generally used in device drivers and for other configuration purposes. The .exe file extension indicates an executable file. Executable files are used to start programs and applications. Batch files are very similar to script files in the Unix environment. Scripts and batch files are created to decrease administrator workload. The files contain the commands to perform certain tasks. Common usage of these file types include file manipulation, text and report printing, and program execution. A batch file or script contains all the commands needed to execute and complete the tasks. It reduces administrative effort because the administrator simply starts the batch file, instead of having to execute each of the commands within the batch file separately. Batch files and scripts may contain login credentials. For this reason, they should be stored in a protected area.

You are hired by an organization that uses context-dependent access control for its databases. Upon which factor is this access control based? the sensitivity of the data the role of the user the state and sequence of the request the ACL

Answer: the state and sequence of the request Explanation: Context-dependent access control is based on the state and sequence of the request. Content-dependent access control is based on the sensitivity of the data. Content-dependent access control usually reviews the access control list (ACL) to determine whether the user is allowed to access data. Role-based access control (RBAC) is based on the role of the user.
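A toy sketch of context-dependent access control: the decision depends on the state and sequence of prior requests, not on the sensitivity of the data. The class name and the action sequence are illustrative assumptions.

```python
class ContextDependentGate:
    """Grants a request only when it arrives in the expected sequence;
    the decision depends on the state of prior requests, not on the
    sensitivity of the data itself."""

    SEQUENCE = ("login", "open_record", "view_detail")

    def __init__(self):
        self.step = 0          # how far through the sequence this session is

    def request(self, action):
        if self.step < len(self.SEQUENCE) and action == self.SEQUENCE[self.step]:
            self.step += 1
            return True
        return False           # out-of-sequence requests are denied

gate = ContextDependentGate()
```

Jumping straight to `view_detail` is denied, while the same request is granted once `login` and `open_record` have occurred first.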

During a software development project, you are concerned that the periodic progress of the project be monitored appropriately. Which techniques can be used? a. Gantt charts b. Unit testing c. Delphi technique d. Program Evaluation Review Technique charts e. Prototype Evaluation Review Technique charts option a option b option c option d option e options a and b only options c and d only options a and d only options c and e only

Answer: options a and d only Explanation: Periodic progress of a project can be monitored by using Gantt charts and Program Evaluation Review Technique (PERT) charts. Gantt charts are bar charts that represent the progress of tasks and activities over a period of time. Gantt charts depict the timing and the interdependencies between the tasks. Gantt charts are considered a project management tool to represent the scheduling of tasks and activities of a project, the different phases of the project, and their respective progress. Gantt charts serve as an industry standard. A PERT chart is a project management model invented by the United States Department of Defense. PERT is a method used for analyzing the tasks involved in completing a given project and the time required to complete each task. PERT can also be used to determine the minimum time required to complete the total project. Unit testing refers to the process in which the software code is debugged by a developer before it is submitted to the quality assurance team for further testing. The Delphi technique is used to ensure that each member in a group decision-making process provides an honest opinion on the subject matter in question. Group members are asked to provide their opinion on a piece of paper in confidence. All these papers are collected, and a final decision is taken based on the majority. The Delphi technique is generally used either during the risk assessment process or to estimate the cost of a software development project. A prototype is a model or a blueprint of the product and is developed according to the requirements of customers. There is no process known as a Prototype Evaluation Review Technique chart. Cost-estimating techniques include the Delphi technique, expert judgment, and function points.
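The PERT calculation of minimum project duration is just the longest (critical) path through the task dependency graph. A small sketch, with an invented project of four tasks:

```python
def pert_minimum_duration(tasks):
    """tasks maps name -> (duration, [dependencies]). Returns the minimum
    time to complete the whole project: the length of the critical path."""
    finish = {}

    def finish_time(name):
        if name not in finish:
            duration, deps = tasks[name]
            finish[name] = duration + max((finish_time(d) for d in deps), default=0)
        return finish[name]

    return max(finish_time(t) for t in tasks)

project = {
    "design": (3, []),
    "code":   (5, ["design"]),
    "test":   (2, ["code"]),
    "docs":   (4, ["design"]),
}
# critical path: design -> code -> test = 3 + 5 + 2 = 10
```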

You are developing a new database for your organization. The database will be used to track warehouse inventory. You need to ensure that each inventory item is uniquely identified in the database tables. Which key or keys should you use? a. tuple b. foreign c. primary d. attribute e. cell option a option b option c option d option e options a and b only options b and c only options a and d only

Answer: options b and c only Explanation: You should use primary and foreign keys. Primary and foreign keys uniquely identify a record in a database. A primary key uniquely identifies a row in a table. When a user requests a resource access, the database tracks the information by using its unique primary key. A primary key is selected from a set of candidate keys. A foreign key refers to a value that exists in an attribute of a table and matches the value of the primary key on another table. The foreign key value might not be the primary key value in its own table, but the foreign key value must match the primary key value on another table. Rows and columns in a relational database are referred to as tuples and attributes, respectively. The rows and columns form a table or a record containing information. A relational database model uses attributes and tuples for storing and organizing the information that can be extracted by users to meet their job responsibilities in the database. The two-dimensional table in which data is stored is referred to as a base relation. In a database, the total number of attributes in a table is referred to as the degree, and the number of tuples is referred to as the cardinality. A cell is the intersection of a row and a column. Some cells can be primary or foreign keys, but not all cells are. A candidate key uniquely identifies rows and can be a combination of more than one attribute to determine uniqueness. A base relation may have several candidate keys. The primary key is defined as a candidate key. A candidate key is important because it identifies individual tuples in a relation. While a relation actually exists within the database, a view is a virtual relation that is not stored in the database.
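Primary and foreign keys can be demonstrated with Python's built-in sqlite3 module; the inventory schema below is an invented example. A duplicate primary key or a foreign key with no matching primary key is rejected by the database engine.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
con.execute("""CREATE TABLE warehouse (
                   id   INTEGER PRIMARY KEY,   -- uniquely identifies each row
                   city TEXT)""")
con.execute("""CREATE TABLE item (
                   sku          INTEGER PRIMARY KEY,
                   warehouse_id INTEGER REFERENCES warehouse(id))""")
con.execute("INSERT INTO warehouse VALUES (1, 'Austin')")
con.execute("INSERT INTO item VALUES (100, 1)")       # FK matches an existing PK
try:
    con.execute("INSERT INTO item VALUES (101, 99)")  # no warehouse 99 exists
    fk_violation_allowed = True
except sqlite3.IntegrityError:
    fk_violation_allowed = False
```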

Which type of validation controls can be placed on the client side? access controls pre-validation controls post-validation controls input validation

Answer: pre-validation controls Explanation: Pre-validation controls can be placed on the client side. Parameter validation occurs when the parameter values entered into the application are validated before they are submitted to the application to ensure that the values lie within the server's defined limits. Pre-validation controls are input controls that are implemented prior to submission to the application. These controls can occur on the client, the server, or both. Access controls are not validation controls. Access controls limit access to resources to authorized users. Post-validation controls occur when an application's output is validated to be within certain constraints. Input validation is not a type of validation control. Input validation verifies the values that users input. Parameter validation validates parameters that are defined within the application.
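A sketch of a client-side pre-validation control: values are checked against defined limits before the form is ever submitted to the application. The field names and limits here are illustrative assumptions.

```python
def pre_validate(form):
    """Pre-validation sketch: reject missing or out-of-range values on the
    client before the request is submitted to the application."""
    errors = []
    qty = form.get("quantity")
    if not isinstance(qty, int) or not 1 <= qty <= 1000:
        errors.append("quantity must be an integer between 1 and 1000")
    if not form.get("item"):
        errors.append("item is required")
    return errors       # an empty list means the form may be submitted
```

Note that client-side checks are a convenience, not a defense: the server must re-validate because an attacker can bypass the client entirely.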

While developing a new system, the IT department considers the system's security requirements, such as encryption. Which phase of the system development life cycle is occurring? project initiation system development system design specification operations and maintenance

Answer: project initiation Explanation: The project initiation phase of the system development life cycle (SDLC) involves consideration of security requirements, such as encryption. Security requirements are considered a part of software risk analysis during the project initiation phase of the SDLC. The SDLC identifies the relevant threats and vulnerabilities based on the environment in which the product will perform data processing, the sensitivity of the data required, and the countermeasures that should be a part of the product. It is important that the SDLC methodology be adequate to meet the requirements of the business and the users. The system development phase of an application development life cycle includes coding and scripting of software applications. The system development stage ensures that the program instructions are written according to the defined security and functionality requirements of the product. The programmers build security mechanisms, such as audit trails and access control, into the software according to the predefined security assessments and the requirements of the application. The system design specification phase focuses on providing details on which kind of security mechanism will be a part of the software product. The system design specification phase also includes conducting a detailed design review and developing a plan for validation, verification, and testing. The organization developing the application will review the product specifications together with the customer to ensure that the security requirements are clearly stated and understood and that the functionality features are embedded in the product as discussed earlier. The involvement of security analysts at this phase ensures maximum benefit to the organization. This also enables you to understand the security requirements and features of the product and to report existing loopholes. 
The implementation stage of an application development life cycle involves the use of an application on production systems in the organization. Implementation implies use of the software in the company to meet business requirements. This is the stage where software can be analyzed to see whether it meets the business requirements. The implementation stage also involves the certification and accreditation processes. Certification and accreditation are the processes implemented during the implementation of the product. Certification is the process of technically evaluating and reviewing a product to ensure that it meets the security requirements. Accreditation is a process that involves a formal acceptance of the product and its responsibility by the management. In the National Information Assurance Certification and Accreditation Process (NIACAP), accreditation evaluates an application or system that is distributed to a number of different locations. NIACAP establishes the minimum national standards for certifying and accrediting national security systems. The four phases of NIACAP are definition, verification, validation, and post accreditation. The three types of NIACAP accreditation are site, type, and system. The operations and maintenance phase of the SDLC identifies and addresses problems related to providing support to the customer after the implementation of the product, patching vulnerabilities and resolving bugs, and authenticating users and processes to ensure appropriate access control decisions. The operations and maintenance phase of the software development life cycle involves the use of an operations manual, which includes the method of operation of the application and the steps required for maintenance. The maintenance phase controls consist of request control, change control, and release control. Disposal of software is the final stage of a software development life cycle. 
Disposal implies that the software would no longer be used for business requirements due to availability of an upgraded version or release of a new application that meets the business requirements more efficiently through new features and services. It is important that critical applications be disposed of in a secure manner to maintain data confidentiality, integrity, and availability for continuous business operations. The simplistic model of software life cycle development assumes that each step can be completed and finalized without any effect from the later stages that might require rework. In a system life cycle, information security controls should be part of the feasibility phase.

You are developing a new software application for a customer. The customer is currently defining the application requirements. Which process is being completed? sampling prototyping abstraction interpretation

Answer: prototyping Explanation: A prototype, or a blueprint of the product, is developed on the basis of customer requirements. Prototyping is the process of putting together a working model, referred to as a prototype, to test various aspects of a software design, to illustrate ideas or features, and to gather feedback in accordance with customer requirements. A prototype enables the development team and the customer to move in the right direction. Prototyping can provide significant time and cost savings because it will involve fewer changes later in the development stage. A product is developed in modules. Therefore, prototyping provides scalability. Complex applications can be further subdivided into multiple parts and represented by different prototypes. The software design and development tasks can be assigned to multiple teams. A sample is a generic term that identifies a portion that is representative of a whole. Interpreters are used to execute program code by translating one command at a time. Abstraction is an object-oriented programming (OOP) concept that refers to hiding unnecessary information to highlight important information or properties for analysis. Abstraction involves focusing on conceptual aspects and properties of an application to understand the information flow. Abstraction involves hiding small, redundant pieces of information to provide a broader picture.

Recently, a change request for an application was submitted and approved. After the change to the application is made, you test the application for functionality and performance. Which type of testing are you engaged in? regression testing unit testing integration testing acceptance testing

Answer: regression testing Explanation: You are engaged in regression testing. Regression testing involves testing the functionality, performance, and protection of an application after a change takes place. Unit testing is initial testing that occurs to ensure that individual application components are validated to data structure, logic, and boundary conditions. Integration testing verifies that application components work together as outlined in the design specifications. Acceptance testing ensures that the code meets customer requirements.
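A regression suite can be as simple as re-running a set of assertions about existing behavior after every approved change. The `discount` routine below is an invented example of a function that was just modified; the tests confirm that old behavior still holds, not just the new feature.

```python
import unittest

def discount(total):
    """Routine that was recently changed: 10% off orders of 100 or more."""
    return total * 0.9 if total >= 100 else total

class RegressionSuite(unittest.TestCase):
    """Re-run after every approved change to catch regressions in
    functionality that used to work."""

    def test_small_order_unchanged(self):
        self.assertEqual(discount(50), 50)

    def test_discount_boundary(self):
        self.assertEqual(discount(100), 90.0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite))
```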

Which database model uses tuples and attributes for storing and organizing information? relational model hierarchical model object-oriented model distributed data model

Answer: relational model Explanation: A relational database model uses attributes and tuples to store and organize information that can be extracted by users to meet their job responsibilities in the database. In a relational database, columns and rows are referred to as attributes and tuples, respectively. The two-dimensional table in which data is stored is referred to as a base relation table. In a database, the total number of columns or attributes in a table is referred to as the degree, and the number of rows or tuples is referred to as the cardinality. Attributes and tuples can have some specific allowable values in a database, and the total set of allowable values that can be assigned to attributes in a database is referred to as the domain of a relation. The domain of a relation specifies the interrelationship of the data components. In a hierarchical database, the data is organized into a logical tree structure instead of in rows and columns. Records and fields are related to each other in a parent-child tree structure. A hierarchical database tree structure has branches, and each branch has many leaves. In a hierarchical database, the leaves are the data fields, and the data is accessed through well-defined access paths. Hierarchical databases are used when one-to-many relationships exist. An object-oriented database is used to manage multiple types of data, such as images, audio, video, and documents. The different types of data, referred to as objects, are used to create dynamic data components. The primary difference between the relational and object-oriented database models is that in an object-oriented database, the objects can be dynamically created according to the requirements and instructions executed. In a relational database, the application uses procedures to extract the data. A distributed database model refers to multiple databases that are situated at remote locations and are logically connected. 
In a distributed database model, the transition from one database to another is kept transparent to the users. Logically connected databases appear as a single database to the users. The distributed database model allows different databases situated at remote locations to be managed individually by different database administrators. A distributed database has scalability features, such as load balancing and fault tolerance.

Which statement correctly defines spamming attacks? repeatedly sending identical e-mails to a specific address using ICMP oversized echo messages to flood the target computer sending spoofed packets with the same source and destination address sending multiple spoofed packets with the SYN flag set to the target host on an open port

Answer: repeatedly sending identical e-mails to a specific address Explanation: A spamming attack involves flooding an e-mail server or specific e-mail addresses repeatedly with identical unwanted e-mails. Spamming is the process of using an electronic communications medium, such as e-mail, to send unsolicited messages to users in bulk. Packet filtering routers typically do not prove helpful in such attacks because packet filtering routers do not examine the data portion of the packet. E-mail filter programs are now being embedded either in the e-mail client or in the server. E-mail filter programs can be configured to protect from spamming attacks to a great extent. A ping of death is a type of DoS attack that involves flooding target computers with oversized packets and exceeding the acceptable size during the process of reassembly. This causes the target computer to either freeze or crash. Other DoS attacks, named smurf and fraggle, deny access to legitimate users by causing a system to either freeze or crash. In a SYN flood attack, the attacker floods the target with the spoofed IP packets, causing it to either freeze or crash. The Transmission Control Protocol (TCP) uses the synchronize (SYN) and acknowledgment (ACK) packets to establish communication between two host computers. The exchange of the SYN, SYN-ACK, and ACK packets between two host computers is referred to as handshaking. Attackers flood the target computers with a series of SYN packets to which the target host computer replies. The target host computer then allocates resources to establish a connection. The IP address is spoofed. Therefore, the target host computer never receives a valid response in the form of ACK packets from the attacking computer. When the target computer receives many SYN packets, it runs out of resources to establish a connection with the legitimate users and becomes unreachable for the processing of valid requests. 
A land attack involves sending multiple spoofed TCP SYN packets with the target host's IP address and an open port as both the source and the destination to the target host on an open port. The land attack causes the system to either freeze or crash because the computer replies to itself.

Which type of virus is specifically designed to infect programs as they are loaded into memory? boot sector replication companion nonresident resident

Answer: resident Explanation: A resident virus is specifically designed to infect programs as they are loaded into memory. A companion virus is designed to take advantage of the extension search order of an operating system. A nonresident virus is part of an executable program file on a disk that is designed to infect other programs when the infected program file is started. A boot sector replicating virus is written to the boot sector of a hard disk on a computer and is loaded into memory each time a computer is started.

An attacker is in the process of making an unauthorized change to some data in your database. You need to cancel any database changes from the transaction and return the database to its previous state. Which database operation should you use? commit rollback savepoint checkpoint

Answer: rollback Explanation: You should use a rollback operation. A rollback operation cancels any database changes from the current transaction and returns the database to its previous state. It prevents a transaction from updating the database with partial or corrupt data. A commit operation finalizes any database changes from the current transaction, making the changes available to other users. A savepoint operation creates a logged point to which the database can be restored. It allows data to be restored to a certain point in time. A checkpoint operation saves data that is stored in memory to the database. It allows the memory to be cleared. When a database detects an error, a checkpoint enables it to start processing at a designated place.
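The rollback operation can be shown with sqlite3: a transaction makes an unwanted change, the rollback cancels it, and the database returns to its previous state. The `account` table is an invented example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO account VALUES (1, 500)")
con.commit()                                   # committed, legitimate state

con.execute("UPDATE account SET balance = 0 WHERE id = 1")  # unauthorized change
con.rollback()     # cancel the open transaction; the update is discarded

balance = con.execute("SELECT balance FROM account WHERE id = 1").fetchone()[0]
```

Had `con.commit()` been called instead, the change would have been finalized and made visible to other users.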

You need to ensure that data types and rules are enforced in the database. Which type of integrity should be enforced? entity integrity referential integrity semantic integrity cell suppression

Answer: semantic integrity Explanation: Semantic integrity should be enforced. Semantic integrity ensures that data types and rules are enforced. It includes checking data types, values, data constraints, and uniqueness rules. Semantic integrity protects the data by ensuring that data values follow all the rules. Entity integrity ensures that each row is identified by a unique primary key. Referential integrity ensures that each foreign key references a primary key that actually exists. Cell suppression is not a type of integrity. It is a technique used to hide certain cells.
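Semantic and entity integrity can both be seen in one sqlite3 table definition: the primary key enforces entity integrity, while `CHECK` and `NOT NULL` constraints enforce semantic rules about data types and allowable values. The inventory schema is an invented example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE inventory (
                   sku      INTEGER PRIMARY KEY,                     -- entity integrity
                   quantity INTEGER NOT NULL CHECK (quantity >= 0),  -- semantic rule
                   grade    TEXT CHECK (grade IN ('A', 'B', 'C')))""")
con.execute("INSERT INTO inventory VALUES (1, 10, 'A')")
try:
    con.execute("INSERT INTO inventory VALUES (2, -5, 'A')")  # violates the rule
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```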

Which pair of processes should be separated from each other to manage the stability of the test environment? testing and validity validity and security testing and development validity and production

Answer: testing and development Explanation: The testing and development processes should be separated from each other to manage the stability of the test environment. Separating the test environment and the development environment is an example of separation of duties. The responsibilities of the test and development staff in the software life cycle development process should be clearly distinguished. For example, debugging is performed by the programmer while coding the instructions. This process is known as unit testing. After the software program is submitted, it is again verified by the quality assurance team by using formal procedures and practices. It is not recommended that the same programmer develop the software, test it, and submit it to production. Separation of duties ensures that the quality assurance team conducts checks by using formal procedures. Software should be tested thoroughly before it is sent to the production environment. This will ensure that the software does not adversely affect the business operations of the organization. All the other options are invalid in the context of the software development life cycle and separation of duties.

You have implemented the three databases that your organization uses to ensure that an entire transaction must be executed to ensure data integrity. If a portion of a transaction cannot complete, the entire transaction is not performed. Which database security mechanism are you using? concurrency savepoints two-phase commit aggregation

Answer: two-phase commit Explanation: You are using a two-phase commit. A two-phase commit ensures that the entire transaction is executed to maintain data integrity. If a portion of a transaction cannot complete, the entire transaction is not performed. Concurrency ensures that the most up-to-date information is shown to database users. To ensure concurrency, locks are often implemented at the page, table, row, or field level to ensure updates happen one at a time. Savepoints ensure that the database can return to a previous state if a system failure occurs. A savepoint will usually save part of the data update. Savepoints are a security mechanism to ensure data integrity. Aggregation occurs when a user takes information from different sources and combines it to deduce information that the user does not have the clearance to view directly.
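A minimal sketch of the two-phase commit protocol across the three databases: in phase one each participant votes on whether it can commit; only if every vote is "ready" does phase two commit on all of them, otherwise every participant aborts. The class and method names are illustrative, not a real database API.

```python
class Participant:
    """One database taking part in the distributed transaction."""
    def __init__(self, will_succeed=True):
        self.will_succeed = will_succeed
        self.committed = False

    def prepare(self):          # phase 1: vote on whether commit is possible
        return self.will_succeed

    def commit(self):           # phase 2: make the change durable
        self.committed = True

    def abort(self):
        self.committed = False

def two_phase_commit(participants):
    """Commit on every database only if every database votes ready."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.abort()               # one failure cancels the entire transaction
    return False

dbs = [Participant(), Participant(), Participant(will_succeed=False)]
```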

During the application development life cycle, your team performs testing to debug the code instructions. Which software testing method is the team using? unit testing vertical testing blue-box testing perpendicular testing

Answer: unit testing Explanation: As part of the application development life cycle, unit testing is internal testing performed to debug the code instructions. Unit testing is performed by the developer rather than by the quality assurance team. After the code is developed, it is sent to the quality assurance team for evaluation and detection of anomalies, functional errors, and security loopholes. Unit testing can use test design methods, such as white box and black box. Keep the unit testing guidelines in mind: - The test data is part of the specification. - Correct test output results should be developed and known beforehand. - Testing should check for out-of-range values and other bounds conditions. Perpendicular testing, blue-box testing, and vertical testing are not valid categories of software test approaches in an application development life cycle. Black-box testing does not explicitly use knowledge of the internal structure. The black-box test design typically focuses on testing functional requirements. Black-box testing implies that the selection of test data and the interpretation of test results are performed on the basis of the functional properties of software rather than its internal structure. The white-box technique focuses only on testing the design and internal logical structure of the software product rather than its functionality. In general, software testing should be planned, and the results of the tests should be documented throughout the software development life cycle as permanent records.
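The unit testing guidelines above — known-correct expected outputs, bounds checks, and out-of-range values — can be sketched against a small invented unit:

```python
def parse_port(value):
    """Unit under test: converts text to a TCP port number (0-65535)."""
    port = int(value)
    if not 0 <= port <= 65535:
        raise ValueError("port out of range")
    return port

# expected outputs are known beforehand; bounds are tested explicitly
assert parse_port("0") == 0            # lower bound
assert parse_port("65535") == 65535    # upper bound
try:
    parse_port("65536")                # just past the bound must be rejected
    boundary_enforced = False
except ValueError:
    boundary_enforced = True
```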

What is the definition of polymorphism? the ability to suppress superfluous details so that the important properties can be examined when different objects respond to the same command or input in different ways a representation of a real-world problem the process of categorizing objects that will be appropriate for a solution

Answer: when different objects respond to the same command or input in different ways Explanation: Polymorphism occurs when different objects respond to the same command or input in different ways. Abstraction is the ability to suppress superfluous details so that the important properties can be examined. Object-oriented design (OOD) is a representation of a real-world problem. Object-oriented analysis (OOA) is the process of categorizing objects that will be appropriate for a solution.
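Polymorphism in a few lines: two unrelated objects receive the same message, `area()`, and each responds in its own way. The shape classes are an invented example.

```python
class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

# the same command, area(), produces a different response from each object
areas = [shape.area() for shape in (Square(2), Circle(1))]
```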

Match each description with the application attack it defines. Attacks: Buffer overflow, Cross-site scripting (XSS), Session hijacking, Zero-day attack. Descriptions: - an attack that occurs when user validation information is stolen and used to establish a connection - an attack that occurs on the day when an application vulnerability has been discovered - an attack that allows code injection by hackers into the Web pages viewed by other users - an attack that occurs when an application receives more data than it is programmed to accept

Explanation: The application attacks should be matched with the descriptions in the following manner: Buffer overflow - an attack that occurs when an application receives more data than it is programmed to accept Cross-site scripting (XSS) - an attack that allows code injection by hackers into the Web pages viewed by other users Session hijacking - an attack that occurs when user validation information is stolen and used to establish a connection Zero-day attack - an attack that occurs on the day when an application vulnerability has been discovered
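Two of these attacks have simple defensive counterparts worth sketching: rejecting input longer than the application is programmed to accept (the buffer overflow condition) and escaping markup before it reaches pages viewed by other users (the XSS condition). The function and limit below are illustrative assumptions, not a complete defense.

```python
import html

def render_comment(user_input, max_len=280):
    """Defensive sketch: enforce the accepted input length and escape
    markup so injected script cannot execute in other users' browsers."""
    if len(user_input) > max_len:
        raise ValueError("input exceeds the length the application accepts")
    return html.escape(user_input)   # < > & " ' become harmless entities

safe = render_comment("<script>alert(1)</script>")
```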

What is the purpose of the method in object-oriented programming (OOP)?

It defines the functions that an object can carry out.

What does verification provide during the software development life cycle (SDLC)?

It determines if the software meets its design specifications.

What does validation provide during the software development life cycle (SDLC)?

It determines if the software meets the needs for which it was created.

What occurs in inference attacks?

The subject deduces the complete information about an object from the bits of information collected through aggregation.

Which model describes the principles, procedures, and practices that should be followed by an organization in a software development life cycle and defines five maturity levels?

capability maturity model (CMM)

What does the acronym DDE denote?

dynamic data exchange

Which type of integrity ensures that each row is identified by a unique primary key?

entity integrity

