CSSLP Sample Exam (2017)


QUESTION 191 The BEST response for an organization when accepting a software product that has known issues is to create which of the following? A. Risk exception B. Service Level Agreement (SLA) C. Disaster Recovery Plan (DRP) D. Terms of usage agreement

Correct Answer: A Explanation/Reference: Exception Approval and Risk Acceptance Process, by Lori Kleckner. This installment of our Next Generation Security blog discusses areas in the emergency call center environment which are not compliant with security policies, laws or regulations governing the environment, specifically the network. Risks exist; they are unavoidable. The actions taken to minimize negative impact demonstrate due care to protect resources, including devices, data and the ability to provide emergency services to the public. The National Emergency Number Association (NENA) Security for Next-Generation 9-1-1 Standard (NG-SEC) offers a structure for policies supporting the exception approval and risk acceptance process.
Risk Exception and Acceptance. Risk exception begins with identifying an exposure. This means recognizing an area where you are not compliant with the policies, laws or regulations in place. Your resources are at risk of exposure to malicious activity or of penalties due to non-compliance. Risk acceptance is acknowledging and agreeing to the consequences of exposure. You know your resources are exposed and accept the damage that could occur due to that exposure. Risk mitigation is the action in place to minimize the impact of the consequences.
The Process. Risk assessment: Conduct an assessment (we of course recommend the NENA NG-SEC Standard) to compare your environment to a standard developed for emergency call centers. From this assessment, weigh the following factors to determine your level of risk:
Potential severity - how bad could it be
Impact of the risk - what will happen, how much damage will it cause
Likelihood - will it happen, how often
Risk analysis: Evaluate the feasibility and cost of mitigation strategies relative to the potential cost impact. Consider:
How can we correct this?
Will this cost more before or after it occurs?
Can we correct this now, or ever?
Risk response: There are four countermeasures to the existence of a risk:
Eliminate - correct the situation
Reduce - mitigate the likelihood of occurrence or minimize the impact
Transfer - move the responsibility or liability to another source, e.g. third-party outsourcing
Accept - with no other options available, accept the consequences
Risk acceptance and approval: When risk cannot be eliminated, reduced to an acceptable level or transferred to another source, it must be accepted and approval from leadership must be obtained. Gaining approval from leadership provides awareness at the top level of the organization and engages allies to further support risk mitigation. It is understood that removing a risk or reducing a risk to an acceptable level is not always possible. Reasons are often:
Funding - not a budget priority at this time
Technical - risk mitigation or removal is not supported by current devices
Resources - there are simply not enough personnel available to support mitigation or removal
Findings from assessments performed by L.R. Kimball reveal that an exception approval and risk acceptance process is not common among emergency call centers. Having a formal process in place not only identifies areas of concern but provides a plan to decrease, eliminate or at least prepare for the consequences of risks. The process also provides insight for future purchases that may aid in further reducing risk.
Summary. We have touched on the high-level points for exception approval and risk acceptance. Determining your level of risk requires an assessment and knowledge of the security policies, laws and regulations governing your environment. The NG-SEC standard provides an in-depth process for risk analysis, mitigation and acceptance. Risks exist; they are unavoidable. By planning, you will create awareness throughout your organization of the risks present and increase your ability to minimize the consequences. The next installment of our NG-SEC series will discuss Incident Response and Planning. This will complete our review of the 10 sections of the NG-SEC standard. We will follow the final section with an overview and summary of the NG-SEC series.
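For teams that track such decisions in tooling, a minimal sketch of what a risk-exception record might look like is shown below, in Java. The class, field names, and 1-5 scales are illustrative assumptions, not part of NG-SEC or any standard; the point is simply that an accepted risk carries leadership sign-off and a review date.

```java
import java.time.LocalDate;

// Hypothetical record of a risk exception, loosely following the process described above:
// identify the exposure, choose a response, and capture leadership approval when the
// residual risk is accepted. Names and fields are illustrative, not from any standard.
public class RiskException {

    enum Response { ELIMINATE, REDUCE, TRANSFER, ACCEPT }

    private final String exposure;       // the non-compliant area or known issue
    private final int severity;          // how bad could it be (1-5)
    private final int likelihood;        // how likely is it to happen (1-5)
    private final Response response;
    private String approvedBy;           // leadership sign-off, required for ACCEPT
    private LocalDate reviewDate;        // exceptions should be revisited, not forgotten

    public RiskException(String exposure, int severity, int likelihood, Response response) {
        this.exposure = exposure;
        this.severity = severity;
        this.likelihood = likelihood;
        this.response = response;
    }

    /** Accepted risks are only valid once leadership has signed off. */
    public void approve(String approver, LocalDate nextReview) {
        if (response != Response.ACCEPT) {
            throw new IllegalStateException("Only accepted risks need a formal exception approval");
        }
        this.approvedBy = approver;
        this.reviewDate = nextReview;
    }

    public boolean isApproved() {
        return response != Response.ACCEPT || approvedBy != null;
    }
}
```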

QUESTION 175 Which of the following BEST makes a security metric credible? A. Consistently measured B. Qualitative C. Ratings-oriented D. Audit-based

Correct Answer: A Explanation/Reference: See http://sunnyday.mit.edu/16.355/metrics.pdf (29 pages)
Abstract—Software metrics have been proposed as instruments, not only to guide individual developers in their coding tasks, but also to obtain high-level quality indicators for entire software systems. Such system-level indicators are intended to enable meaningful comparisons among systems or to serve as triggers for a deeper analysis. Common methods for aggregation range from simple mathematical operations (e.g. addition and central tendency) to more complex methodologies such as distribution fitting, wealth inequality metrics (e.g. Gini coefficient and Theil Index) and custom formulae. However, these methodologies provide little guidance for interpreting the aggregated results or for tracing back to individual measurements. To resolve such limitations, a two-stage rating approach has been proposed where (i) measurement values are compared to thresholds to summarize them into risk profiles, and (ii) risk profiles are mapped to ratings. In this paper, we extend our approach for deriving metric thresholds from benchmark data into a methodology for benchmark-based calibration of two-stage aggregation of metrics into ratings. We explain the core algorithm of the methodology and we demonstrate its application to various metrics of the SIG quality model, using a benchmark of 100 software systems.
The IEEE defines reliability as "The ability of a system or component to perform its required functions under stated conditions for a specified period of time." To most project and software development managers, reliability is equated to correctness; that is, they look to testing and the number of "bugs" found and fixed. While finding and fixing bugs discovered in testing is necessary to assure reliability, a better way is to develop a robust, high-quality product through all of the stages of the software lifecycle. That is, the reliability of the delivered code is related to the quality of all of the processes and products of software development: the requirements documentation, the code, test plans, and testing. Software reliability is not as well defined as hardware reliability, but the Software Assurance Technology Center (SATC) at NASA is striving to identify and apply metrics to software products that promote and assess reliability. This paper discusses how NASA projects, in conjunction with the SATC, are applying software metrics to improve the quality and reliability of software products. Reliability is a by-product of quality, and software quality can be measured. We will demonstrate how these quality metrics assist in the evaluation of software reliability. We conclude with a brief discussion of the metrics being applied by the SATC to evaluate reliability.
I. Definitions
Software cannot be seen nor touched, but it is essential to the successful use of computers. It is necessary that the reliability of software should be measured and evaluated, as it is in hardware. IEEE 610.12-1990 defines reliability as "The ability of a system or component to perform its required functions under stated conditions for a specified period of time." IEEE 982.1-1988 defines Software Reliability Management as "The process of optimizing the reliability of software through a program that emphasizes software error prevention, fault detection and removal, and the use of measurements to maximize reliability in light of project constraints such as resources, schedule and performance."
Using these definitions, software reliability comprises three activities: error prevention; fault detection and removal; and measurements to maximize reliability, specifically measures that support the first two activities. There has been extensive work in measuring reliability using mean time between failure and mean time to failure.[1] Successful modeling has been done to predict error rates and reliability.[1,2,3] These activities address the first and third aspects of reliability, identifying and removing faults so that the software works as expected with the specified reliability. These measurements have been successfully applied to software as well as hardware. But in this paper, we would like to take a different approach to software reliability, one that addresses the second aspect of reliability, error prevention.
II. Errors, Faults and Failures
The terms errors, faults and failures are often used interchangeably, but they do have different meanings. In software, an error is usually a programmer action or omission that results in a fault. A fault is a software defect that causes a failure, and a failure is the unacceptable departure of a program operation from program requirements. When measuring reliability, we are usually measuring only defects found and defects fixed.[4] If the objective is to fully measure reliability, we need to address prevention as well as investigate the development starting in the requirements phase - what the programs are developed to. It is important to recognize that there is a difference between hardware failure rate and software failure rate. For hardware, as shown in Figure 1, when the component is first manufactured, the initial number of faults is high but then decreases as the faulty components are identified and removed or the components stabilize. The component then enters the useful life phase, where few, if any, faults are found. As the component physically wears out, the fault rate starts to increase.[1] Software, however, has a different fault or error identification rate. For software, the error rate is at its highest level at integration and test. As it is tested, errors are identified and removed. This removal continues at a slower rate during its operational use, the number of errors continually decreasing, assuming no new errors are introduced. Software does not have moving parts and does not physically wear out as hardware does, but it does outlive its usefulness and become obsolete.[5] To increase reliability by preventing software errors, the focus must be on comprehensive requirements and a comprehensive testing plan, ensuring all requirements are tested. Focus also must be on the maintainability of the software, since there will be a "useful life" phase where sustaining engineering will be needed. Therefore, to prevent software errors, we must:
Start with the requirements, ensuring the product developed is the one specified, and that all requirements clearly and accurately specify the final product functionality.
Ensure the code can easily support sustaining engineering without infusing additional errors.
Include a comprehensive test program that verifies all functionality stated in the requirements.
Reliability as a Quality Attribute
There are many different models for software quality, but in almost all models, reliability is one of the criteria, attributes or characteristics that is incorporated. ISO 9126 [1991] defines six quality characteristics, one of which is reliability.
IEEE Std 982.2-1988 states "A software reliability management program requires the establishment of a balanced set of user quality objectives, and identification of intermediate quality objectives that will assist in achieving the user quality objectives." [6] Since reliability is an attribute of quality, it can be concluded that software reliability depends on high quality software. Building high reliability software depends on the application of quality attributes at each phase of the development life cycle with the emphasis on error prevention, especially in the early life cycle phases. Metrics are needed at each development phase to measure applicable quality attributes. IEEE Std 982.2-1988 includes the diagram in Figure 2, indicating the relationship of reliability to the different life cycle phases.[7]
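As an illustration of the two-stage aggregation the abstract describes (thresholds summarize raw measurements into a risk profile, and the profile is mapped to a rating), here is a hedged Java sketch. The complexity thresholds and rating cut-offs are invented for the example; a real calibration would derive them from benchmark data, as the paper explains. The credibility the question points to comes from measuring every system the same way against the same thresholds.

```java
import java.util.List;

// A minimal sketch of two-stage metric aggregation: individual measurements (e.g. method
// complexity) are first binned into a risk profile using thresholds, then the profile is
// mapped to a rating. Thresholds and cut-off percentages below are illustrative only.
public class MetricRating {

    /** Stage 1: summarize raw measurements into the share of code in each risk band. */
    static double[] riskProfile(List<Integer> complexities) {
        double low = 0, moderate = 0, high = 0, veryHigh = 0;
        for (int c : complexities) {
            if (c <= 10)      low++;
            else if (c <= 20) moderate++;
            else if (c <= 50) high++;
            else              veryHigh++;
        }
        double n = complexities.size();
        return new double[] { low / n, moderate / n, high / n, veryHigh / n };
    }

    /** Stage 2: map the risk profile to a 1-5 rating using illustrative cut-offs. */
    static int rating(double[] profile) {
        double moderate = profile[1], high = profile[2], veryHigh = profile[3];
        if (veryHigh == 0    && high <= 0.05 && moderate <= 0.25) return 5;
        if (veryHigh <= 0.05 && high <= 0.15 && moderate <= 0.35) return 4;
        if (veryHigh <= 0.10 && high <= 0.25 && moderate <= 0.45) return 3;
        if (veryHigh <= 0.15 && high <= 0.35 && moderate <= 0.55) return 2;
        return 1;
    }

    public static void main(String[] args) {
        List<Integer> methodComplexities = List.of(3, 5, 8, 12, 14, 22, 4, 60, 7, 9);
        System.out.println(rating(riskProfile(methodComplexities)));   // prints 3 for this sample
    }
}
```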

QUESTION 187 In the Agile development methodology, when are threat models of the system completed? A. By the first sprint and before any code is written B. During a dedicated security sprint C. Gradually in each sprint as each feature is developed D. After the last sprint when the entire application is stable

Correct Answer: A Explanation/Reference: Threat Modeling and Agile Development Practices. Published: February 21, 2012. Author: Chas Jeffries, Security Architect, Microsoft Services.
Strong demand for rapid application delivery continues to drive today's fast-paced application development cycles. There are more and more web services, new cloud applications, and mobile applications. Just because your application needs to be developed rapidly doesn't mean that you can't develop that application with privacy and security in mind. In a previous article, I wrote about how you can use the Microsoft Simplified Security Development Lifecycle (SDL) for web application development. In that article, I briefly discussed how threat modeling can be used to understand threats to your application and how to mitigate them. In this article, we are going to closely examine how to effectively perform threat modeling for projects that demand rapid development processes. Before we dive into the details on threat modeling, let's briefly review how threat modeling fits into the SDL.
Threat Modeling and the SDL
My colleague, Michael Howard, who is co-author of the book The Security Development Lifecycle, likes to say that, "If you're only going to do one activity from the SDL, it should be threat modeling!" I couldn't agree more; hopefully, this article will demonstrate how easily threat modeling can fit into your agile development practices. Let's quickly review the basics of threat modeling before getting into how threat modeling integrates into agile development methodologies.
Threat modeling overview
Threat modeling is a fundamental activity of the SDL and the main software engineering artifact that results from following the SDL. A threat model is a diagram that encapsulates the various interactions of your envisioned application or service, including both internal and external factors. A fundamental concept in threat modeling is that of "trust" boundaries. These are the points of demarcation between parts of your application where threats are most likely to occur. The most obvious example of a trust boundary for web applications is the demarcation between the user's browser/computer and your application interface, which resides on a server somewhere on the Internet. Your application faces a myriad of potential and unanticipated threats beyond those you might expect from your trusted user or customer. Using trust boundaries in threat modeling simplifies the identification and classification of threats.
What threat modeling is and what it isn't
To understand what threat modeling is, it's helpful to also look at what it isn't. Those who are new to threat modeling often misunderstand its purpose or intent and often will argue that creating yet another system diagram is a duplication of effort for the overall project. This is a heightened concern when teams are focused on realizing the speed and efficiencies of agile development methodologies. The goal of this article is to demonstrate why threat models are valuable and stand on their own as software engineering artifacts for your agile project.
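As a rough illustration of how trust boundaries drive threat identification, the sketch below enumerates the STRIDE categories for each data flow that crosses a boundary. The model elements and flow names are hypothetical, and this is only one way such an enumeration could be tooled, not a prescribed SDL artifact.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: enumerate the STRIDE threat categories for every data flow
// that crosses a trust boundary, which is one common way SDL-style threat modeling is
// organized. The model elements and flows below are hypothetical.
public class ThreatEnumerator {

    enum Stride { SPOOFING, TAMPERING, REPUDIATION, INFORMATION_DISCLOSURE,
                  DENIAL_OF_SERVICE, ELEVATION_OF_PRIVILEGE }

    record DataFlow(String from, String to, boolean crossesTrustBoundary) {}

    static List<String> candidateThreats(List<DataFlow> flows) {
        List<String> threats = new ArrayList<>();
        for (DataFlow flow : flows) {
            if (!flow.crossesTrustBoundary()) continue;   // boundaries are where threats concentrate
            for (Stride category : Stride.values()) {
                threats.add(category + " on flow " + flow.from() + " -> " + flow.to());
            }
        }
        return threats;
    }

    public static void main(String[] args) {
        List<DataFlow> flows = List.of(
            new DataFlow("browser", "web app", true),        // classic trust boundary
            new DataFlow("web app", "internal cache", false));
        candidateThreats(flows).forEach(System.out::println);
    }
}
```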

QUESTION 190 Why would security design patterns be used in system implementation? A. To solve commonly occurring security problems B. To properly secure the code being created C. To extend common security requirements D. To address end-user input security processes

Correct Answer: A Explanation/Reference: Security patterns can be applied to achieve goals in the area of security. All of the classical design patterns have different instantiations to fulfill some information security goal, such as confidentiality, integrity, and availability. Additionally, one can create a new design pattern to specifically achieve some security goal.
Available system patterns. These are patterns concerned with the availability of assets. The assets are either services or resources offered to users.
Checkpointed system pattern describes a design that uses replication and recovers when a component fails.
Standby pattern has the goal of providing a fallback component able to resume the service of the failing component.
Comparator-checked fault-tolerant system pattern provides a way to monitor the failure-free behavior of a component.
Replicated system pattern describes a design of redundant components and a means of load balancing and redirection between them to decrease the chance of non-availability of the service.
Error detection/correction pattern has the goal of detecting errors and possibly correcting them to guarantee correct information exchange or storage.
Protected system patterns. This is a set of patterns concerned with the confidentiality and integrity of information, providing means to manage access and usage of the sensitive data.
The protected system pattern provides a reference monitor or enclave that owns the resources and therefore must be passed through to get access. The monitor enforces a policy as the single point of access. The GoF refers to it as "Protection Proxy".
The policy pattern is an architecture to decouple the policy from the normal resource code. An authenticated user owns a security context (e.g. a role) that is passed to the guard of the resource. The guard checks inside the policy whether the context of this user and the rules match, and provides or denies access to the resource.
The authenticator pattern is also known as the Pluggable Authentication Modules or Java Authentication and Authorization Service (JAAS).
Subject descriptor pattern.
Secure communication is similar to Single sign-on, RBAC.
Security context is a combination of the communication protection proxy, security context and subject descriptor patterns.
Security association is an extension of the secure communication pattern.
Secure proxy pattern can be used for defense in depth.
Security patterns for Java EE, XML Web Services and Identity Management. This is a set of security patterns evolved by Sun Java Center - Sun Microsystems engineers Ramesh Nagappan and Christopher Steel - which helps build end-to-end security into multi-tier Java EE enterprise applications and XML-based Web services, enabling identity management in Web applications including single sign-on authentication, multi-factor authentication, and identity provisioning in Web-based applications.
Authentication Enforcer pattern can be used to manage and delegate authentication processes.
Authorization Enforcer pattern can be used to manage and delegate authorization processes.
Intercepting Validator pattern helps perform security validation of input data from clients.
Secure Base Action pattern shows how to centralize the handling of security tasks in a base action class.
Secure Logger pattern can be used to log sensitive data and ensure tamper-proof storage.
Secure Session Manager shows how to securely centralize session information handling.
Web Agent Interceptor pattern shows how to use an interceptor mechanism to provide security for Web applications.
Obfuscated Transfer Object pattern shows how to protect data passed around in transfer objects and between application tiers.
Audit Interceptor pattern shows how to capture security-related events to support logging and auditing.
Message Inspector pattern shows verification and validation of XML message-level security mechanisms, such as XML Signature and XML Encryption, in conjunction with a security token.
Message Interceptor Gateway pattern shows a single entry point solution for centralizing security enforcement for incoming and outgoing XML Web Service messages. It helps to apply the transport-level and message-level security mechanisms required for securely communicating with a Web services endpoint.
Secure Message Router pattern facilitates secure XML communication with multiple partner endpoints that adopt message-level security. It acts as a security intermediary component that applies message-level security mechanisms to deliver messages to multiple recipients, where the intended recipient is able to access only the required portion of the message and the remaining message fragments are kept confidential.
Single Sign-On (SSO) Delegator pattern describes how to construct a delegator agent for handling a legacy system for single sign-on (SSO).
Assertion Builder pattern defines how an identity assertion (for example, an authentication assertion or authorization assertion) can be built.
Credential Synchroniser pattern describes how to securely synchronize credentials and principals across multiple applications using identity provisioning.
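To make the core idea of the Protected System pattern (the GoF Protection Proxy) concrete, here is a minimal Java sketch. The interface, role check, and class names are illustrative assumptions; a real reference monitor would consult an externalized policy rather than a hard-coded role.

```java
// A minimal sketch of the Protected System / Protection Proxy idea described above:
// every request must pass through a single guard that checks the caller's security
// context against a policy before the real resource is reached.
interface Document {
    String read();
}

class FileDocument implements Document {
    private final String contents;
    FileDocument(String contents) { this.contents = contents; }
    public String read() { return contents; }
}

class DocumentProtectionProxy implements Document {
    private final Document target;        // the protected resource
    private final String requiredRole;    // the policy, kept deliberately simple here
    private final String callerRole;      // the caller's security context

    DocumentProtectionProxy(Document target, String requiredRole, String callerRole) {
        this.target = target;
        this.requiredRole = requiredRole;
        this.callerRole = callerRole;
    }

    public String read() {
        // The proxy is the single point where the policy is enforced; the resource
        // itself cannot be reached without going through it.
        if (!requiredRole.equals(callerRole)) {
            throw new SecurityException("Access denied: role " + callerRole + " may not read this document");
        }
        return target.read();
    }
}
```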

QUESTION 53 Prior to releasing a new software product, an application manager MUST have completed and approved which of the following? A. Risk acceptance B. Test report C. Quality review report D. Deployment script

Correct Answer: B Explanation/Reference: A software test report is a document that records data obtained from an evaluation experiment in an organized manner, describes the environmental or operating conditions, and shows the comparison of test results with test objectives.

QUESTION 69 Web pages that display unvalidated and unencoded user input are MOST vulnerable to which of the following attacks? A. Structured Query Language (SQL) Injection B. Cross-site Scripting (XSS) C. Cross-Site Request Forgery (CSRF) D. Man-in-the-Middle (MITM)

Correct Answer: B Explanation/Reference: Cross-site scripting (XSS) is a type of computer security vulnerability typically found in web applications. XSS enables attackers to inject client-side scripts into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypass access controls such as the same-origin policy.
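A minimal sketch of the corresponding defense, output encoding, is shown below in Java. The hand-rolled encoder is for illustration only; in practice a maintained library such as the OWASP Java Encoder would normally be preferred.

```java
// Minimal illustration of contextual output encoding: user-supplied data is encoded before
// it is written into an HTML page, so characters like '<' cannot break out into markup.
public class HtmlEncode {

    static String encodeForHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<'  -> out.append("&lt;");
                case '>'  -> out.append("&gt;");
                case '&'  -> out.append("&amp;");
                case '"'  -> out.append("&quot;");
                case '\'' -> out.append("&#x27;");
                default   -> out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String userInput = "<script>alert('xss')</script>";
        // The script tag is rendered as inert text instead of executing in the victim's browser.
        System.out.println("<p>" + encodeForHtml(userInput) + "</p>");
    }
}
```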

QUESTION 189 Which of the following is the MOST thorough method of identifying backdoors created by insiders within an application? A. Black-box testing B. Code review C. Penetration testing D. Vulnerability scanning

Correct Answer: B Explanation/Reference: Malice Or Stupidity Or Inattention? Using Code Reviews To Find Backdoors. Wednesday, March 2, 2016, at 9:27 AM.
The temptation to put a backdoor into a product is almost overwhelming. It's just so dang convenient. You can go into any office, any lab, any customer site and get your work done. No hassles with getting passwords or clearances. You can just solve problems. You can log into any machine and look at logs, probe the box, issue commands, and debug any problem. This is very attractive to programmers. I've been involved in several command line interfaces to embedded products and though the temptation to put in a backdoor has been great, I never did it, but I understand those who have.
There's Another Source Of Backdoors: Infiltration By An Attacker
We've seen a number of backdoors hidden in code bases you would not expect. Juniper Networks found two backdoors in its firewalls. Here's Some Analysis of the Backdoored Backdoor. Here's more information to reaffirm your lack of faith in humanity: NSA Helped British Spies Find Security Holes In Juniper Firewalls. And here are A Few Thoughts on Cryptographic Engineering. Juniper is not alone. Here's a backdoor in AMX AV equipment. A Secret SSH backdoor in Fortinet hardware was found in more products. There were Backdoors Found in Barracuda Networks Gear. And The 12 biggest, baddest, boldest software backdoors of all time. Who knows how many backdoors are embedded in chips? Security backdoor found in China-made US military chip. And so on. By now we can pretty much assume backdoors are the rule, not the exception.
Backdoors Are A Cheap Form Of Attack
The problem is twofold: backdoors are very economical to insert and backdoors are almost impossible to test for. Backdoors are economical because all you need is a human asset on the inside to insert code into a code base or circuits into a netlist. Or perhaps an attacker can corrupt the tools that produce the code and the chips? Or perhaps an attacker can corrupt the manufacturer who is spinning your ASIC?
You Can Never Know Who You Are Hiring
Is that incredibly talented candidate someone who really wants to work for you? Or are they a plant? I know I've wondered this when hiring people. Isn't that person a little too perfect to want to work here? In most shops it would be trivially easy to insert malicious code into a code base. Employees are trusted. As they must be for a team of people to get quality work done. Trojan employees are not the only attack vector. Maybe one of your loyal longstanding employees has been compromised. Perhaps they are being blackmailed into inserting a backdoor? Yes, this is the way we have to think now. It sucks.
Testing Only Tests What You Expect To Be There
How would you know a backdoor has been installed? Testing usually only tests for positive assertions of facts. Does system Y do X? How would you test for a backdoor that you don't know is there? You might only know if the backdoor itself caused an externally visible signal. Perhaps it interferes with some other code? Perhaps it makes an image suspiciously large? Perhaps it draws too much power? Perhaps it makes something a tick slower than it should be and that causes an investigation that leads to discovery.
Lots Of Eyeballs Are Not Enough
Have you ever had a ponder of programmers all stand around a monitor looking for a bug in a short, seemingly clear piece of code...and nobody sees the bug? The glibc remote hacking vulnerability existed since 2008.
We now know for sure that a Bazaar full of eyes is not enough to ensure code quality. Simple quantity is not enough. We have to go for quality and that means code reviews. Yuck, code reviews, I know.
Perspective Based Code Reviews
When I worked on an embedded software system for a five-9s core optical switch, code quality was of the highest importance. This thing could never fail. Really. So I implemented a code review system that required every line of code be reviewed by two or more developers before it was checked in. It was a cool system. All automated and clean. This is in addition to a battery of systems tests, regression tests, unit tests, customer acceptance tests, etc. Quality is a defense-in-depth sort of situation. What's important is not just that code is reviewed, but how it's reviewed. The benefit of code reviews is undeniable. Yet code reviews don't happen in most software development organizations. And for good reason. The way code reviews are typically handled uses an enormous amount of people and time resources. Even with system quality improvements, developers may not feel the effort is worth the cost. It turns out you don't need a high overhead code review process to get results. My initial impression of code reviews was that they were heavy and bloated and a waste of time. I was wrong on all counts. Here are some interesting facts I found in the research:
Using two inspectors is as effective as using four or more.
Using an asynchronous process is as effective as a meeting-based process for code reviews. For design reviews a meeting-based approach may be more effective.
One review session is as effective as multiple inspection sessions.
Scenario or perspective-based reviews were more effective than ad-hoc and checklist-based reviews.
Inspections are effective. Inspecting upstream processes like requirements and designs is very effective.
The implication is that the complexity and overhead of Fagan-like code inspections are not needed and a semi-automated method will yield improved system quality. That's where perspective-based code reviews come in. A perspective-based code review isn't just a few arbitrary people glancing at code. From Are the Perspectives Really Different?: Perspective-Based Reading (PBR) is a scenario-based inspection technique where several reviewers read a document from different perspectives (e.g. user, designer, tester). The reading is made according to a special scenario, specific for each perspective. The basic assumption behind PBR is that the perspectives find different defects and a combination of several perspectives detects more defects compared to the same amount of reading with a single perspective. A perspective-based review is intended to: focus the responsibility of each inspector, minimize the overlap among reviewers, and provide a subject matter "expert". The idea is simple. Every line of code belongs to a module. The module scope defines a set of perspectives that must be reviewed for any code in that module. A set of stakeholders qualified to review on each perspective is defined for each module. Module scope defines the minimum number of total reviewers and the minimum number of reviewers for each perspective. This is all easily automatable with a little tooling. A module should be some coherent body of code, like device drivers, front-end, database, etc. A perspective is some quality the code must possess to be considered acceptable to be included in the code base.
In my example, since it was a real-time embedded system written in C++, the perspectives included topics like semaphore usage and memory management. This is a very freeing approach in a way. You aren't told to look at a chunk of code and "find a problem." That's almost impossible. There are too many things to keep in mind. You are told to look at a chunk of code from a very particular perspective and only that perspective. And you are told what to look for. So you can concentrate on finding very specific problems, which people are pretty good at. Here's an example.
Example Semaphore Code Review
When a person - not necessarily a developer; you can imagine PMs and even managers being included for some reviews - is assigned to perform a semaphore perspective review on a code change, here is what they would consider when looking at the change (a short Java sketch of these guidelines follows below).
Semaphore Code Review Guidelines:
Ensure the semaphore is required. If it is not used to provide mutual exclusion or synchronization, consider removing it.
The semaphore should be associated with a data structure or set of related data structures and be named appropriately.
Ensure that all critical sections are correctly protected by the semaphore. Minimize the scope of the critical region.
Ensure the semaphore can never be left locked inadvertently. Consider the appropriate use of LockGuard.
Check for external calls from within a critical region. Be thorough, as this might not be readily apparent. A code browser is extremely useful when analyzing unfamiliar code. If an external call is unavoidable, the implications must be understood and should be acceptable. They include: the possibility of blocking on attempted acquisition of external semaphores, and the possibility of acquiring external semaphores and possibly blocking other tasks. In general, subsystems should be decoupled as much as possible.
Check for calls into the subsystem which may acquire a semaphore. Is the potential duration of the blocking a known and acceptable behavior? This may not be readily apparent. A code browser is recommended for the analysis of unfamiliar code.
Here Are The Semaphore Usage Guidelines:
Granularity. A mutex is intended to provide exclusive access to a specific data structure. If it protects multiple structures or no apparent structure, consider refactoring. In general, the finer the granularity, the better.
Scope. The scope of a mutex should be as narrow as possible. It should protect only the critical section. In general, the smaller the scope, the better.
Coupling. Making an external call while holding a mutex is perilous. Examples of external calls include persistence requests, calls to a messaging system (especially synchronous calls), and direct invocation of a method on another subsystem. Holding a mutex while making such a call introduces opportunities for the simultaneous acquisition of multiple mutexes, long and indeterminate blocking times and, possibly, priority inversion. The resultant tight coupling of tasks can create run-time behavior that is undesirable and very difficult to reproduce and correct.
You can probably imagine a number of different perspectives for your kind of system. There's a lot more to be said on the subject, but I'm sure you're tired of it by now.
Require A Security Perspective In Your Code Review
In retrospect, I'm embarrassed to admit that I did not include a security perspective for any review. In my defense, it was a more naive time. What's clear now is that each and every code change should be reviewed for security flaws.
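To make the semaphore/mutex guidelines above concrete, here is a rough Java analogue (the original system was C++). The class and names are hypothetical; the point is a narrowly scoped, data-structure-specific lock that is always released and never held across an external call.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative Java analogue of the guidelines: the lock is tied to one data structure,
// the critical section is as small as possible, try/finally plays the role of the
// LockGuard so the lock can never be left held, and external calls happen after release.
public class JobQueue {

    private final Deque<String> jobs = new ArrayDeque<>();
    private final ReentrantLock jobsLock = new ReentrantLock();   // named for the data it protects

    public void submit(String job) {
        jobsLock.lock();
        try {
            jobs.addLast(job);          // only the shared-structure mutation is inside the lock
        } finally {
            jobsLock.unlock();          // guaranteed release, even on an exception
        }
    }

    public void processNext(Notifier notifier) {
        String job;
        jobsLock.lock();
        try {
            job = jobs.pollFirst();
        } finally {
            jobsLock.unlock();
        }
        if (job != null) {
            notifier.jobStarted(job);   // external call made outside the critical region
        }
    }

    /** Hypothetical external subsystem; calling it while holding jobsLock would risk long blocking. */
    interface Notifier {
        void jobStarted(String job);
    }
}
```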
Another admission: I have no idea what a security review would even look like. I imagine it would be module specific. A core developer isn't likely to be an expert in SQL injection attacks, for example. Nor are most people going to be encryption experts. So considerable thought must go into what this means and how it works. I'd like to hear other people's ideas on the subject.
Reviewing An Existing Code Base
A code review process works for new changes, but what about the pile of code you've already developed? That is a problem. One solution would be to review chunks of code, in some sort of priority order, over time. The automated system could schedule code reviews for existing unchanged code in such a way that complete code coverage could be achieved without overwhelming developers.
Would It Work?
I think a security perspective review on each and every code change would be a huge deterrent to the problem of a lone attacker inserted into an organization. They would know the chances of passing a hack through the entire process would be low (assuming a tight process). And it would be much more difficult to corrupt a small group of people to let a backdoor through. Something to think about anyway.
Code Review Standard
For the hard core among you, here's the code review standard I developed. It might be a good template to adapt for your situation.
Every Line Of Code Must Be Reviewed Before It Is Checked-In
Do not allow code to be checked in unless it has been reviewed, fixed, and re-reviewed. If you can't do this then your review process isn't fast or light enough. It can be done. Reviewing code after it has been checked in is next to useless, as everyone is exposed to the unreviewed code.
Code Must Be Integrated, Compiled, Unit Tested, And System Tested Before Review
Spending time on code that is just going to change or doesn't work is a massive waste of time. The entrance requirements for a code review are: Code should compile without errors or warnings (according to the coding standard). Code should already be integrated with its parent branch. Code should be fully unit tested. Code should already pass the system tests where possible.
Develop Perspective Based Reviews
For the issues that are important in your software, consider developing a perspective review for each issue. Perhaps special hotspots for you are internationalization, memory corruption, memory usage, threads, semaphores, etc. An example can be found in Semaphore Perspective Code Review. You can then use the perspectives to assign roles to review team members.
Use Meetingless Asynchronous Reviews
You don't need to hold a big meeting for a review. Everyone can review the code when they can find time. Simply have a done-by date when the review must be completed. Everyone on the review must review the code. If a person can't perform the review then the team needs to elect someone else to perform a similar role in the review. Meetingless asynchronous reviews are fast and light, yet they are still effective. With the right tool support you can easily review every line of code in your system.
Use Between 2 And 5 Relevant Reviewers
Too many reviewers wastes everyone's time. Keep the number of reviewers small and relevant to the code being reviewed.
Assign Reviewers Roles And Perspectives
It's almost impossible to review a lot of different issues in a lot of code. People get tired and they stop noticing things. The way around the getting-tired issue is to use perspective reviews.
Create a perspective for each important issue category you are concerned about. Assign the perspectives to people on the review team. Because they are only reviewing for issues in their perspective, they will do a better job because they can stay more concentrated. This doesn't mean they can't find other issues as well, but they are responsible for their perspective. For example, if using semaphores correctly is important in your software and the code has semaphores, then assign someone the role of reviewing semaphore usage. An example can be found in Semaphore Perspective Code Review.
All Review Communication Should Go To The Review List And Be Logged
Part of the benefit of a review is that people learn about the system being reviewed. This learning feature is facilitated by broadcasting email communication between the review team and saving all communication so it can be read by other people later.
Do Not Redesign In The Review
Make a note and schedule design issues for a later time. Developers always think they can do stuff better. Take these issues offline unless the issue is that requirements are not being met. Requirements not being met is not the same as you could have done it better.
Do Not Cover Coding Standards Violations In The Review
Send violations via email or in person. Talking about violations only gets everyone angry and is a waste of time.
Code Is Rereviewed Until It Passes
Code isn't reviewed once and then forgotten. Any changes made have to be re-reviewed. If you think this is too slow then your process isn't light enough. Not reviewing all changes makes the process useless, as people will just ignore suggestions or introduce new bugs in any changes.
All Issues Must Be Fixed, Marked As Not An Issue, Or Marked As Bug
Any issue brought up to a developer must be handled. A developer just can't ignore issues because they think they're stupid. Every issue must be fixed, marked as not an issue, or marked as a bug. If the developer and the reviewer can't decide between themselves whether an issue should be fixed or not, then the review team gets to decide. If there is only one reviewer then bring a manager in or another team member.
Automate Your Code Review System
You can make your process light enough by building it into your build system. If your process isn't light enough, work on it until it is.
Review All Code On Private Branch Before A Merge
Code developed on a private branch doesn't need to be reviewed during development. But before the code is merged into a parent branch, all code changes must go through the complete review process. For this reason, development of a large-scale feature may still want to perform reviews on the private branch, because that can speed up the merge process. Of course, try not to have branches separate from the mainline, but for large features that take a long time to develop you will often need separate branches.
Review The Right Scope Of Changes
You don't have to review every line of code in every module that has changed. Certainly if a module is new it must be completely reviewed. Other than that, you may be able to just review the changed code. Though just reviewing changed code isn't always possible. If you are performing a semaphore perspective review, for example, then you will need to look at the code within the scope of the semaphore as well.
Stick To Reviewable Issues
Develop your list of what issues can be reviewed and how they are to be reviewed. Usually this is in the form of checklists and perspective reviews.
Don't allow reviews on other items without changing what can be reviewed. Otherwise people spend endless time on off-topic arguments.
Review Team Responsible For Deciding Issues
If there is a conflict on any part of the review then the review team is responsible for handling it. That's the only way the review process will be light enough to work.
Keep It Cool
Nobody is perfect. The attitude of the review should never be personal; it should always be professional, with the goal of improving the system and the people building the system. Keep your tempers. Don't blame people for bugs. Work together to make things better. No finger pointing! Not ever! Meetingless reviews can help keep the anger down, but they can make it worse too. When people are in the same room anger can ramp up really quickly. And we know in email it's very easy to say something that can be taken the wrong way. Raise the awareness of these issues in your team. A good rule is to Never Assume An Attack. If you find yourself getting angry, assume it's a misunderstanding, not an attack.
No Managers
Unless a manager has something to add to the reviews, they shouldn't be involved. Issues should be decided by the review team. Managers always have to run to a different meeting, they don't have time slots open for meetings, and they generally don't add technical input. So you don't need managers as part of the review process.
If A Bug Was Not Caught In A Review Figure Out Why
If a bug happens after a review, then track down why the bug wasn't found and change your development process somehow to try to prevent that bug in the future. This is not always possible, as running full tests is often impossible, but it should be mostly possible. I would create a bug for each bug to track down why it wasn't caught, because this issue is more serious than just the review: it means the unit test, the system test, and the review did not work.
Review Upstream Documents Too
Reviewing product requirement documents, specs, standards, etc. can provide an excellent return on value. Make sure those products are reviewed as well.
Feed Lessons Learned Into The Team And Documentation
If issues come up during the review that everyone in the team would benefit from, then have a way to make that wisdom public. I would recommend a development wiki where you can write documentation on anything useful that people come up with.
Plug Reviews Into Your Source Code Control System
I have done this through change check-in comments. Each change has to be reviewed before it is checked in. The submit comment must contain a review ID that points to some document containing the review status for the change that is about to be submitted. The code is prevented from being submitted without a valid passed review. If you are able to automate your code review system all this works quite quickly and painlessly.
Author: Todd Hoff.
Reader Comments (2)
There is one lesson here. Review the commits and automate the build and config process, reviewing those as well. Anyone trusted can get around everything here. Although code review helps find bugs, if it is trivial to insert code after review it is not going to stop an intentional back door at all. Everything in this article is a great idea, but most of it doesn't have anything to do with back doors.
March 27, 2016 | Mianos
Hey Mianos, I think this is the advantage of a centralized source control system like Perforce. It's possible to prevent all check-ins that are not accompanied by a valid code review. I don't know enough about git to know if this is possible. But there would be no way to insert code after a review at all. In Perforce, for example, all submits can be intercepted and rejected or accepted. That doesn't guarantee back doors won't be inserted, of course, but in a defense-in-depth strategy it's a very high wall. And if your build system runs automatic security checks as part of the build process, the wall gets a little higher. If your test system does some automated penetration testing, your wall gets even a little higher.
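A sketch of the kind of submit-time gate described in this reply might look like the following. The "Review-ID:" tag convention and the passed-review lookup are assumptions for illustration; they are not features of Perforce, git, or any particular review tool.

```java
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of a server-side submit check: a trigger inspects the change description for a
// review ID and rejects the submit if the ID is missing or not marked as passed.
public class ReviewGate {

    private static final Pattern REVIEW_TAG = Pattern.compile("Review-ID:\\s*(\\d+)");

    /** IDs of reviews that have completed with a passing status, however your tooling stores them. */
    private final Set<String> passedReviews;

    public ReviewGate(Set<String> passedReviews) {
        this.passedReviews = passedReviews;
    }

    public boolean allowSubmit(String changeDescription) {
        Matcher m = REVIEW_TAG.matcher(changeDescription);
        return m.find() && passedReviews.contains(m.group(1));
    }

    public static void main(String[] args) {
        ReviewGate gate = new ReviewGate(Set.of("4711"));
        System.out.println(gate.allowSubmit("Fix login timeout\nReview-ID: 4711")); // true
        System.out.println(gate.allowSubmit("Quick tweak, no review needed"));      // false, rejected
    }
}
```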

QUESTION 77 Which of the following is a potential architectural security flaw? A. Buffer overflow B. Unhandled exceptions C. Misuse of cryptography D. Duplicated session identifiers

Correct Answer: C Explanation/Reference: Opinion: Software [in]security -- software flaws in application architecture. Software defects that lead to security problems come in two major flavors: bugs in the implementation and flaws in the design. A majority of attention in the software security marketplace (too much, we think) is devoted to finding and fixing bugs, mostly because automated code review tools make that process straightforward. But flaws in the design and architecture of software account for 50% of security defects (see McGraw's 2001 book Building Secure Software for more on this). In this article, we'll explain the difference between bugs and flaws. More importantly, we'll describe an architecture risk analysis (ARA) process that has proven to be useful in finding and fixing flaws. What is the difference between a bug and a flaw? Perhaps some examples can help.
Bugs
Bugs are found in software code (source or binary). One of the classic bugs of all time, the buffer overflow, has at its root the misuse of certain string handling functions in C. The most notorious such function is gets() -- a system call that gets input from a user until the user decides to hit return. Imagine a fixed size buffer or something like an empty drinking glass. Then imagine that you set up things to get more input than fits in the glass (and the attacker is "pouring"). If you pour too much water into a glass and overfill it, water spills all over the counter. In the case of a buffer overflow in C, too much input can overwrite the heap or even overwrite the stack in such a way as to take control of the process. Simple bug. Awful repercussions. (And in the case of gets(), particularly easy to find in source code.) Hundreds of system calls exist in C that can lead to security bugs if they are used incorrectly, ranging from string handling functions to integer overflow and integer underflow hazards. And there are just as many bugs in Java and other languages. There are also common bugs in Web applications (think cross-site scripting or cross-site request forgery) and bugs related to databases (like SQL injection). There's an endless parade of bugs (and, by the way, there are way more than ten). The fact is, there are so many possible bugs that it makes sense to adopt and use a tool to find them. The many commercial source code review tools available include HP's Fortify, IBM AppScan Source, Coverity Inc.'s Quality Advisor, and Klocwork Inc.'s Klocwork Insight. The latest twist in source code review is to integrate bug finding directly into each developer's integrated development environment (IDE), so that bugs are uncovered as close to conception as possible. For example, Cigital Inc.'s SecureAssist does this.
Flaws
At the other end of the defect spectrum we find flaws. Flaws are found in software architecture and design. Here's a really simple flaw example. Ready? Forgot to authenticate user. That kind of error of omission will usually not be found with code review. But it can be a serious problem. Does your process run as root? Better be darn sure who's using it! Other examples of flaws include "attacker in the middle" problems that allow tampering or eavesdropping between components, layers, machines, or networks; and "replay attack" problems that have to do with weak protocols.
To flesh things out a bit, here is a list of some common Java-related flaws: misuse of cryptography, compartmentalization problems in design, privileged block protection failure (DoPrivilege()), catastrophic security failure (fragility), type safety confusion error, insecure auditing, broken or illogical access control (RBAC over tiers), method over-riding problems (subclass issues), too much trust in (client-side) components that should not be trusted. (For more on these issues, see McGraw's ancient book Securing Java.) Flaws are just as common as bugs. In fact, in most studies, bugs and flaws divide the defect space 50/50. Of course we're really talking about a continuum. There are some tricky cases that may be categorized as both a bug and a flaw depending on how you look at it. But, in general, making a distinction between bugs and flaws is a useful exercise. Simply put, if we're going to solve the software security problem, we're going to need to focus more attention on flaws.
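Because the exam answer singles out misuse of cryptography, here is a hedged Java sketch of one sound design choice (authenticated encryption with AES-GCM and a fresh random IV) using the standard JCA API, in contrast to flaw-level misuses such as ECB mode or hard-coded keys. Key management details are deliberately omitted and would come from a key store or KMS in practice.

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// The "misuse of cryptography" flaw class is an architectural choice (mode, key handling,
// missing integrity protection), not a coding slip a scanner would flag. This sketch shows
// one sound choice: AES-GCM with a random 96-bit IV and a 128-bit authentication tag.
public class EnvelopeEncryption {

    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];                       // 96-bit IV, never reused with the same key
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);  // GCM also produces an integrity tag

        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;                                     // the IV travels with the ciphertext
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();           // in practice keys come from a key store/KMS
        System.out.println(encrypt(key, "card number".getBytes()).length);
    }
}
```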

QUESTION 128 Which of the following cloud models places the MOST responsibility for controlling security posture on the cloud customer? A. Desktop as a Service (DaaS) B. Platform as a Service (PaaS) C. Software as a Service (SaaS) D. Infrastructure as a Service (IaaS)

Correct Answer: D Explanation/Reference: In Infrastructure as a Service (IaaS), the provider secures only the underlying physical and virtualization infrastructure; the customer remains responsible for the operating systems, middleware, applications, and data, and therefore has the most responsibility for, and control over, the security posture of the models listed.

QUESTION 198 Which of the following is the FIRST activity when performing a software risk analysis? A. Business Impact Analysis (BIA) B. Likelihood analysis C. Security impact analysis D. Threat analysis

Correct Answer: D Explanation/Reference: Risk analysis is often viewed as a "black art" - part fortune telling, part mathematics. Successful architecture risk analysis, however, is nothing more than a business-level decision-support tool: it's a way of gathering the requisite data to make a good judgment call based on knowledge about vulnerabilities, threats, impacts, and probability. Established risk-analysis methodologies possess distinct advantages and disadvantages, but almost all of them share some good principles as well as limitations when applied to modern software design. What separates a great software risk assessment from a merely mediocre one is its ability to apply classic risk definitions to software design and then generate accurate mitigation requirements. A high-level approach to iterative risk analysis should be deeply integrated throughout the software development life cycle.
Traditional terminology
Example risk-analysis methodologies for software usually fall into two basic categories: commercial (including Microsoft's STRIDE, Sun's ACSM/SAR, Insight's CRAMM, and Synopsys' SQM) and standards-based (such as the National Institute of Standards and Technology's ASSET or the Software Engineering Institute's OCTAVE). An in-depth analysis of all existing methodologies is beyond our scope, but we'll look at basic approaches, common features, strengths, weaknesses, and relative advantages and disadvantages. As a corpus, "traditional" methodologies are varied and view risk from different perspectives. Examples of basic approaches include financial loss methodologies that seek to provide a loss figure to balance against the cost of implementing various controls; mathematically derived "risk ratings" that equate risk with arbitrary ratings for threat, probability, and impact; and qualitative assessment techniques that base risk assessment on anecdotal or knowledge-driven factors. Each basic approach has its distinctly different merits, but they almost all share some valuable concepts that should be considered in any risk analysis. We can capture these commonalities in a set of basic definitions:
The asset, or object of the protection efforts, can be a system component, data, or even a complete system.
Risk, the probability that an asset will suffer an event of a given negative impact, is determined from various factors: the ease of executing an attack, the attacker's motivation and resources, a system's existing vulnerabilities, and the cost or impact in a particular business context.
The threat, or danger source, is invariably the danger a malicious agent poses and that agent's motivations (financial gain, prestige, and so on). Threats manifest themselves as direct attacks on system security.
A vulnerability is a defect or weakness in system security procedure, design, implementation, or internal control that an attacker can compromise. It can exist in one or more of the components making up a system, even if those components aren't necessarily involved with security functionality. A given system's vulnerability data are usually compiled from a combination of OS- and application-level vulnerability test results, code reviews, and higher-level architectural reviews. Software vulnerabilities come in two basic flavors: flaws (design-level problems) or bugs (implementation-level problems). Automated scanners tend to focus on bugs, since human expertise is required for uncovering flaws.
Countermeasures or safeguards are the management, operational, and technical controls prescribed for an information system that, taken together, adequately protect the system's confidentiality, integrity, and availability as well as its information. For every risk, a designer can put controls in place that either prevent or (at a minimum) detect the risk when it triggers.
The impact on the organization, were the risk to be realized, can be monetary or tied to reputation, or it might result in the breach of a law, regulation, or contract. Without a quantification of impact, technical vulnerability is hard to handle - especially when it comes to mitigation activities.
Probability is the likelihood that a given event will be triggered. It is often expressed as a percentile, although in most cases probability calculation is extremely rough.
Although they start with these basic definitions, risk methodologies usually diverge on how to arrive at specific values. Many methods calculate a nominal value for an information asset, for example, and attempt to determine risk as a function of loss and event probability. Others rely on checklists of threats and vulnerabilities to determine a basic risk measurement.
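The definitions above imply the familiar rating arithmetic of risk as a function of likelihood and impact. The Java sketch below shows that arithmetic on a few invented threats; the 1-5 scales and sample entries are illustrative, and methodologies differ on how such values are actually derived. It also reflects why threat analysis comes first: without an enumerated list of threats, there is nothing to score.

```java
import java.util.Comparator;
import java.util.List;

// A small sketch of classic risk rating: each identified threat gets a likelihood and an
// impact score, and risk is ranked as their product. Scales and entries are illustrative.
public class RiskRanking {

    record Risk(String threat, int likelihood, int impact) {
        int rating() { return likelihood * impact; }    // e.g. 4 x 5 = 20 on a 1-25 scale
    }

    public static void main(String[] args) {
        List<Risk> risks = List.of(
            new Risk("SQL injection against order lookup", 4, 5),
            new Risk("Replay of unauthenticated admin API call", 2, 5),
            new Risk("Log disk exhaustion", 3, 2));

        // Rank highest risk first so mitigation effort follows exposure, not intuition.
        risks.stream()
             .sorted(Comparator.comparingInt(Risk::rating).reversed())
             .forEach(r -> System.out.println(r.rating() + "  " + r.threat()));
    }
}
```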

QUESTION 196 Which of the following is MOST likely included in a bug tracking system implemented for a new version? A. List of supported versions B. List of workarounds for problems C. Method for receiving upgrades D. Statement of future compatibility

Correct Answer: B Explanation/Reference: Bugs:
1. Blocker - Reserved for catastrophic failures - exceptions, crashes, corrupt data, etc. - that (a) prevent somebody from completing their task, and (b) have no workaround. These should be extremely rare. They must be fixed immediately (same-day) and deployed as hotfixes.
2. Critical - These may refer to unhandled exceptions or to other "serious" bugs that only happen under certain specific conditions (i.e. a practical workaround is available). No hard limit for resolution time, but should be fixed within the week (hotfix) and must be fixed by the next release. The key distinction between (1) and (2) is not the severity or impact but the existence of a workaround.
3. Major - Usually reserved for perf issues. Anything that seriously hampers productivity but doesn't actually prevent work from being done. Fix by next release.
4. Minor - These are "nuisance" bugs. A default setting not being applied, a read-only field showing as editable (or vice versa), a race condition in the UI, a misleading error message, etc. Fix for this release if there are no higher-priority issues, otherwise the following release.
5. Trivial - Cosmetic issues. Scroll bars appearing where they shouldn't, a window that doesn't remember saved size/location, typos, the last character of a label being cut off, that sort of thing. They'll get fixed if the fix only takes a few minutes and somebody's working on the same screen/feature at the same time; otherwise, maybe never. No guarantee is attached to these.
Change Requests (Features):
2. Design Errors - As in, we misunderstood the spec and the feature doesn't do what it's supposed to do at all, or does it so poorly that it's unusable. This is the highest priority for a change request, but it is still a priority 2 - a change request can never take precedence over a blocker bug. These must be fixed by the next release.
3. Important - Significant cost savings (or profit potential, depending on the kind of software), major performance enhancements, anything that will make the app more awesome. Or anything escalated to this level by management - this is the highest escalation level for a feature. Expected to go into the next release but may be cut in the event of a time or cash crunch.
4. Normal - Basically something that a lot of people want (or one VIP wants), and the rationale for it is clear and reasonable, but there's nothing special about it that would warrant it taking precedence over any other run-of-the-mill request. Minor performance tweaks, add this field or this button or this report, make this sort as a number instead of alphabetically, that kind of thing. These get assigned to the next release, but are the first to get cut if any delays come up.
5. Low-Impact - Layout changes, wording changes, changes that might conflict with the baseline requirements, pie-in-the-sky features that could take months to implement, etc. These automatically get assigned to a future release unless we're not working on anything more important (which is... never). Often when the next release does go out, these will get deferred to an even further release as more important requests pile up.
6. Optional - We don't actually call it optional (I think officially it's "Time Permitting"), but that's pretty much what this is.
Generally reserved for two classes of change: (a) dumb requests that we know will just annoy most people ("show a confirmation dialog every time the user tries to click this button"), and (b) internally-conceived features that we can't cost-justify above official requests. Not assigned to any release.

QUESTION 131 Which of the following actions MUST be taken at the end of a system's life? A. Dispose of code, documentation, and data while retaining audit trails B. Dispose of code, documentation, data, and audit trails according to established policies C. Dispose of data while retaining code, documentation, and audit trails for future use D. Dispose of code, documentation, data, and audit trails immediately

Correct Answer: C Explanation/Reference: The main activities of retirement are: Analyze system interactions. Most systems do not exist in isolation but rather interact with other systems in some fashion. As part of your retirement effort, you must identify the interactions, analyze them, and address each one via legacy integration modeling. Determine retirement strategy. Your strategy depends on your situation. First, if this is a new release of your system, you may need to perform simple data conversions, perhaps applying the collection of database refactorings implemented during the development cycle for this release as well as the new version of the application code. Second, when replacing a system with one developed in-house, the team developing the replacement system will be responsible for architecting, designing, and implementing the interactions. They should do so following guidance from your enterprise architects and the results of the legacy analysis, and seek to minimize the impact on other systems. Third, when replacing with a COTS system you will likely want to extend the COTS product, implying that you may opt to engage the COTS vendor to develop the additional functionality you need. Fourth, completely retiring a system is the hardest case because you need to rework the external systems and may even cause the retirement of other systems because you have removed critical functionality which they cannot do without. You must work with the owners of the other systems and assist them in finding other sources for the information they need. Update the documentation. You will need to update a wide range of documentation, including your operations and support procedures, enterprise architecture models, system portfolio documentation, administration documentation, and system overview documentation (to note that it's been retired). Test. You must test your migration tools in a similar manner to how you would test a system during the Transition phase. If you are following the evolutionary approach of the EUP you will be testing your migration and removal tools as you develop them. While it is desirable, you may not have the luxury of a test environment that supports all of your system, including all your data. If that is the case, make sure that the data you do have is a representative sample of the entire system. Migrate users. You will most likely not be able to simply turn off access to a system one day and be done with it. You must accommodate your end users by notifying them appropriately of the upcoming retirement and assisting them in migrating to other systems. Archive. The existing data, code, documentation, and other system artifacts must be properly archived so that they may be restored at a future date if required. System removal. This is often a complex task, as you must migrate and convert data, turn off access and remove all vestiges of the retiring system, and update all the other systems that interacted with the retiring system. It's a good idea to take a complete backup of your system before you begin just in case you need to recreate the system in a hurry.

QUESTION 184 Which of the following documents can include text that legally obligates software suppliers to fix security defects identified after deployment? A. Service Level Agreements (SLA) B. Software license agreements C. Software procurement contracts D. Request for Proposal (RFP) documents

Correct Answer: C Explanation/Reference: Nearly all IT projects require some sort of procurement, whether it is for hardware, software, or services. Therefore, a solid understanding of procurement contracts is an important--and often overlooked--qualification for IT project managers. This Research Byte is a summary of our full report, How to Evaluate IT Procurement Contracts. IT managers make two common mistakes in regards to contracts. First, they often treat contracts as legal matters only and delegate contract review to corporate legal counsel. Although legal review is essential, lawyers often do not have the technical or operational background to evaluate IT contracts from a business perspective. A lawyer may not anticipate what could go wrong in a software implementation, nor might he or she think to recommend a clause specifying, for example, the buyer's right to request a change in the vendor's personnel at no charge within an initial time period. Procurement contracts are too important to delegate to the legal department alone. Instead, read through every contract from a business perspective first, and then route the contract through the legal department for final review. Second, IT managers are often too conservative in their approach to requesting changes to contract language. Vendors of IT products and services generally have their own boilerplate agreements, which they present to buyers as their "standard contract." Many project managers assume that such contracts are not easily modified. This is a mistake. Such boilerplate language almost always favors the seller, and any contract is an agreement between two parties. The buyer has as much right to suggest language for the agreement as the seller does. Never assume that contract language is cast in stone. Please note that the information provided in this report is not to be construed as legal advice. Each procurement contract requires legal advice from a competent lawyer to ensure that it is appropriate for the buyer's situation. Typical Contract Elements Understanding procurement contracts begins with a knowledge of what such contracts have in common. The Uniform Commercial Code (UCC), created in 1951, established eleven articles that all U.S. states (except Louisiana) follow. (Since Louisiana's legal code is based on civil law and not common law, the state has elected to only follow some of the UCC articles.) All commercial procurements are subject to UCC code--except for government contracts, which fall under the Federal Acquisition Regulation (FAR) rules. Figure 1 shows the major sections that generally exist in all contracts. These sections may not appear in the same sequence in all contracts, or they may appear under different headings than those shown, but they generally appear in most procurement agreements. Let us review each of these main sections. The statement of work defines the scope of the agreement. It is a complete description of the work to be done and the requirements to be satisfied under the procurement. It generally appears early in the contract. For example, the contract for a new computer server might include a statement of work that specifies the scope of the contract, including delivery and complete installation of the new equipment. Following closely after or combined with the statement of work are item specifications. If the supplier is to build or provide a tangible product, the specification is the definition of the technical requirements for the product being procured. 
Generally there are three types of specifications found in contracts. First, design specifications describe the problem to be solved or the requirements to be fulfilled by the product. Second, functional specifications describe what the product must do. It is the blueprint for the design of the product from the user's perspective. Finally, performance specifications provide an understanding of the required level of performance for the product, including specific metrics and tolerances. Under UCC Article 2-513, the buyer has the right to inspect the goods prior to making payment. The inspection must be made in a reasonable amount of time to allow the seller to make adjustments if necessary. Therefore, it may be prudent to set up a schedule of specific dates for testing and inspection in the contract, especially if these activities will take place at the seller's location. Reasonable penalties can be established for failure to comply. Whether the vendor is supplying the customer with one item or multiple items, having a delivery schedule in the contract holds the vendor accountable to the customer's project timeline. The contract can also specify penalties on the seller for failure to meet delivery schedules. Warranties can either be expressed or implied. UCC Articles 2-313 and 2-314 discuss each type of warranty. An express warranty, one which is spelled out in the contract, is stronger than an implied warranty, which is generally assumed in procuring products and services of the type under consideration. The warranty section of the contract may also disclaim any warranties express or implied. It is important to establish governing law within the contract. If the contract is between two parties within the United States and one party is not a U.S. government agency, then the UCC articles will govern the contract. If the seller's home base is outside of the U.S., it is prudent to have language in the contract detailing which governing body's law and regulations will control the contract. The order of precedence section sets forth the rank order of procurement documents in the event that there are conflicts in the language of individual documents. For example, in the procurement of a software application, the buyer may generate a request for proposal (RFP), the seller may prepare an RFP response, and the two parties may ultimately agree on a procurement contract. The order of precedence clause of the contract may then specify that the contract terms and conditions override the RFP, and the original RFP overrides the seller's RFP response. The title transfer section lays out the details for when and if title passes from the seller to the buyer for the items that have been procured. For example, a contract for 500 new personal computers might specify that title transfers to the buyer upon completion of testing. Companies always have the right to terminate a contract for a "default" in performance, which is a form of breach of contract. To protect the buyer's interests, it may be wise to have a clause which allows the buyer to terminate a contract for its own convenience. For example, a contract might specify that either party may terminate the contract with 30 days written notice. Or, termination may only be permitted in the event of a serious breach in performance. 
A termination section should also include what rights and responsibilities are to continue in the event of contract termination, such as the need to protect the trade secrets that may have been disclosed during the course of the relationship. Arbitration is usually less costly than litigation but considerably more costly than mediation. If arbitration will be used to settle disputes, an arbitration clause should be inserted in the contract specifying the intent of all parties, such as whether the arbitration will be voluntary or binding. Charge-backs are generally costs that are the rightful responsibility of the seller--such as the cost to repair defective items--but which by prior agreement will be incurred by the buyer and charged back to the seller. The charge-back section may also include a definition of what charges may not be charged back to the seller. The payment schedule defines the terms and conditions for the buyer's payments to the seller. For example, the payment schedule for a custom-developed system may call for a certain amount to be paid up front, with another amount due upon delivery, and final payment due 30 days after completion of user acceptance testing. There are many ways to pay for goods and services.

QUESTION 134 Which of the following is the STRONGEST form of authentication when used alone? A. User-defined challenge question and answer B. Personal Identification Numbers (PIN) C. Passwords composed of letters and numbers D. Internet Protocol (IP) addresses of users

Correct Answer: C Explanation/Reference: User-defined challenge questions and answers, such as mother's maiden name, can be discovered by research.

QUESTION 76 Which of the following is a standard and accepted mechanism to avoid Structured Query Language (SQL) Injection in source code? A. Parametric polymorphism B. Query concatenation C. Query logging D. Parameterized queries

Correct Answer: D Explanation/Reference: A parameterized query (also known as a prepared statement) is a means of pre-compiling a SQL statement so that all you need to supply are the "parameters" (think "variables") that need to be inserted into the statement for it to be executed. It's commonly used as a means of preventing SQL injection attacks
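As an illustration, here is a minimal sketch in Python using the standard-library sqlite3 module (the language, database, and table are assumptions for the example); the same placeholder-binding idea applies to any database driver that supports prepared statements.

```python
import sqlite3

# The "?" placeholder keeps user input as data, never as SQL text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
conn.execute("INSERT INTO users (name, role) VALUES (?, ?)", ("alice", "admin"))

user_supplied = "alice' OR '1'='1"   # typical injection attempt

# Vulnerable pattern (query concatenation) -- shown only for contrast:
#   conn.execute("SELECT * FROM users WHERE name = '" + user_supplied + "'")

# Parameterized query: the driver binds the value, so the injection string
# is treated as a literal name and matches nothing.
rows = conn.execute("SELECT id, name, role FROM users WHERE name = ?",
                    (user_supplied,)).fetchall()
print(rows)   # []
```

The design point is that the statement text and the data travel separately, so attacker-controlled input can never change the structure of the query.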

QUESTION 43 A requirement document states that a new system must utilize Bell-LaPadula. What is the PRIMARY concern this requirement addresses? A. Disclosure protection B. Data integrity C. Default controls D. Data availability

Correct Answer: A Explanation/Reference: The Bell-LaPadula model focuses on data confidentiality and controlled access to classified information, in contrast to the Biba Integrity Model which describes rules for the protection of data integrity. In this formal model, the entities in an information system are divided into subjects and objects. The notion of a "secure state" is defined, and it is proven that each state transition preserves security by moving from secure state to secure state, thereby inductively proving that the system satisfies the security objectives of the model. The Bell-LaPadula model is built on the concept of a state machine with a set of allowable states in a computer system. The transition from one state to another state is defined by transition functions. A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments, making up the security level) to determine if the subject is authorized for the specific access mode. The clearance/classification scheme is expressed in terms of a lattice. The model defines two mandatory access control (MAC) rules and one discretionary access control (DAC) rule with three security properties: The Simple Security Property states that a subject at a given security level may not read an object at a higher security level. The * (star) Property states that a subject at a given security level may not write to any object at a lower security level. The Discretionary Security Property uses an access matrix to specify discretionary access control. The transfer of information from a high-sensitivity document to a lower-sensitivity document may happen in the Bell-LaPadula model via the concept of trusted subjects. Trusted Subjects are not restricted by the Star-property. Trusted Subjects must be shown to be trustworthy with regard to the security policy. This security model is directed toward access control and is characterized by the phrase: "read down, write up." Compare the Biba model, the Clark-Wilson model and the Chinese Wall model. With Bell-LaPadula, users can create content only at or above their own security level (i.e. secret researchers can create secret or top-secret files but may not create public files; no write-down). Conversely, users can view content only at or below their own security level (i.e. secret researchers can view public or secret files, but may not view top-secret files; no read-up). The Bell-LaPadula model explicitly defined its scope. It did not treat the following extensively: Covert channels. Passing information via pre-arranged actions was described briefly. Networks of systems. Later modeling work did address this topic. Policies outside multilevel security. Work in the early 1990s showed that MLS is one version of boolean policies, as are all other published policies. Strong Star Property The Strong Star Property is an alternative to the *-Property, in which subjects may write to objects with only a matching security level. Thus, the write-up operation permitted in the usual *-Property is not present, only a write-to-same operation. The Strong Star Property is usually discussed in the context of multilevel database management systems and is motivated by integrity concerns. 
This Strong Star Property was anticipated in the Biba model where it was shown that strong integrity in combination with the Bell-LaPadula model resulted in reading and writing at a single level. Tranquility principle The tranquility principle of the Bell-LaPadula model states that the classification of a subject or object does not change while it is being referenced. There are two forms to the tranquility principle: the "principle of strong tranquility" states that security levels do not change during the normal operation of the system. The "principle of weak tranquility" states that security levels may never change in such a way as to violate a defined security policy. Weak tranquility is desirable as it allows systems to observe the principle of least privilege. That is, processes start with a low clearance level regardless of their owner's clearance, and progressively accumulate higher clearance levels as actions require it. Limitations Only addresses confidentiality, control of writing (one form of integrity), *-property and discretionary access control Covert channels are mentioned but are not addressed comprehensively The tranquility principle limits its applicability to systems where security levels do not change dynamically. It allows controlled copying from high to low via trusted subjects. The state-transition model does not contain any state invariants. The overall process may take more time.
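To make the two mandatory rules concrete, the following is a minimal Python sketch of the Simple Security Property and the *-Property over a hypothetical, linearly ordered set of classification levels (compartments and the discretionary access matrix are omitted).

```python
# Hypothetical lattice of linearly ordered levels (compartments omitted for brevity).
LEVELS = {"PUBLIC": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

def may_read(subject_clearance: str, object_classification: str) -> bool:
    # Simple Security Property: no read up.
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

def may_write(subject_clearance: str, object_classification: str) -> bool:
    # *-Property: no write down (write only at or above the subject's level).
    return LEVELS[subject_clearance] <= LEVELS[object_classification]

assert may_read("SECRET", "PUBLIC")          # read down: allowed
assert not may_read("SECRET", "TOP_SECRET")  # read up: denied
assert may_write("SECRET", "TOP_SECRET")     # write up: allowed
assert not may_write("SECRET", "PUBLIC")     # write down: denied
```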

QUESTION 94 Which of the following should be a PRIMARY security characteristic when choosing a programming language? A. Native garbage collection mechanisms B. Type safety mechanisms C. Immutable strings D. String handling libraries

Correct Answer: B Explanation/Reference: In computer science, type safety is the extent to which a programming language discourages or prevents type errors. A type error is erroneous or undesirable program behavior caused by a discrepancy between differing data types for the program's constants, variables, and methods (functions), e.g., treating an integer (int) as a floating-point number (float). Type safety is sometimes alternatively considered to be a property of a computer program rather than the language in which that program is written; that is, some languages have type-safe facilities that can be circumvented by programmers who adopt practices that exhibit poor type safety. The formal type-theoretic definition of type safety is considerably stronger than what is understood by most programmers. Type enforcement can be static, catching potential errors at compile time, or dynamic, associating type information with values at run-time and consulting them as needed to detect imminent errors, or a combination of both. The behaviors classified as type errors by a given programming language are usually those that result from attempts to perform operations on values that are not of the appropriate data type. This classification is partly based on opinion; it may imply that any operation not leading to program crashes, security flaws or other obvious failures is legitimate and need not be considered an error, or it may imply that any contravention of the programmer's explicit intent (as communicated via typing annotations) is erroneous and not "type-safe". In the context of static (compile-time) type systems, type safety usually involves (among other things) a guarantee that the eventual value of any expression will be a legitimate member of that expression's static type. The precise requirement is more subtle than this; see, for example, subtype and polymorphism for complications. Type safety is closely linked to memory safety, a restriction on the ability to copy arbitrary bit patterns from one memory location to another. For instance, in an implementation of a language that has some type t, such that some sequence of bits (of the appropriate length) does not represent a legitimate member of t, if that language allows data to be copied into a variable of type t, then it is not type-safe because such an operation might assign a non-t value to that variable. Conversely, if the language is type-unsafe to the extent of allowing an arbitrary integer to be used as a pointer, then it is not memory-safe. Most statically typed languages provide a degree of type safety that is strictly stronger than memory safety, because their type systems enforce the proper use of abstract data types defined by programmers even when this is not strictly necessary for memory safety or for the prevention of any kind of catastrophic failure.
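As a small illustration of static versus dynamic enforcement, the sketch below uses Python type hints (which a separate static checker such as mypy can verify before execution) together with the TypeError Python raises at run time; the function and values are hypothetical.

```python
def average(values: list[float]) -> float:
    # A static checker (e.g. mypy) flags callers that pass the wrong type
    # before the program ever runs.
    return sum(values) / len(values)

average([1.0, 2.0, 3.0])      # fine
# average("not a list")       # static checker: incompatible argument type

# Dynamic enforcement: Python refuses to treat a str as an int at run time
# instead of silently reinterpreting the underlying bits.
try:
    total = 1 + "2"
except TypeError as exc:
    print("type error caught:", exc)
```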

QUESTION 62 An organization has deployed a software product after completing the software development process. Which of the following is the BEST next step? A. Train staff on using the new version B. Promote the deployed version to the users C. Perform post deployment testing of the new version D. Implement a bug tracking system for the new version

Correct Answer: D Explanation/Reference: A bug tracking system or defect tracking system is a software application that keeps track of reported software bugs in software development projects. It may be regarded as a type of issue tracking system.

QUESTION 66 What is the MAIN purpose of sampling when creating test data? A. Randomize the data to improve the quality of the testing B. Adjust user data to be used in lab server with small hard disks C. Reduce the size of the data set to improve the speed of later analysis D. Reduce redundancies in the data to speed up testing and later analysis

Correct Answer: D Explanation/Reference: Recall that statistical inference permits us to draw conclusions about a population based on a sample. Sampling (i.e. selecting a subset of a whole population) is often done for reasons of cost (it's less expensive to sample 1,000 television viewers than 100 million TV viewers) and practicality (e.g. performing a crash test on every automobile produced is impractical). In any case, the sampled population and the target population should be similar to one another.
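A minimal sketch of simple random sampling for test data, in Python with the standard-library random module; the record structure and sample size are assumptions for the example.

```python
import random

random.seed(42)                      # reproducible test data

# Hypothetical "production" record set we want to shrink into test data.
population = [{"id": i, "region": random.choice(["NA", "EU", "APAC"])}
              for i in range(100_000)]

sample = random.sample(population, k=1_000)   # 1% simple random sample

# Quick check that the sample roughly mirrors the population's make-up.
def share(records, region):
    return sum(r["region"] == region for r in records) / len(records)

for region in ("NA", "EU", "APAC"):
    print(region, round(share(population, region), 3), round(share(sample, region), 3))
```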

QUESTION 185 In software development, which of the following BEST represents weighing and categorizing based on potential impact to the business? A. Data classification B. Recovery classification C. Project classification D. System classification

Correct Answer: A Explanation/Reference: How to start the process of data classification Note that this classification structure is written from a Data Management perspective and therefore has a focus on text and text-convertible binary data sources. Images, videos, and audio files are highly structured formats built for industry standard APIs and do not readily fit within the classification scheme outlined below. The first step is to evaluate and divide the various applications and data into their respective category as follows: Relational or Tabular data (around 15% of non audio/video data) Generally describes proprietary data which can be accessible only through an application or application programming interfaces (API). Applications that produce structured data are usually database applications. This type of data usually brings complex procedures of data evaluation and migration between the storage tiers. To ensure adequate quality standards, the classification process has to be monitored by subject matter experts. Semi-structured or Poly-structured data (all other non audio/video data that does not conform to a system or platform defined Relational or Tabular form). Generally describes data files that have a dynamic or non-relational semantic structure (e.g. documents, XML, JSON, device or system log output, sensor output). Classification here is a relatively simple process of criteria assignment, with a simple process of data migration between assigned segments of predefined storage tiers. Types of data classification - note that this designation is entirely orthogonal to the application-centric designation outlined above. Regardless of the structure inherited from the application, data may be of the types below: 1. Geographical: i.e. according to area (e.g. the rice production of a state or country) 2. Chronological: i.e. according to time (e.g. sales of the last 3 months) 3. Qualitative: i.e. according to distinct categories (e.g. population on the basis of poor and rich) 4. Quantitative: i.e. according to magnitude, either (a) discrete or (b) continuous Basic criteria for semi-structured or poly-structured data classification: Time criteria are the simplest and most commonly used, where different types of data are evaluated by time of creation, time of access, time of update, etc. Metadata criteria such as type, name, owner, location and so on can be used to create a more advanced classification policy. Content criteria, which involve the use of advanced content classification algorithms, are the most advanced forms of unstructured data classification. Note that any of these criteria may also apply to Tabular or Relational data as "Basic Criteria". These criteria are application specific, rather than inherent aspects of the form in which the data is presented.

QUESTION 173 Which of the following MUST be done for security during requirements analysis? A. Develop security use cases B. Develop an overall security model C. Build Role Based Access Control (RBAC) model D. Establish the total cost of recovering from an abuse

Correct Answer: A Explanation/Reference: Capturing security requirements for software systems: abuse frames are used to model threats, while security problem frames are used to model security requirements.

QUESTION 99 The designers of an Application Programming Interface (API) choose a design in which all the parameters, including the authentication and authorization tokens, are presented at one time in the initialization step. They reject a proposed design that requires three separate steps for acquiring authentication tokens, authorization tokens, and object tokens from three different servers. They made this choice based on which of the following principles? A. Defense in depth B. Economy of mechanism C. Least common mechanism D. Reusing existing components

Correct Answer: B Explanation/Reference: Economy of Mechanism The terms security and complexity are often at odds with each other. This is because the more complex something is, the harder it is to understand, and you cannot truly secure something if you do not understand it. Another reason complexity is a problem within security is that it usually allows too many opportunities for something to go wrong. If an application has 4000 lines of code, there are a lot fewer places for buffer overflows, for example, than in an application of two million lines of code. As with any other type of technology or problem in life, when something goes wrong with security mechanisms, a troubleshooting process is used to identify the actual issue. If the mechanism is overly complex, identifying the root of the problem can be overwhelming, if not nearly impossible. Security is already a very complex issue because there are so many variables involved, so many types of attacks and vulnerabilities, so many different types of resources to secure, and so many different ways of securing them. You want your security processes and tools to be as simple and elegant as possible. They should be simple to troubleshoot, simple to use, and simple to administer. Another application of the principle of keeping things simple concerns the number of services that you allow your system to run. Default installations of computer operating systems often leave many services running. The keep-it-simple principle tells us to eliminate those that we don't need. This is also a good idea from a security standpoint because it results in fewer applications that can be exploited and fewer services that the administrator is responsible for securing. The general rule of thumb should be to always eliminate all nonessential services and protocols. This, of course, leads to the question, how do you determine whether a service or protocol is essential or not? Ideally, you should know what your computer system or network is being used for, and thus you should be able to identify those elements that are essential and activate only them. For a variety of reasons, this is not as easy as it sounds. Alternatively, a stringent security approach that one can take is to assume that no service is necessary (which is obviously absurd) and activate services and ports only as they are requested. Whatever approach is taken, there is a never-ending struggle to try to strike a balance between providing functionality and maintaining security.

QUESTION 116 Which of the following risk concepts is accurate? A. Threats can decrease countermeasures B. Threats can increase vulnerabilities to assets C. Vulnerabilities can be mitigated by countermeasures D. Countermeasures can reduce threats to assets

Correct Answer: C Explanation/Reference: Controls (also called countermeasures or safeguards) are designed to control risk by reducing vulnerabilities to an acceptable level. (In this text, the terms control, countermeasure, and safeguard are considered synonymous and are used interchangeably.)

QUESTION 153 Storing client-side data using cookies may pose a risk if appropriate security controls are not used. Which of the following would BEST reduce the risk? A. Apply Rivest-Shamir-Adleman (RSA) private key encryption to the data B. Apply symmetric encryption to the data C. Add a secure hash to the data in the cookie D. Set the "Expires" option on the cookie

Correct Answer: C Explanation/Reference: If the encryption key is stored on the server, then only the server can decrypt the cookie, and only the server can make predictable changes to the cookie. An attacker can make changes to the cyphertext of the cookie, but they cannot know in advance what effect those changes will have. If the cookie additionally includes a message authentication code or other anti-tampering measure, then an attacker cannot make changes to an encrypted cookie without invalidating it.
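As a sketch of the "secure hash" control the answer refers to, the following Python example adds a keyed hash (HMAC) to the cookie value so the server can detect tampering; the secret key, cookie format, and function names are hypothetical.

```python
import base64
import hashlib
import hmac

SERVER_SECRET = b"keep-this-key-on-the-server-only"   # hypothetical key, never sent to clients

def sign_cookie(value: str) -> str:
    mac = hmac.new(SERVER_SECRET, value.encode(), hashlib.sha256).digest()
    return value + "." + base64.urlsafe_b64encode(mac).decode()

def verify_cookie(cookie: str):
    value, _, sent_mac = cookie.rpartition(".")
    expected = hmac.new(SERVER_SECRET, value.encode(), hashlib.sha256).digest()
    # Constant-time comparison avoids leaking the MAC through timing.
    if hmac.compare_digest(base64.urlsafe_b64encode(expected).decode(), sent_mac):
        return value
    return None

cookie = sign_cookie("user=alice;role=member")
assert verify_cookie(cookie) == "user=alice;role=member"
assert verify_cookie(cookie.replace("member", "admin")) is None   # tampering detected
```

Note that the MAC only provides integrity; if the cookie also holds data the client must not read, encryption would be layered on top.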

QUESTION 117 Once attack vectors are identified, ratings are applied to indicate which of the following? A. Relevance B. Source C. Severity D. Recurrence

Correct Answer: C Explanation/Reference: The Common Vulnerability Scoring System (CVSS) is a free and open industry standard for assessing the severity of computer system security vulnerabilities. CVSS attempts to assign severity scores to vulnerabilities, allowing responders to prioritize responses and resources according to threat. Scores are calculated based on a formula that depends on several metrics that approximate ease of exploit and the impact of exploit. Scores range from 0 to 10, with 10 being the most severe. While many utilize only the CVSS Base score for determining severity, temporal and environmental scores also exist, to factor in availability of mitigations and how widespread vulnerable systems are within an organization, respectively.
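As an illustration, the sketch below maps CVSS v3 base scores to their qualitative severity bands and uses the rating to order findings for triage; the findings and their scores are made up for the example.

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3 base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Triage example: sort findings so the most severe attack vectors are handled first.
findings = {"SQL injection": 9.8, "verbose error page": 5.3, "missing cache header": 3.1}
for name, score in sorted(findings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score} ({cvss_v3_severity(score)})")
```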

QUESTION 179 Which of the following methodologies is MOST likely to be used during white-box testing of a mission-critical software application? A. Fuzzing B. Syntax testing C. Static code analysis D. Monitoring program behaviour

Correct Answer: C Explanation/Reference: White box testing implies that the testers have the code and can do static testing, not just dynamic testing

QUESTION 169 Which of the following is the MAIN reason for performing a requirements analysis during the requirements phase of the Software Development Life Cycle (SDLC)? A. Anticipating future changes to the requirements B. Understanding the unit test that must be developed C. Resolving conflicts between requirements D. Preparing for ongoing changes in customer requests

Correct Answer: C Explanation/Reference: Requirements analysis, also called requirements engineering, is the process of determining user expectations for a new or modified product. These features, called requirements, must be quantifiable, relevant and detailed. In software engineering, such requirements are often called functional specifications.

QUESTION 111 Which of the following is the BEST example of Software as a Service (SaaS)? A. Service Oriented Architecture (SOA) B. Network Functional Virtualization (NFV) C. Web-based email D. A database server

Correct Answer: C Explanation/Reference: Webmail, Twitter, and CRM are prime examples of SaaS

QUESTION 181 How long should application activity logs be stored to reduce risk? A. Perpetually on the machine on which they were generated B. For a minimally acceptable period only on the machine on which they were generated C. Perpetually on a centralized log management server D. For a minimally acceptable period on a centralized log management server

Correct Answer: D Explanation/Reference: Storing logs perpetually would eventually exhaust storage, so retention should be limited to a minimally acceptable period defined by policy. Keeping the logs on a centralized log management server, rather than only on the machine that generated them, also protects them from loss or tampering if that machine is compromised.

QUESTION 95 Which of the following clauses provides the BEST product assurance in every vendor agreement? A. Vulnerability scanning B. Right to terminate C. Force majeure D. Right to audit

Correct Answer: D Explanation/Reference: "Right to audit" provisions in technology services agreements are common. You've seen them. A typical section will read something like this: Vendor will keep accurate and complete records and accounts pertaining to the performance of the Services. Upon no less than seven (7) days' written notice, and no more than once per calendar year, Customer may audit, or nominate a reputable accounting firm to audit, Vendor's records relating to its performance under this Agreement, including amounts claimed, during the term of the Agreement and for a period of three months thereafter. Clearly these provisions generally benefit the customer, to give it some transparency and assurance that the vendor is performing the services according to the agreement and that vendor is charging customer for the services appropriately.

QUESTION 122 Which of the following is the BEST defense against a Cross-Site Request Forgery (CSRF) attack? A. Use the Hypertext Transfer Protocol (HTTP) POST method instead of GET to submit data B. Use the HTTP GET method instead of POST to submit data C. Verify that the Referrer header matches the expected host D. Verify that requests include a secret, user-specific token

Correct Answer: D Explanation/Reference: Cross-Site Request Forgery (CSRF) is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user's web browser to perform an unwanted action on a trusted site for which the user is currently authenticated. The impact of a successful CSRF attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or purchasing an item in the user's context. In effect, CSRF attacks are used by an attacker to make a target system perform a function via the target's browser without knowledge of the target user, at least until the unauthorized transaction has been committed. Impacts of successful CSRF exploits vary greatly based on the privileges of each victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user is an administrator account, a CSRF attack can compromise the entire web application. Sites that are more likely to be attacked by CSRF are community websites (social networking, email) or sites that have high dollar value accounts associated with them (banks, stock brokerages, bill pay services). Utilizing social engineering, an attacker can embed malicious HTML or JavaScript code into an email or website to request a specific 'task URL'. The task then executes with or without the user's knowledge, either directly or by utilizing a Cross-Site Scripting flaw (ex: Samy MySpace Worm). Cross-Site Scripting is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat token, Double-Submit cookie, referer and origin based CSRF defenses. This is because an XSS payload can simply read any page on the site using a XMLHttpRequest and obtain the generated token from the response, and include that token with a forged request. This technique is exactly how the MySpace (Samy) worm defeated MySpace's anti-CSRF defenses in 2005, which enabled the worm to propagate. XSS cannot defeat challenge-response defenses such as Captcha, re-authentication or one-time passwords. It is imperative that no XSS vulnerabilities are present to ensure that CSRF defenses can't be circumvented. Please see the OWASP XSS Prevention Cheat Sheet for detailed guidance on how to prevent XSS flaws. General Recommendations For Automated CSRF Defense We recommend two separate checks as your standard CSRF defense that does not require user intervention. This discussion ignores for the moment deliberately allowed cross origin requests (e.g., CORS). Your defenses will have to adjust for that if that is allowed. Check standard headers to verify the request is same origin AND Check CSRF token Each of these is discussed next. Verifying Same Origin with Standard Headers There are two steps to this check: Determining the origin the request is coming from (source origin) Determining the origin the request is going to (target origin) Both of these steps rely on examining an HTTP request header value. Although it is usually trivial to spoof any header from a browser using JavaScript, it is generally impossible to do so in the victim's browser during a CSRF attack, except via an XSS vulnerability in the site being attacked with CSRF. More importantly for this recommended Same Origin check, a number of HTTP request headers can't be set by JavaScript because they are on the 'forbidden' headers list. 
Only the browsers themselves can set values for these headers, making them more trustworthy because not even an XSS vulnerability can be used to modify them. The Source Origin check recommended here relies on three of these protected headers: Origin, Referrer, and Host, making it a pretty strong CSRF defense all on its own. Identifying Source Origin To identify the source origin, we recommend using one of these two standard headers that almost all requests include one or both of: Origin Header Referrer Header Checking the Origin Header If the Origin header is present, verify its value matches the target origin. The Origin HTTP Header standard was introduced as a method of defending against CSRF and other Cross-Domain attacks. Unlike the Referrer, the Origin header will be present in HTTP requests that originate from an HTTPS URL. If the Origin header is present, then it should be checked to make sure it matches the target origin. This defense technique is specifically proposed in section 5.0 of Robust Defenses for Cross-Site Request Forgery. This paper proposes the creation of the Origin header and its use as a CSRF defense mechanism. There are some situations where the Origin header is not present. Internet Explorer 11 does not add the Origin header on a CORS request across sites of a trusted zone. The Referrer header will remain the only indication of the UI origin. Following a 302 redirect cross-origin. In this situation, the Origin is not included in the redirected request because that may be considered sensitive information you don't want to send to the other origin. But since we recommend rejecting requests that don't have both Origin and Referer headers, this is OK, because the reason the Origin header isn't there is because it is a cross-origin redirect. Checking the Referrer Header If the Origin header is not present, verify the hostname in the Referrer header matches the target origin. Checking the Referrer is a commonly used method of preventing CSRF on embedded network devices because it does not require any per-user state. This makes Referrer a useful method of CSRF prevention when memory is scarce or server-side state doesn't exist. This method of CSRF mitigation is also commonly used with unauthenticated requests, such as requests made prior to establishing a session state which is required to keep track of a synchronization token. In both cases, just make sure the target origin check is strong. For example, if your site is "site.com" make sure "site.com.attacker.com" doesn't pass your origin check (i.e., match through the trailing / after the origin to make sure you are matching against the entire origin). What to do when Both Origin and Referrer Headers Aren't Present If neither of these headers is present, which should be VERY rare, you can either accept or block the request. We recommend blocking, particularly if you aren't using a random CSRF token as your second check. You might want to log when this happens for a while and if you basically never see it, start blocking such requests. Identifying the Target Origin You might think it's easy to determine the target origin, but its frequently not. The first thought is to simply grab the target origin (i.e., its hostname and port #) from the URL in the request. However, the application server is frequently sitting behind one or more proxies and the original URL is different from the URL the app server actually receives. 
If your application server is directly accessed by its users, then using the origin in the URL is fine and you're all set. Determining the Target Origin When Behind a Proxy If you are behind a proxy, there are a number of options to consider: Configure your application to simply know its target origin Use the Host header value Use the X-Forwarded-Host header value It's your application, so clearly you can figure out its target origin and set that value in some server configuration entry. This would be the most secure approach as it's defined server side, so it is a trusted value. However, this can be problematic to maintain if your application is deployed in many different places, e.g., dev, test, QA, production, and possibly multiple production instances. Setting the correct value for each of these situations can be difficult, but if you can do it, that's great. If you would prefer the application figure it out on its own, so it doesn't have to be configured differently for each deployed instance, we recommend using the Host family of headers. The Host header's purpose is to contain the target origin of the request. But, if your app server is sitting behind a proxy, the Host header value is most likely changed by the proxy to the target origin of the URL behind the proxy, which is different than the original URL. This modified Host header origin won't match the source origin in the original Origin or Referrer headers. However, there is another header called X-Forwarded-Host, whose purpose is to contain the original Host header value the proxy received. Most proxies will pass along the original Host header value in the X-Forwarded-Host header. So that header value is likely to be the target origin value you need to compare to the source origin in the Origin or Referrer header. Verifying the Two Origins Match Once you've identified the source origin (from either the Origin or Referrer header), and you've determined the target origin, however you choose to do so, then you can simply compare the two values and if they don't match you know you have a cross-origin request. CSRF Specific Defense Once you have verified that the request appears to be a same origin request so far, we recommend a second check as an additional precaution to really make sure. This second check can involve custom defense mechanisms using CSRF specific tokens created and verified by your application or can rely on the presence of other HTTP headers depending on the level of rigor/security you want. There are numerous ways you can specifically defend against CSRF. We recommend using one of the following (in ADDITION to the check recommended above): 1. Synchronizer (i.e., CSRF) Tokens (requires session state) Approaches that do not require server-side state: 2. Double Cookie Defense 3. Encrypted Token Pattern 4. Custom Header - e.g., X-Requested-With: XMLHttpRequest These are listed in order of strength of defense. So use the strongest defense that makes sense in your situation. 
Synchronizer (CSRF) Tokens Any state changing operation requires a secure random token (e.g., CSRF token) to prevent CSRF attacks Characteristics of a CSRF Token Unique per user session Large random value Generated by a cryptographically secure random number generator The CSRF token is added as a hidden field for forms or within the URL if the state changing operation occurs via a GET The server rejects the requested action if the CSRF token fails validation In order to facilitate a "transparent but visible" CSRF solution, developers are encouraged to adopt the Synchronizer Token Pattern (http://www.corej2eepatterns.com/Design/PresoDesign.htm). The synchronizer token pattern requires the generating of random "challenge" tokens that are associated with the user's current session. These challenge tokens are then inserted within the HTML forms and links associated with sensitive server-side operations. When the user wishes to invoke these sensitive operations, the HTTP request should include this challenge token. It is then the responsibility of the server application to verify the existence and correctness of this token. By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired requests. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks as successful exploitation assumes the attacker knows the randomly generated token for the target victim's session. This is analogous to the attacker being able to guess the target victim's session identifier. The following synopsis describes a general approach to incorporate challenge tokens within the request. When a Web application formulates a request (by generating a link or form that causes a request when submitted or clicked by the user), the application should include a hidden input parameter with a common name such as "CSRFToken". The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness utilized in the data that is hashed to generate the random token.
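A minimal, framework-agnostic Python sketch of the synchronizer token pattern described above; the session store and function names are hypothetical, and a real application would keep the token in its session backend and check it on every state-changing request.

```python
import hmac
import secrets

# Hypothetical per-session store; a real application would use its session backend.
_session_tokens: dict[str, str] = {}

def issue_csrf_token(session_id: str) -> str:
    """Generate one cryptographically strong token per session and remember it."""
    token = _session_tokens.get(session_id)
    if token is None:
        token = secrets.token_urlsafe(32)        # large random value
        _session_tokens[session_id] = token
    return token                                 # embedded as a hidden form field

def verify_csrf_token(session_id: str, submitted: str) -> bool:
    """Reject any state-changing request whose token is missing or wrong."""
    expected = _session_tokens.get(session_id)
    return bool(expected) and hmac.compare_digest(expected, submitted or "")

# Request-handling sketch:
sid = "session-abc123"
form_token = issue_csrf_token(sid)               # rendered into the HTML form
assert verify_csrf_token(sid, form_token)        # legitimate submission accepted
assert not verify_csrf_token(sid, "forged")      # forged cross-site request refused
```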

QUESTION 100 Which of the following is the INITIAL step in secure software testing? A. Access the source code B. Design data collection method C. Define security test data D. Develop a security test plan

Correct Answer: D Explanation/Reference: During the development life cycle of a web application many things need to be tested, but what does testing actually mean? The Merriam-Webster Dictionary describes testing as: To put to test or proof. To undergo a test. To be assigned a standing or evaluation based on tests. For the purposes of this document testing is a process of comparing the state of a system or application against a set of criteria. In the security industry people frequently test against a set of mental criteria that are neither well defined nor complete. As a result of this, many outsiders regard security testing as a black art. The aim of this document is to change that perception and to make it easier for people without in-depth security knowledge to make a difference in testing.

QUESTION 140 Which phase of testing is MOST suitable for the final verification of an application prior to release in production? A. Integration B. Unit C. Performance D. User acceptance

Correct Answer: D Explanation/Reference: In software development, user acceptance testing (UAT) - also called beta testing, application testing, and end user testing - is a phase of software development in which the software is tested by its intended users to confirm it meets business requirements before it is released to production.

QUESTION 114 What Software Development Life Cycle (SDLC) documentation MUST be completed before control identification can take place? A. Functional specification B. Business requirements C. Technical specification D. Project risk register

Correct Answer: D Explanation/Reference: The Importance of a Risk Register The risk register starts, of course, with a risk management plan. The project manager must seek input from team members as well as stakeholders and possibly even end users. The risk register or risk log becomes essential as it records identified risks, their severity, and the actions steps to be taken. It can be a simple document, spreadsheet, or a database system, but the most effective format is a table. A table presents a great deal of information in just a few pages. Managers should view the risk register as a management tool through a review and updating process that identifies, assesses, and manages risks down to acceptable levels. The register provides a framework in which problems that threaten the delivery of the anticipated benefits are captured. Actions are then instigated to reduce the probability and the potential impact of specific risks. Make your risk register visible to project stakeholders so they can see that risks are being addressed. They may flag risks you haven't identified and give other options for risk mitigation.
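As an illustration of the tabular form recommended above, here is a minimal Python sketch of a risk register with a simple likelihood-times-impact score; the fields, scales, and entries are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: int          # 1 (rare) .. 5 (almost certain)
    impact: int              # 1 (negligible) .. 5 (severe)
    owner: str
    mitigation: str
    status: str = "Open"

    @property
    def severity(self) -> int:
        return self.likelihood * self.impact   # simple 5x5 scoring matrix

register = [
    RiskEntry("R-001", "Third-party library ships an unpatched CVE", 4, 4,
              "Dev lead", "Pin versions; subscribe to vendor advisories"),
    RiskEntry("R-002", "Test environment lacks production-like data", 3, 2,
              "QA lead", "Generate masked, representative sample data"),
]

# Review the register with the highest-severity risks first.
for entry in sorted(register, key=lambda r: r.severity, reverse=True):
    print(f"{entry.risk_id}  sev={entry.severity:2}  {entry.description}")
```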

QUESTION 9 Symmetric cryptography algorithms are frequently preferred over asymmetric cryptography algorithms in embedded applications because they A. are more resistant to brute force attacks B. use well-known, open-source algorithms C. simultaneously provide integrity checking D. are computationally less demanding

Correct Answer: D Explanation/Reference: Symmetric cryptography is a thousand to ten thousand times faster than asymmetric encryption.
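A rough micro-benchmark sketch of that speed difference, assuming the third-party Python cryptography package is available; exact ratios depend on hardware and parameters, but RSA operations are typically far more expensive per call than AES-GCM on the same small payload.

```python
import os
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

message = os.urandom(190)                      # small enough for 2048-bit RSA-OAEP

aesgcm = AESGCM(AESGCM.generate_key(bit_length=256))
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
rsa_ct = rsa_key.public_key().encrypt(message, oaep)

def per_op_us(fn, n=200):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n * 1e6

print("AES-256-GCM encrypt:", round(per_op_us(
    lambda: aesgcm.encrypt(os.urandom(12), message, None)), 1), "us/op")
print("RSA-2048 OAEP encrypt:", round(per_op_us(
    lambda: rsa_key.public_key().encrypt(message, oaep)), 1), "us/op")
print("RSA-2048 OAEP decrypt:", round(per_op_us(
    lambda: rsa_key.decrypt(rsa_ct, oaep)), 1), "us/op")
```

This computational cost is why embedded and resource-constrained designs tend to use asymmetric cryptography only for key exchange and rely on symmetric algorithms for bulk data.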

QUESTION 60 Which of the following MUST be included when reporting a security vulnerability? A. Working exploit B. Recommended fix C. Steps to verify the vulnerability D. Risk Assessment (RA) computation

Correct Answer: D Explanation/Reference: The OWASP Risk Rating Methodology - Discovering vulnerabilities is important, but being able to estimate the associated risk to the business is just as important. Early in the life cycle, one may identify security concerns in the architecture or design by using threat modeling. Later, one may find security issues using code review or penetration testing. Or problems may not be discovered until the application is in production and is actually compromised. By following the approach here, it is possible to estimate the severity of all of these risks to the business and make an informed decision about what to do about those risks. Having a system in place for rating risks will save time and eliminate arguing about priorities. This system will help to ensure that the business doesn't get distracted by minor risks while ignoring more serious risks that are less well understood. Ideally there would be a universal risk rating system that would accurately estimate all risks for all organizations. But a vulnerability that is critical to one organization may not be very important to another. So a basic framework is presented here that should be customized for the particular organization. The authors have tried hard to make this model simple to use, while keeping enough detail for accurate risk estimates to be made. Please reference the section below on customization for more information about tailoring the model for use in a specific organization.
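As an illustration of the likelihood-and-impact approach the methodology describes, here is a simplified Python sketch that averages 0-9 factor scores, buckets them into LOW/MEDIUM/HIGH, and combines them into an overall severity; the factor values are made up, and a real assessment would work through the full OWASP factor tables.

```python
def level(score: float) -> str:
    """Bucket a 0-9 factor average into the methodology's likelihood/impact levels."""
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

# Overall severity matrix (impact level x likelihood level).
SEVERITY = {
    ("LOW",    "LOW"): "Note",   ("LOW",    "MEDIUM"): "Low",    ("LOW",    "HIGH"): "Medium",
    ("MEDIUM", "LOW"): "Low",    ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
    ("HIGH",   "LOW"): "Medium", ("HIGH",   "MEDIUM"): "High",   ("HIGH",   "HIGH"): "Critical",
}

def overall_risk(likelihood_factors: list, impact_factors: list) -> str:
    likelihood = sum(likelihood_factors) / len(likelihood_factors)
    impact = sum(impact_factors) / len(impact_factors)
    return SEVERITY[(level(impact), level(likelihood))]

# Hypothetical finding: easy to exploit, moderate business impact -> "High".
print(overall_risk(likelihood_factors=[6, 7, 8, 5], impact_factors=[5, 4, 6, 5]))
```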

QUESTION 183 Which of the following is used to provide assurance that a cloud-based system is designed so that the system's security and its data are protected during a system failure? A. Access is only permitted to the system administrator and no other users until security controls are reestablished B. Unintended access points do not exist or can be readily identified and eliminated upon application reestablishment C. Side channels do not expose cryptographic keys for authentication systems D. Permitting access to sensitive information is based on using exclusions instead of permissions for users

Correct Answer: A Explanation/Reference: When a system fails, it should do so securely. This typically involves several things: secure defaults (default is to deny access); on failure undo changes and restore to a secure state; always check return values for failure; and in conditional code/filters make sure that there is a default case that does the right thing. The confidentiality and integrity of a system should remain even though availability has been lost. Attackers must not be permitted to gain access rights to privileged objects during a failure that are normally inaccessible. Upon failing, a system that reveals sensitive information about the failure to potential attackers could supply additional knowledge for creating an attack. Determine what may occur when a system fails and be sure it does not threaten the system.
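A minimal Python sketch of the fail-securely idea: the authorization check defaults to deny, and any failure in the (hypothetical) permission lookup results in denial rather than access.

```python
def is_authorized(user: str, resource: str) -> bool:
    """Fail securely: any error in the permission lookup results in denial."""
    allowed = False                                   # secure default: deny
    try:
        allowed = lookup_permissions(user, resource)  # hypothetical backend call
    except Exception:
        # Availability of the permission service is lost, but confidentiality
        # and integrity are preserved: deny rather than guess.
        log_security_event(f"authorization lookup failed for {user} on {resource}")
        allowed = False
    return allowed

def lookup_permissions(user: str, resource: str) -> bool:
    raise ConnectionError("permission service unreachable")   # simulate a failure

def log_security_event(message: str) -> None:
    print("SECURITY:", message)

assert is_authorized("alice", "payroll-report") is False       # failure => access denied
```

The logged message is also deliberately generic, since revealing failure details to a caller could give an attacker additional knowledge for crafting an attack.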

QUESTION 194 Which of the following is the BEST place for known vulnerabilities to be documented during secure software design? A. Defect tracking database B. Threat model document C. Project management report D. Software Quality Assurance (QA) report

Correct Answer: A Explanation/Reference: A bug tracking system or defect tracking system is a software application that keeps track of reported software bugs in software development projects. It may be regarded as a type of issue tracking system. Many bug tracking systems, such as those used by most open source software projects, allow end-users to enter bug reports directly. Other systems are used only internally in a company or organization doing software development. Typically bug tracking systems are integrated with other software project management applications. A bug tracking system is usually a necessary component of a good software development infrastructure, and consistent use of a bug or issue tracking system is considered one of the "hallmarks of a good software team

QUESTION 170 When developing a new application, which of the following MUST be developed before performing a security architecture technical review? A. Functional data flow model B. User provisioning plan C. Cryptographic Interface Control Document (ICD) D. Application configuration standards

Correct Answer: A Explanation/Reference: A picture is worth a thousand words. A Data Flow Diagram (DFD) is a traditional visual representation of the information flows within a system. A neat and clear DFD can depict a good amount of the system requirements graphically. It can be manual, automated, or a combination of both. It shows how information enters and leaves the system, what changes the information and where information is stored. The purpose of a DFD is to show the scope and boundaries of a system as a whole. It may be used as a communications tool between a systems analyst and any person who plays a part in the system, and it acts as the starting point for redesigning a system.

QUESTION 127 What is the MOST significant component of classifying and determining control for sensitive data within an international organization? A. Confidentiality must be preserved B. Integrity is critical to privacy protection C. Local laws override organizational considerations D. It is mandatory to keep encrypted backup copies of data

Correct Answer: A Explanation/Reference: A well-planned data classification system makes essential data easy to find and retrieve. This can be of particular importance for risk management, legal discovery, and compliance. Written procedures and guidelines for data classification should define what categories and criteria the organization will use to classify data and specify the roles and responsibilities of employees within the organization regarding data stewardship. Once a data-classification scheme has been created, security standards that specify appropriate handling practices for each category and storage standards that define the data's lifecycle requirements should be addressed.

QUESTION 124 Ambiguity analysis is a technique that allows security architects to do which of the following? A. Eliminate potential gaps between specification and development B. Mitigate vulnerabilities in software execution environment C. Eliminate potential attack patterns D. Mitigate the threats on deployment and production platforms

Correct Answer: A Explanation/Reference: An Ambiguity Review, developed by Richard Bender of Bender RBT, Inc., is a very powerful testing technique that eliminates defects in the requirements phase of the software life cycle, thereby preventing those defects from propagating to the remaining phases of the software development life cycle. A QA Engineer trained in the technique performs the Ambiguity Review. The Engineer is not a domain expert (SME) and is not reading the requirements for content, but only to identify ambiguities in the logic and structure of the wording. The Ambiguity Review takes place after the requirements, or a section of the requirements, reach first draft, and prior to their being reviewed for content (i.e., correctness and completeness) by domain experts. The Engineer identifies all ambiguous words and phrases on a copy of the requirements. A summary of the findings is presented to the Business Analyst.

QUESTION 115 A new module for an existing web application has been developed. Functionality and usability testing has been completed in the development and test environment. From a security perspective, what is the MOST appropriate next step? A. Document all known vulnerabilities B. Schedule the module for deployment C. Determine web page loading times D. Update the firewall rules

Correct Answer: A Explanation/Reference: Application Security Vulnerability: Code Flaws, Insecure Code Understanding Application Vulnerabilities What is an Application Vulnerability? An application vulnerability is a system flaw or weakness in an application that could be exploited to compromise the security of the application. Once an attacker has found a flaw, or application vulnerability, and determined how to access it, the attacker has the potential to exploit the application vulnerability to facilitate a cyber crime. These crimes target the confidentiality, integrity, or availability (known as the "CIA triad") of resources possessed by an application, its creators, and its users. Attackers typically rely on specific tools or methods to perform application vulnerability discovery and compromise. According to Gartner Security, the application layer currently contains 90% of all vulnerabilities. Common Application Vulnerability Exploits While there are many different tools and techniques for exploiting application vulnerabilities, there are a handful that are much more common than others. These include: Cross Site Scripting SQL Injection LDAP Injection Cross Site Request Forgery Insecure Cryptographic Storage Application Vulnerability Management It is common for software and application developers to use vulnerability scanning software to detect and remedy application vulnerabilities in code, but this method is not entirely secure and can be costly and difficult to use. Furthermore, scanning software quickly becomes outdated and inaccurate, which only poses more issues for developers to address in trying to make their applications secure. Reducing Application Vulnerability Risk Veracode's cloud-based service and systematic approach deliver a simpler and more scalable solution for reducing global application-layer risk across web, mobile and third-party applications. Recognized as a Gartner Magic Quadrant Leader since 2010, Veracode provides on-demand application vulnerability testing to detect and offer solutions for vulnerabilities and other security issues. Since Veracode offers a service instead of a scanning tool, companies are able to save costs by having their applications tested at the highest level of accuracy without the need for purchasing and updating software or hiring specialists to operate and maintain the software.

QUESTION 144 What is the MAIN purpose of an architectural risk analysis? A. Determine whether the system has inherent design vulnerabilities B. Assuring that implementation defects have not been introduced C. Verifying that proper cryptographic key storage has been followed D. Validating that network layer communication is tamper resistant

Correct Answer: A Explanation/Reference: Architectural risk assessment is a risk management process that identifies flaws in a software architecture and determines risks to business information assets that result from those flaws. Through the process of architectural risk assessment, flaws are found that expose information assets to risk, risks are prioritized based on their impact to the business, mitigations for those risks are developed and implemented, and the software is reassessed to determine the efficacy of the mitigations.

QUESTION 129 Which of the following is the MOST important consideration when securely integrating multiple networked components for an Internet of Things (IoT) system? A. Formal analysis of secure communication pairing protocols B. Formal analysis of IoT System-on-Chip (SoC) components C. Cryptographic features of Field Programmable Gate Array (FPGA) IoT chips D. Correct use of Transmission Control Protocol / Internet Protocol (TCP/IP) network stack open source library in IoT components

Correct Answer: A Explanation/Reference: Bluetooth Low Energy, also known as Bluetooth Smart or Bluetooth 4, is designed to be power efficient and has been popular for transporting data between smartphones and IoT devices, smart homes, medical equipment and physical access control devices. Jasek presented his findings last week at Black Hat USA where he also introduced a BLE proxy tool, dubbed GATTacker, for detecting the presence of and exploiting the vulnerability. GATTacker can "see" data transferred between a smartphone used as a controller and a BLE device. It can also either clone the controller or capture and manipulate data transferred between the two BLE devices when certain conditions are met.

QUESTION 197 Adding a unique token to each transaction request mitigates which of the following attacks? A. Cross-Site Request Forgery (CSRF) B. Man-in-the-Middle (MITM) C. SQL injection D. Command injection

Correct Answer: A Explanation/Reference: CSRF Specific Defense Once you have verified that the request appears to be a same origin request so far, we recommend a second check as an additional precaution to really make sure. This second check can involve custom defense mechanisms using CSRF-specific tokens created and verified by your application or can rely on the presence of other HTTP headers depending on the level of rigor/security you want. There are numerous ways you can specifically defend against CSRF. We recommend using one of the following (in ADDITION to the check recommended above): 1. Synchronizer (i.e., CSRF) Tokens (requires session state) Approaches that do not require server-side state: 2. Double Cookie Defense 3. Encrypted Token Pattern 4. Custom Header - e.g., X-Requested-With: XMLHttpRequest These are listed in order of strength of defense, so use the strongest defense that makes sense in your situation. Synchronizer (CSRF) Tokens Any state-changing operation requires a secure random token (e.g., CSRF token) to prevent CSRF attacks. Characteristics of a CSRF Token: Unique per user session Large random value Generated by a cryptographically secure random number generator The CSRF token is added as a hidden field for forms or within the URL if the state-changing operation occurs via a GET The server rejects the requested action if the CSRF token fails validation In order to facilitate a "transparent but visible" CSRF solution, developers are encouraged to adopt the Synchronizer Token Pattern (http://www.corej2eepatterns.com/Design/PresoDesign.htm). The synchronizer token pattern requires generating random "challenge" tokens that are associated with the user's current session. These challenge tokens are then inserted within the HTML forms and links associated with sensitive server-side operations. When the user wishes to invoke these sensitive operations, the HTTP request should include this challenge token. It is then the responsibility of the server application to verify the existence and correctness of this token. By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired requests. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks, as successful exploitation assumes the attacker knows the randomly generated token for the target victim's session. This is analogous to the attacker being able to guess the target victim's session identifier. The following synopsis describes a general approach to incorporating challenge tokens within the request. When a Web application formulates a request (by generating a link or form that causes a request when submitted or clicked by the user), the application should include a hidden input parameter with a common name such as "CSRFToken". The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness utilized in the data that is hashed to generate the random token.
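As a minimal illustration of the synchronizer token pattern, the Java sketch below generates a 256-bit token with java.security.SecureRandom, associates it with a session, and validates it on each state-changing request. The ConcurrentHashMap stands in for real server-side session storage, and the method and field names are hypothetical; in a web application the token would typically live in the HTTP session and be emitted as a hidden "CSRFToken" form field.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the synchronizer (CSRF) token pattern; the session map is a
// simplified stand-in for real per-user session state.
public class CsrfTokenExample {

    private static final SecureRandom RNG = new SecureRandom();
    private static final Map<String, String> SESSION_TOKENS = new ConcurrentHashMap<>();

    // Generate a large random token and associate it with the user's session.
    public static String issueToken(String sessionId) {
        byte[] bytes = new byte[32];                             // 256 bits of randomness
        RNG.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        SESSION_TOKENS.put(sessionId, token);
        return token;                                            // embed as a hidden form field
    }

    // Reject any state-changing request whose submitted token is missing or wrong.
    public static boolean isValid(String sessionId, String submittedToken) {
        String expected = SESSION_TOKENS.get(sessionId);
        if (expected == null || submittedToken == null) {
            return false;                                        // secure default: deny
        }
        // Constant-time comparison to avoid leaking information through timing.
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                submittedToken.getBytes(StandardCharsets.UTF_8));
    }
}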

QUESTION 148 The Systems Security Engineering Capability Maturity Model (SSE-CMM) standard is used for which of the following reasons? A. Standard mechanism for customers to evaluate a provider's security engineering capability B. Standard mechanism for software engineers to improve the security of the code they are developing C. Standard mechanism for customers to evaluate the productivity D. Standard mechanism for customers to evaluate the productivity of a provider's security engineers

Correct Answer: A Explanation/Reference: Capability Maturity Model Integration (CMMI) is a process level improvement training and appraisal program. Administered by the CMMI Institute, a subsidiary of ISACA, it was developed at Carnegie Mellon University (CMU). It is required by many United States Department of Defense (DoD) and U.S. Government contracts, especially in software development. CMU claims CMMI can be used to guide process improvement across a project, division, or an entire organization. CMMI defines the following maturity levels for processes: Initial, Managed, Defined, Quantitatively Managed, and Optimizing.

QUESTION 164 In balancing the costs of an enforceable and controllable code review process against the benefits of defect detection rate and defect density, which of the following represents the MOST effective code review methodology? A. Formal, multi-participant peer reviews tied to significant code changes B. Informal pass around reviews conducted in person or through desktop-sharing tools C. Over-the-Shoulder reviews conducted in person or through desktop-sharing tools D. Lightweight, informal reviews using purpose-built tools tied to the code check-in process

Correct Answer: A Explanation/Reference: Code review is systematic examination (sometimes referred to as peer review) of computer source code. It is intended to find mistakes overlooked in the initial development phase, improving the overall quality of software.

QUESTION 135 In which of the following ways does Comprehensive, Lightweight Application Security Process (CLASP) help a software development professional? A. By providing storybook templates and use cases for security infractions in applications B. By facilitating the integration of security-related activities into the existing application development process C. By providing assurance control points for process maturity levels as defined in Capability Maturity Model Integration (CMMI) D. By providing organizations with template policies used in software development projects

Correct Answer: A Explanation/Reference: Comprehensive, Lightweight Application Security Process (CLASP) is an activity-driven, role-based set of process components guided by formalized best practices. CLASP is designed to help software development teams build security into the early stages of existing and new-start software development life cycles in a structured, repeatable, and measurable way. CLASP is based on extensive field work by Secure Software employees in which the system resources of many development life cycles were decomposed to create a comprehensive set of security requirements. These resulting requirements form the basis of CLASP's Best Practices, which can enable organizations to systematically address vulnerabilities that, if exploited, can result in the failure of basic security services (e.g., confidentiality, authentication, and authorization). Overview This introduction to the CLASP Process provides an overview of CLASP's process structure and the dependencies among the CLASP process components, as follows: CLASP Views CLASP Best Practices 24 CLASP Activities CLASP Resources (including CLASP Keywords) Taxonomy of CLASP This introduction concludes with a section providing further information on CLASP. CLASP Views The CLASP Process is presented through five high-level perspectives called CLASP Views. These views allow CLASP users to quickly understand the CLASP Process, including how CLASP process components interact and how to apply them to a specific software development life cycle. These are the CLASP Views: Concepts View This view provides a high-level introduction to CLASP by briefly describing, for example, the interaction of the five CLASP Views, the seven CLASP Best Practices, the CLASP Taxonomy, the relation of CLASP to security policies, and a sample sequence for applying CLASP process components. Role-Based View This view contains role-based introductions to the CLASP Process. Activity-Assessment View This view helps project managers assess the appropriateness of the 24 CLASP Activities and select a subset of them. CLASP provides two sample road maps (i.e., legacy and new-start) to help select applicable activities. Activity-Implementation View This view contains the 24 security-related CLASP Activities that can be integrated into a software development process. The activities phase of the SDLC translates into executable software any subset of the 24 security-related activities assessed and accepted in Activity Assessment. Vulnerability View This view contains a catalog of the 104 underlying "problem types" identified by CLASP that form the basis of security vulnerabilities in application source code. CLASP divides the 104 problem types into 5 high-level categories. An individual problem type in itself is often not a security vulnerability; frequently, it is a combination of problems that create a security condition leading to a vulnerability in source code. Associated with the Vulnerability View are the CLASP Vulnerability Use Cases, which depict conditions under which security services are vulnerable to attack at the application layer. The use cases provide CLASP users with easy-to-understand, specific examples of the relationship between security-unaware source coding and possible resulting vulnerabilities in basic security services. See https://www.us-cert.gov/bsi/articles/best-practices/requirements-engineering/introduction-to-the-clasp-process

QUESTION 109 Which of the following MUST be done prior to creating security test cases that focus on privacy? A. Establish the system privacy requirements B. Create an Acceptable Use Policy (AUP) C. Enable privacy preferences for the system cookies D. Define the functional requirements

Correct Answer: A Explanation/Reference: Defining and integrating security and privacy requirements early helps make it easier to identify key milestones and deliverables and minimize disruptions to plans and schedules. Security and privacy analysis includes assigning security experts, defining minimum security and privacy criteria for an application, and deploying a security vulnerability/work item tracking system.

QUESTION 168 Which of the following is a defense against unintentional use of malicious open source libraries? A. Statically linking the executable B. Linking shared libraries in assembly code C. Manually optimizing the executable D. Linking the executable at run time

Correct Answer: A Explanation/Reference: Dynamic linking can reduce total resource consumption (if more than one process shares the same library, including the same version of it). I believe this is the argument that drives its presence in most environments. Here "resources" includes disk space, RAM, and cache space. Of course, if your dynamic linker is insufficiently flexible there is a risk of DLL hell. Dynamic linking means that bug fixes and upgrades to libraries propagate to improve your product without requiring you to ship anything. Plugins always call for dynamic linking. Static linking means that you can know the code will run in very limited environments (early in the boot process, or in rescue mode). Static linking can make binaries easier to distribute to diverse user environments (at the cost of shipping a larger and more resource-hungry program). Static linking may allow slightly faster startup times, but this depends to some degree on both the size and complexity of your program and on the details of the OS's loading strategy.

QUESTION 139 Using many different external components makes an application more vulnerable to which kind of attack? A. Dynamic Link Library (DLL) replacement B. Cross-Site Request Forgery (CSRF) C. Structured Query Language (SQL) injection D. Cross-Site Scripting (XSS)

Correct Answer: A Explanation/Reference: In computer programming, DLL injection is a technique used for running code within the address space of another process by forcing it to load a dynamic-link library. DLL injection is often used by external programs to influence the behavior of another program in a way its authors did not anticipate or intend. For example, the injected code could hook system function calls, or read the contents of password textboxes, which cannot be done the usual way. A program used to inject arbitrary code into arbitrary processes is called a DLL injector.

QUESTION 188 A recent update to a web application has incorporated Rich Internet Application (RIA) functionality. Which of the following methods BEST mitigates overload of the web application server? A. Pool database connections B. Limit the number of components used C. Cluster web servers D. Develop a robust workflow support

Correct Answer: A Explanation/Reference: In software engineering, a connection pool is a cache of database connections maintained so that the connections can be reused when future requests to the database are required. Connection pools are used to enhance the performance of executing commands on a database.
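As a simple illustration, the Java sketch below hand-rolls a tiny pool on top of a blocking queue so that a fixed number of database connections are opened once and then reused rather than opened per request; the JDBC URL and credentials are placeholders, and a production system would normally use an established pooling library instead.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal sketch of a database connection pool; real deployments would use a
// mature pool implementation rather than this hand-rolled version.
public class SimpleConnectionPool {

    private final BlockingQueue<Connection> pool;

    public SimpleConnectionPool(String url, String user, String password, int size)
            throws SQLException {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(DriverManager.getConnection(url, user, password));   // opened once
        }
    }

    // Borrow a connection; callers block until one is free instead of
    // opening a fresh connection for every request and overloading the server.
    public Connection borrow() throws InterruptedException {
        return pool.take();
    }

    // Return the connection for reuse instead of closing it.
    public void release(Connection connection) {
        pool.offer(connection);
    }
}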

QUESTION 177 Which of the following documents, when shared among suppliers, reduces the risk of introducing security defects as software transitions from one supplier to another? A. Data interchange format definitions B. Database schema definitions C. Patch management plans D. Incident response plans

Correct Answer: A Explanation/Reference: In software engineering, especially with CASE tools and CASE systems, conceptual models defined in one tool usually are not transferable to another vendor's tool. This was seen as a substantial problem by the EIA. In 1987 the EIA started work on a CASE data interchange format (abbreviated CDIF), which attempts to tackle the problem by defining so-called meta-models for specific subject areas by means of an object-oriented entity relationship modeling technique. See http://wi.wu-wien.ac.at/rgf/9606mobi.html

QUESTION 118 Which of the following is the KEY benefit of static analysis? A. Errors are found earlier in the development phase B. Errors are found during the application execution C. Errors are found earlier in the integration phase D. Errors are found after release to production

Correct Answer: A Explanation/Reference: Static analysis, also called static code analysis, is a method of computer program debugging that is done by examining the code without executing the program. The process provides an understanding of the code structure, and can help to ensure that the code adheres to industry standards.
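For example, a static analyzer examining the hypothetical Java method below, without ever running it, can report that the returned value may trigger a NullPointerException and that the reader is never closed; both defects are found in the development phase, before the code is executed or integrated.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Hypothetical snippet of the kind a static analyzer flags without executing it.
public class StaticAnalysisFindings {

    public static String firstLine(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line = reader.readLine();    // readLine() may return null at end of stream
        return line.trim();                 // flagged: possible null dereference; reader never closed
    }
}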

QUESTION 171 The overall purpose of a Business Continuity Plan (BCP) in software operations is to mitigate the impact of which of the following? A. Downtime to an organization B. Significant security events C. Identifiable threats D. Known security vulnerabilities

Correct Answer: A Explanation/Reference: Business continuity planning (BCP) is the creation of a strategy, through the recognition of threats and risks facing a company, to ensure that personnel and assets are protected and able to function in the event of a disaster.

QUESTION 186 A data abstraction layer that requires authorization tokens for every access to data can be said to do which of the following? A. Completely mediate data access B. Provide confidential data access C. Improve reliability of data access D. Increase auditability of data access

Correct Answer: A Explanation/Reference: The principle of complete mediation requires that all accesses to objects be checked to ensure they are allowed. Whenever a subject attempts to read an object, the operating system should mediate the action. First, it determines if the subject can read the object.
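A minimal Java sketch of this idea appears below, under the assumption that a token store and record map stand in for a real authorization service and data store: every read and write re-validates the caller's authorization token, so no access path bypasses the check.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of complete mediation in a data abstraction layer: every access
// to data requires a valid authorization token, checked on each call.
public class MediatedDataLayer {

    private final Map<String, String> records = new ConcurrentHashMap<>();
    private final Map<String, String> validTokens = new ConcurrentHashMap<>(); // token -> subject

    // In a real system tokens would be issued by an authorization service.
    public void grantToken(String token, String subject) {
        validTokens.put(token, subject);
    }

    public String read(String token, String key) {
        requireAuthorized(token);            // checked on every access, never cached
        return records.get(key);
    }

    public void write(String token, String key, String value) {
        requireAuthorized(token);            // no bypass path exists around this check
        records.put(key, value);
    }

    private void requireAuthorized(String token) {
        if (token == null || !validTokens.containsKey(token)) {
            throw new SecurityException("Access denied");   // secure default
        }
    }
}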

QUESTION 165 From a supply chain perspective, what is the MOST important consideration for an organization that has decided to multisource some of its integrated business functions that contain sensitive data? A. Service Level Agreements (SLA) that align with the business requirements B. Performance metrics that focus on core business responses C. Personnel cost savings D. Retention of specialized skills and staff

Correct Answer: A Explanation/Reference: SLAs are legally enforceable. What are SLAs? Service Level Agreements (SLAs) may come in a variety of forms. These can range from internal agreements between departments or vendor to client agreements. Service Level Agreements for internal entities are also referred to as Service Level Requirements (SLRs). For this paper, SLA and SLR will maintain similar meanings and will be referenced by SLA. The SLA is an agreement between a perceived service provider and a perceived customer. The flavor of the agreement depends entirely on the parties involved. SLAs address the following fundamental questions: what is delivered; where is it delivered; when is it delivered? In this type of arrangement, the IS team decides what it believes are the most important elements of the program to monitor. If a risk analysis was conducted, the results or recommendations from these reports should be used. A properly deployed defense-in-depth strategy will aid in the IS team's decision on the SLAs to negotiate. The key is to assure the IT teams that the IS program knows exactly what it wants and how it can be fairly measured. Each organization will have different threat vectors and will need to allocate resources in different manners. There are some additional guidelines on the governance of SLAs. They should be reviewed regularly and changed as needed. These agreements must also be somewhat flexible for both parties without interfering with the integrity of the process. There must be some consideration for dependencies when implementing system related SLAs. One example may include a maintenance contract which requires vendor notification or assistance prior to any changes made on a particular server or application. Finally, while having SLAs begins the codifying process, it will require on-going management from the IS and IT teams. Critical Areas for SLAs The information security (IS) team must define the critical areas for the establishment of SLAs with the Information Technology team. This must focus upon those areas which the IT teams have the most profound effect and control to significantly impact the IS program. Additionally, the IS team needs to limit SLAs to the most critical items, otherwise the effect of these agreements is lessened. A risk analysis or a review of previous incidents may point toward the most critical components for a particular organization. The following topical areas should be prevalent in most instances:

QUESTION 155 Which of the following is the MOST important security artifact that an outsourced software development team can provide to establish the quality of their software? A. The Security Requirements Traceability Matrix (SRTM) B. Details on the use of open source libraries C. The results of source code reviews D. Time spent on fixing security vulnerabilities

Correct Answer: A Explanation/Reference: A Security Requirements Traceability Matrix (SRTM) is a grid that supplies documentation and a straightforward presentation of the required elements for security of a system. It is vital to incorporate the best level of security in technical projects that require such. An SRTM can be used for any type of project. Requirements and tests can be easily tracked in relationship to one another. An SRTM assures accountability for all processes and completion of all work. An SRTM between security requirements and test activities has a grid comparable to an Excel spreadsheet. This spreadsheet contains a column for each of these items: Requirement identification number Description of the requirement Source of the requirement Objective of the test Verification method for the test Each row indicates a new requirement. An SRTM provides a simple way to review and compare the different requirements and tests appropriate for a specific security project.

QUESTION 133 How can a security analyst BEST provide application stakeholders with an attacker's view of an application? A. Review application test scripts B. Develop application abuse cases C. Discuss each business requirement D. Review the findings of a penetration test

Correct Answer: B Explanation/Reference: Abuse case is a specification model for security requirements used in the software development industry. The term Abuse Case is an adaptation of use case. The term was introduced by John McDermott and Chris Fox in 1999, while working at Computer Science Department of the James Madison University. As defined by its authors, an abuse case is a type of complete interaction between a system and one or more actors, where the results of the interaction are harmful to the system, one of the actors, or one of the stakeholders in the system. We cannot define completeness just in terms of coherent transactions between actors and the system. Instead, we must define abuse in terms of interactions that result in actual harm. A complete abuse case defines an interaction between an actor and the system that results in harm to a resource associated with one of the actors, one of the stakeholders, or the system itself. Their notation appears to be similar to Misuse cases, but there are differences reported by Chun Wei in Misuse Cases and Abuse Cases in Eliciting Security Requirements.

QUESTION 149 In case of an incident, what is the PRIMARY purpose of audit logs? A. Make the attacker accountable B. Reconstruct the sequence of events C. Recover data and application D. Discover security violation

Correct Answer: B Explanation/Reference: An audit trail (also called audit log) is a security-relevant chronological record, set of records, and/or destination and source of records that provide documentary evidence of the sequence of activities that have affected at any time a specific operation, procedure, or event. Audit records typically result from activities such as financial transactions, scientific research and health care data transactions, or communications by individual people, systems, accounts, or other entities. The process that creates an audit trail is typically required to always run in a privileged mode, so it can access and supervise all actions from all users; a normal user should not be allowed to stop/change it. Furthermore, for the same reason, trail file or database table with a trail should not be accessible to normal users. Another way of handling this issue is through the use of a role-based security model in the software. The software can operate with the closed-looped controls, or as a 'closed system', as required by many companies when using audit trail functionality.

QUESTION 108 What is the MAIN advantage to using declarative security over programmatic security within an application? A. Simplification of the business logic B. Flexibility in the application security design C. Ease of coding for application developers D. Separation of security controls for business logic

Correct Answer: B Explanation/Reference: Declarative vs. Programmatic Security. Security can be instantiated in two different ways in code: in the container itself or in the content of the container. Declarative programming is when programming specifies the what, but not the how, with respect to the tasks to be accomplished. An example is SQL, where the "what" is described and the SQL engine manages the "how." Thus, declarative security refers to defining security relations with respect to the container. Using a container-based approach to instantiating security creates a solution that is more flexible, with security rules that are configured as part of the deployment and not the code itself. Security is managed by the operational personnel, not the development team. Imperative programming, also called programmatic security, is the opposite case, where the security implementation is embedded into the code itself. This can enable a much greater granularity in the approach to security. This type of fine-grained security, under programmatic control, can be used to enforce complex business rules that would not be possible under an all-or-nothing container-based approach. This is an advantage for specific conditions, but it tends to make code less portable or reusable because of the specific business logic that is built into the program. The choice of declarative or imperative security functions, or even a mix of both, is a design-level decision. Once the system is designed with a particular methodology, then the secure development lifecycle (SDL) can build suitable protections based on the design. This is one of the elements that requires an early design decision, as many other elements are dependent upon it.
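To make the contrast concrete, the Java sketch below shows the same role check expressed both ways; it assumes the javax.annotation.security.RolesAllowed annotation is available on the classpath and enforced by the container, and the UserContext interface is a hypothetical stand-in for whatever supplies the caller's roles in a given framework.

import javax.annotation.security.RolesAllowed;

// Declarative security delegates the role check to the container via metadata,
// while programmatic security embeds the check in the code itself.
public class TransferService {

    // Declarative: the container enforces the role before the method runs,
    // so the business logic stays free of security plumbing and the rule can
    // be changed at deployment time.
    @RolesAllowed("ACCOUNT_MANAGER")
    public void approveTransfer(String transferId) {
        // business logic only
    }

    // Programmatic: the check is written into the method, allowing fine-grained
    // business rules at the cost of portability and reusability.
    public void approveTransferProgrammatically(UserContext user, String transferId) {
        if (!user.isInRole("ACCOUNT_MANAGER")) {
            throw new SecurityException("Caller lacks the required role");
        }
        // business logic
    }

    interface UserContext {
        boolean isInRole(String role);
    }
}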

QUESTION 193 Which of the following is best practice to avoid software security misconfiguration of a deployed system? A. Employ system configuration defaults B. Employ system configuration management C. Execute alpha and beta testing to detect abuse cases D. Ensure only authorized users can modify the system configuration

Correct Answer: B Explanation/Reference: Employing system configuration management would encompass ensuring that only authorized users can modify the system configuration.

QUESTION 147 Which of the following is the MOST beneficial to system maintenance personnel for detecting unauthorized changes to application executable files made by a file-level rootkit? A. List of all executable, library, and data files used by the application B. List of cryptographically generated hash values of all production code C. Effective change control processes for all release-to-production activities D. Recommendations for antivirus programs that do not interfere with the application

Correct Answer: B Explanation/Reference: File integrity monitoring is based on comparing cryptographic hashes of key system files against known-good baseline values; any change to an executable changes its hash.
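A minimal Java sketch of that comparison is shown below; the baseline value is assumed to have been recorded at release time (here Base64-encoded), and the file path and baseline string are placeholders.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Minimal sketch of a file integrity check: hash the deployed executable and
// compare it to a baseline hash recorded when the release was approved.
public class FileIntegrityCheck {

    public static boolean matchesBaseline(Path file, String baselineBase64)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] actual = digest.digest(Files.readAllBytes(file));
        byte[] expected = Base64.getDecoder().decode(baselineBase64);
        return MessageDigest.isEqual(expected, actual);   // any mismatch means the file changed
    }
}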

QUESTION 105 Which of the following is a technique that involves providing random data to the inputs of a program? A. Data sampling B. Fuzzing C. Input validation D. Obfuscation

Correct Answer: B Explanation/Reference: Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, e.g., in a file format or protocol and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" so that they are not directly rejected by the parser but are able to exercise interesting behaviors deeper in the program, and "invalid enough" so that they might stress different corner cases and expose errors in the parser. For the purpose of security, input that crosses a trust boundary is often the most interesting. For example, it is more important to fuzz code that handles the upload of a file by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user.
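The Java sketch below shows the idea in its most naive form, assuming a hypothetical parseRecord method stands in for the code under test: random byte strings are fed to the parser and any exception other than the expected validation error is reported as a potential defect. Real fuzzers are far more sophisticated and generate structure-aware, semi-valid inputs.

import java.util.Random;

// Minimal sketch of naive random fuzzing against a hypothetical parser.
public class NaiveFuzzer {

    public static void main(String[] args) {
        Random random = new Random();
        for (int i = 0; i < 10_000; i++) {
            byte[] input = new byte[random.nextInt(256)];
            random.nextBytes(input);                        // random, unexpected input
            try {
                parseRecord(input);                         // program under test
            } catch (IllegalArgumentException expected) {
                // cleanly rejected input is fine; the parser handled it
            } catch (RuntimeException crash) {
                System.err.println("Potential defect on input of length " + input.length);
                crash.printStackTrace();
            }
        }
    }

    // Hypothetical parser standing in for the real target.
    static void parseRecord(byte[] input) {
        if (input.length == 0) {
            throw new IllegalArgumentException("empty record");
        }
        // ... parsing logic under test ...
    }
}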

QUESTION 157 Which of the following BEST describes the purpose of a peer review? A. Refactoring code B. Questioning assumptions C. Improving productivity D. Removing redundancy

Correct Answer: B Explanation/Reference: In software development, peer review is a type of software review in which a work product (document, code, or other) is examined by its author and one or more colleagues, in order to evaluate its technical content and quality.

QUESTION 161 Which of the following is a potential risk of the software procurement process? A. Omitting patched vulnerabilities B. Focusing only on functional requirements C. Avoiding vendor lock-in D. Focusing only on data classification requirements

Correct Answer: B Explanation/Reference: In systems engineering and requirements engineering, a non-functional requirement (NFR) is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. They are contrasted with functional requirements that define specific behavior or functions. The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture, because they are usually Architecturally Significant Requirements. Broadly, functional requirements define what a system is supposed to do and non-functional requirements define how a system is supposed to be. Functional requirements are usually in the form of "system shall do <requirement>", an individual action or part of the system, perhaps explicitly in the sense of a mathematical function, or a black-box description of input, output, process, and control (the IPO model). In contrast, non-functional requirements are in the form of "system shall be <requirement>", an overall property of the system as a whole or of a particular aspect and not a specific function. The system's overall properties commonly mark the difference between whether the development project has succeeded or failed. Non-functional requirements are often called "quality attributes" of a system. Other terms for non-functional requirements are "qualities", "quality goals", "quality of service requirements", "constraints" and "non-behavioral requirements". Informally these are sometimes called the "ilities", from attributes like stability and portability. Qualities, that is, non-functional requirements, can be divided into two main categories: Execution qualities, such as security and usability, which are observable at run time; and Evolution qualities, such as testability, maintainability, extensibility and scalability, which are embodied in the static structure of the software system.

QUESTION 96 If security exceptions are needed in an application's design, which of the following is important? A. They are tested in a target security environment B. They follow a policy that specifies how each will be handled C. They contain a message that will identify the exception D. They are approved by the project team

Correct Answer: B Explanation/Reference: Security Requirements Validation - From the functionality perspective, the validation of security requirements is the main objective of security testing. From the risk management perspective, the validation of security requirements is the objective of information security assessments. At a high level, the main goal of information security assessments is the identification of gaps in security controls, such as lack of basic authentication, authorization, or encryption controls. More in depth, the security assessment objective is risk analysis, such as the identification of potential weaknesses in security controls that ensure the confidentiality, integrity, and availability of the data. For example, when the application deals with personal identifiable information (PII) and sensitive data, the security requirement to be validated is the compliance with the company information security policy requiring encryption of such data in transit and in storage. Assuming encryption is used to protect the data, encryption algorithms and key lengths need to comply with the organization encryption standards. These might require that only certain algorithms and key lengths could be used. For example, a security requirement that can be security tested is verifying that only allowed ciphers are used (e.g., SHA-256, RSA, AES) with allowed minimum key lengths (e.g., more than 128 bit for symmetric and more than 1024 for asymmetric encryption). From the security assessment perspective, security requirements can be validated at different phases of the SDLC by using different artifacts and testing methodologies. For example, threat modeling focuses on identifying security flaws during design, secure code analysis and reviews focus on identifying security issues in source code during development, and penetration testing focuses on identifying vulnerabilities in the application during testing or validation. Security issues that are identified early in the SDLC can be documented in a test plan so they can be validated later with security tests. By combining the results of different testing techniques, it is possible to derive better security test cases and increase the level of assurance of the security requirements. For example, distinguishing true vulnerabilities from the un-exploitable ones is possible when the results of penetration tests and source code analysis are combined. Considering the security test for a SQL injection vulnerability, for example, a black box test might first involve a scan of the application to fingerprint the vulnerability. The first evidence of a potential SQL injection vulnerability that can be validated is the generation of a SQL exception. A further validation of the SQL vulnerability might involve manually injecting attack vectors to modify the grammar of the SQL query for an information disclosure exploit. This might involve a lot of trial-and-error analysis until the malicious query is executed. Assuming the tester has the source code, she might learn from the source code analysis on how to construct the SQL attack vector that can exploit the vulnerability (e.g., execute a malicious query returning confidential data to unauthorized user). A use case is a methodology used in system analysis to identify, clarify, and organize system requirements. The use case is made up of a set of possible sequences of interactions between systems and users in a particular environment and related to a particular goal. 
It consists of a group of elements (for example, classes and interfaces) that can be used together in a way that will have an effect larger than the sum of the separate elements combined. The use case should contain all system activities that have significance to the users. A use case can be thought of as a collection of possible scenarios related to a particular goal, indeed, the use case and goal are sometimes considered to be synonymous.

QUESTION 110 Which of the following is the BEST method to protect the confidentiality of large amounts of data in transit? A. Asymmetric encryption B. Symmetric encryption C. Packet labeling D. One-way hash

Correct Answer: B Explanation/Reference: Symmetric encryption is used for bulk encryption because it is significantly faster than asymmetric encryption, making it the practical choice for protecting the confidentiality of large amounts of data in transit.
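As a minimal sketch, the Java example below encrypts a payload with AES in GCM mode using a freshly generated 256-bit key and a random 96-bit IV; in practice the symmetric key would be established out of band (for example via an asymmetric key exchange) and the IV would be transmitted alongside the ciphertext.

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Minimal sketch of symmetric bulk encryption with AES-GCM.
public class SymmetricEncryptionExample {

    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv)); // 128-bit auth tag
        return cipher.doFinal(plaintext);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);                                   // 256-bit symmetric key
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[12];                           // 96-bit IV recommended for GCM
        new SecureRandom().nextBytes(iv);

        byte[] ciphertext = encrypt(key, iv, "large payload in transit".getBytes(StandardCharsets.UTF_8));
        System.out.println("Ciphertext length: " + ciphertext.length + " bytes");
    }
}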

QUESTION 106 In Public Key Infrastructure (PKI), which of the following is an implicit requirement? A. Hierarchical trust topology established before a certificate can be revoked B. Certificate revoked by the same authority that has issued it C. Series of cryptographic signatures validated before a certificate is revoked D. Certificate revoked by a cross-certification grid

Correct Answer: B Explanation/Reference: The Certificate Authority (CA) has two responsibilities: it issues certificates, and it maintains their validity by publishing a Certificate Revocation List (CRL) and/or providing Online Certificate Status Protocol (OCSP) responses. Only the issuing CA can revoke a certificate it has issued.

QUESTION 159 The security principle of least privilege is also referred to as which of the following? A. Security role B. Need to know C. Access Control List (ACL) D. Modular development

Correct Answer: B Explanation/Reference: The term "need to know", when used by government and other organizations (particularly those related to the military or espionage), describes the restriction of data which is considered very sensitive. Under need-to-know restrictions, even if one has all the necessary official approvals (such as a security clearance) to access certain information, one would not be given access to such information, or read into a clandestine operation, unless one has a specific need to know; that is, access to the information must be necessary for one to conduct one's official duties. As with most security mechanisms, the aim is to make it difficult for unauthorized access to occur, without inconveniencing legitimate access. Need-to-know also aims to discourage "browsing" of sensitive material by limiting access to the smallest possible number of people. The Battle of Normandy in 1944 is an example of a need-to-know restriction. Though thousands of military personnel were involved in planning the invasion, only a small number of them knew the entire scope of the operation; the rest were only informed of data needed to complete a small part of the plan. The same is true of the Trinity project, the first test of a nuclear weapon in 1945. This term also includes anyone that the people with the knowledge deem necessary to share their knowledge with.

QUESTION 158 Threat modeling is used to analyze which of the following? A. Which Open System Interconnection (OSI) layer is at most risk B. Attacker's view of the system under threat C. Extent of damage by a potential threat D. Trust levels that can be established for different components of a system under threat

Correct Answer: B Explanation/Reference: Threat modeling is a design technique used to communicate information associated with a threat throughout the development team. The threat modeling effort begins at the start of the project and continues throughout the development effort. The purpose of threat modeling is to completely define and describe threats to the system under development, from the perspective of a potential attacker. In addition, information as to how each threat will be mitigated can be recorded. Communicating this material among all members of the development team enables everyone to be on the same page with respect to understanding and responding to threats to the system. The threat modeling process is designed around the activities performed as part of the software development process. Examining how data flows through the system provides insight into where threats can exist, and close attention can be paid to the points where data crosses trust boundaries. At each location, a series of threats is examined. Microsoft uses the mnemonic STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege) to denote the types of threats.

QUESTION 167 Which of the following MUST be provided for the RIA to meet the functional requirements? A. Hypertext Markup Language (HTML) interface B. Internet Protocol (IP) connection C. Offline caching of IP traffic D. Extensible Markup Language (XML) interface

Correct Answer: B Explanation/Reference: A rich Internet application (RIA; sometimes called an Installable Internet Application) is a Web application that has many of the characteristics of desktop application software, typically delivered by way of a site-specific browser, a browser plug-in, an independent sandbox, extensive use of JavaScript, or a virtual machine. Adobe Flash, JavaFX, and Microsoft Silverlight are currently the three most common platforms. Google Trends shows (as of September 2012) that frameworks based on a plug-in are in the process of being replaced by HTML5/JavaScript-based alternatives. Users generally need to install a software framework using the computer's operating system before launching the application, which typically downloads, updates, verifies and executes the RIA. This is the main differentiator from HTML5/JavaScript-based alternatives like Ajax that use built-in browser functionality to implement comparable interfaces. As can be seen on the List of rich Internet application frameworks, which includes even server-side frameworks, while some consider such interfaces to be RIAs, some consider them competitors to RIAs; and others, including Gartner, treat them as similar but separate technologies. RIAs dominate in browser-based gaming as well as applications that require access to video capture (with the notable exception of Gmail, which uses its own task-specific browser plug-in). Web standards such as HTML5 have progressed and the compliance of Web browsers with those standards has improved somewhat. Because an RIA is delivered and run over the Web, it requires an Internet Protocol (IP) connection to meet its functional requirements.

QUESTION 136 Which of the following is a PRIMARY component of information lifecycle management? A. Threat modeling B. Data classification C. Vulnerability assessment D. Developing requirements

Correct Answer: B Explanation/Reference: Information life cycle management (ILM) is a comprehensive approach to managing the flow of an information system's data and associated metadata from creation and initial storage to the time when it becomes obsolete and is deleted. Unlike earlier approaches to data storage management, ILM involves all aspects of dealing with data, starting with user practices, rather than just automating storage procedures, as for example, hierarchical storage management (HSM) does. Also in contrast to older systems, ILM enables more complex criteria for storage management than data age and frequency of access.

QUESTION 172 Software audit logs can only be correlated under which of the following conditions? A. The same version of the software is used on all systems B. All log entries are written following the same schema C. Time is synchronized on all logging systems D. Logs are digitally signed on all logging systems

Correct Answer: C Explanation/Reference: Record the time offset. So what time is it? Well, statistically speaking, you're probably in a different time zone than me. You're certainly watching this video at a different time than I recorded it. There's a difference there. And so when you're dealing with time, you have to be very precise and very specific about when you saw something, how you saw it, when a file was saved, when a file was accessed. One of the things we want to look at are the offsets of time. Now if this is the Windows operating system, for instance, Windows uses a 64-bit time stamp. And it's a time stamp that counts the number of 100-nanosecond intervals that have occurred since January 1 of the year 1601 at 00:00:00 GMT. Obviously, this means that it's going to stop working in about 58,000 years. So if anything, Microsoft was thinking ahead of the game here. You aren't going to run out of time any time in the next 58,000 years. Maybe by then, we'll have a different time stamp we can work with. In Unix, or Linux, we have a 32-bit time stamp to deal with. This counts the number of seconds that have occurred since January 1 of 1970 at 00:00:00 GMT. Now notice that this is a 32-bit time stamp, which means it's going to stop working soon; a signed 32-bit counter simply doesn't have that many seconds to work with. It's going to stop working on Tuesday, January 19 of 2038. The practical point for correlating audit logs is that every logging system must agree on time: clocks are synchronized (typically via the Network Time Protocol) and any remaining offsets are recorded so events from different systems can be lined up.
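As a small illustration (with an arbitrary epoch value), the Java sketch below converts a Unix-style epoch timestamp to an absolute UTC instant and renders it in two local zones, showing why correlation should always be done on the synchronized underlying instant rather than on local representations.

import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Minimal sketch of normalizing log timestamps to UTC for correlation.
public class LogTimeNormalization {

    public static void main(String[] args) {
        long unixSeconds = 1_700_000_000L;                     // as written by a Unix-style logger
        Instant instant = Instant.ofEpochSecond(unixSeconds);  // absolute point on the UTC timeline

        // The same instant rendered in two local zones; the local strings differ,
        // the underlying instant does not.
        ZonedDateTime newYork = instant.atZone(ZoneId.of("America/New_York"));
        ZonedDateTime vienna = instant.atZone(ZoneId.of("Europe/Vienna"));

        System.out.println("UTC instant : " + instant);
        System.out.println("New York    : " + newYork);
        System.out.println("Vienna      : " + vienna);
    }
}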

QUESTION 141 Which threat is mitigated by encrypting communication between a Network Attached Storage (NAS) device and other devices? A. Malware B. Client-side injection C. Man-in-the-Middle (MITM) D. Denial of Service (DoS)

Correct Answer: C Explanation/Reference: To avoid internal man-in-the-middle attacks you can set up an intrusion detection system (IDS). The IDS will basically monitor your network, and if someone tries to hijack traffic flow, it gives immediate alerts. However, the downside of an IDS is that it may raise false attack alerts many times, which leads users to disable the IDS. Tools which use the advanced Address Resolution Protocol (like XARP or ARPOn) and measures like implementing Dynamic Host Configuration Protocol (DHCP) snooping on switches can limit or prevent ARP spoofing. This in turn can help you prevent man-in-the-middle attacks. Another solution for preventing man-in-the-middle attacks is to use a virtual private network (VPN). The use of such encrypted tunnels creates additional secure layers when you access your company's confidential networks over links like Wi-Fi. Additionally, companies should have proper process auditing and monitoring in place so that they are aware of their staff's activities.

QUESTION 154 When seeding cryptographically strong keys, which of the following is the MOST effective? A. Secure Hashing Algorithm (SHA) B. Message Digest 5 (MD5) C. Secure Random Number Generator (RNG) D. Advanced Encryption Standard (AES)

Correct Answer: C Explanation/Reference: A cryptographically secure pseudo-random number generator (CSPRNG) or cryptographic pseudo-random number generator (CPRNG) is a pseudo-random number generator (PRNG) with properties that make it suitable for use in cryptography. Many aspects of cryptography require random numbers, for example key generation and nonces.
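A minimal Java sketch of seeding key material from a CSPRNG is shown below; it deliberately uses java.security.SecureRandom (here the platform's strongest instance) rather than java.util.Random, whose output is predictable and must never be used to seed or generate keys.

import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.util.Base64;

// Minimal sketch of generating key material with a cryptographically secure RNG.
public class KeySeedExample {

    public static void main(String[] args) throws NoSuchAlgorithmException {
        SecureRandom rng = SecureRandom.getInstanceStrong();   // strongest available CSPRNG
        byte[] keyMaterial = new byte[32];                     // 256 bits of key material
        rng.nextBytes(keyMaterial);
        System.out.println("Key material: " + Base64.getEncoder().encodeToString(keyMaterial));
    }
}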

QUESTION 195 When should updates to an application be implemented? A. During a low usage period B. After full regression testing C. During an approved change window D. After user acceptance testing

Correct Answer: C Explanation/Reference: A maintenance window is a defined period of time during which planned outages and changes to production (see definition below) services and systems may occur. The purpose of defining standard maintenance windows is to allow clients of the service to prepare for possible disruption or changes. Regression testing is a type of software testing which verifies that software which was previously developed and tested still performs the same way after it was changed or interfaced with other software. Changes may include software enhancements, patches, configuration changes, etc.

QUESTION 112 Which of the following is a reason for creating a Security Requirements Traceability Matrix (SRTM)? A. It identifies a privileged code section B. It substantiates a case for resource allocation C. It provides verification and validation method D. It helps developers understand risk

Correct Answer: C Explanation/Reference: A security requirements traceability matrix (SRTM) is a grid that allows documentation and easy viewing of what is required for a system's security. SRTMs are necessary in technical projects that call for security to be included. Traceability matrixes in general can be used for any type of project, and allow requirements and tests to be easily traced back to one another. The matrix is a way to make sure that there is accountability for all processes and is an effective way for a user to ensure that all work is being completed.

QUESTION 123 Why is a trusted path used when performing an evaluation using the Common Criteria (CC) methodology? A. To allow encrypted access to the Internet B. To allow only one path when accessing the Internet C. To allow communication with necessary confidence with the system D. To allow Single Sign-On (SSO) authentication with the system

Correct Answer: C Explanation/Reference: A trusted path or trusted channel is a mechanism that provides confidence that the user is communicating with what the user intended to communicate with, ensuring that attackers can't intercept or modify whatever information is being communicated. The term was initially introduced by the Orange Book. As a security architecture concept, it can be implemented with any technical safeguards suitable for a particular environment and risk profile.

QUESTION 146 Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), and system logs have which of the following in common? A. They all raise alerts in response to suspect activity B. They all provide mitigation against unauthorized access C. They all provide an audit record of activity D. They all report unauthorized access to a system

Correct Answer: C Explanation/Reference: All perform auditing. Logs do not report, alert, or mitigate.

QUESTION 143 Implementation of extended validation (EV) certificates for a website is a method of which of the following? A. Ensuring that the site is always available B. Allowing only certain users to view the site C. Informing the user that the site is not being spoofed D. Informing the customer that the contents of the site have not been tampered with

Correct Answer: C Explanation/Reference: An Extended Validation Certificate (EV) is a certificate used for HTTPS websites and software that proves the legal entity controlling the website or software package. Obtaining an EV certificate requires verification of the requesting entity's identity by a certificate authority (CA).

QUESTION 176 What is the BEST method of prioritizing the remediation of security vulnerabilities? A. Present a complete list of vulnerabilities to business leadership to allow them to assign priority B. Quantify, compare, and prioritize the amount of risk presented by each vulnerability C. Use the Annualized Loss Expectancy (ALE) for each vulnerability to allocate resources D. Prioritize based upon the amount of time needed to develop, test, and deploy a patch

Correct Answer: C Explanation/Reference: Annualized rate of occurrence (ARO) is described as an estimated frequency of the threat occurring in one year. ARO is used to calculate ALE (annualized loss expectancy). ALE is calculated as follows: ALE = SLE x ARO.
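As a worked example with hypothetical figures: if a vulnerability's single loss expectancy (SLE) is $20,000 and it is expected to be exploited once every four years (ARO = 0.25), then ALE = $20,000 x 0.25 = $5,000 per year. Computing the ALE for each vulnerability in this way lets remediation effort be quantified, compared, and prioritized against the annualized risk each one presents.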

QUESTION 178 Which of the following situations can undermine the security of an application, even though the security features are well documented? A. Administration of the security of an application, even though the security features are well documented? B. Application fails to start if there are unacceptable values in the configuration file C. Application relies on the administrator to change the default security configuration D. Default configuration produces verbose security event logs

Correct Answer: C Explanation/Reference: Applications should be secure by design, secure by default, and secure by deployment

QUESTION 132 Which of the following methodologies will BEST reduce the cost of implementing data protection? A. Data encryption B. Data handling C. Data classification D. Data anonymization

Correct Answer: C Explanation/Reference: Data should be protected based on its data classification. This aligns the cost of protection to the sensitivity of the data. A Definition of Data Classification Data classification is broadly defined as the process of organizing data by relevant categories so that it may be used and protected more efficiently. The classification process not only makes data easier to locate and retrieve - data classification is of particular importance when it comes to risk management, compliance, and data security. Data classification involves tagging data, which makes it easily searchable and trackable. It also eliminates multiple duplications of data, which can reduce storage and backup costs, as well as speed up the search process

QUESTION 152 The following pseudo code example represents what secure coding practice? If (user is in role) { // then allow this operation } else { // log that user was not allowed to invoke the operation } A. Hierarchical security B. Declarative security C. Programmatic security D. Functional security

Correct Answer: C Explanation/Reference: Declarative vs. Programmatic Security Security can be instantiated in two different ways in code: in the container itself or in the content of the container. Declarative programming is when programming specifies the what, but not the how, with respect to the tasks to be accomplished. An example is SQL, where the "what" is described and the SQL engine manages the "how." Thus, declarative security refers to defining security relations with respect to the container. Using a container-based approach to instantiating security creates a solution that is more flexible, with security rules that are configured as part of the deployment and not the code itself. Security is managed by the operational personnel, not the development team. Imperative programming, also called programmatic security, is the opposite case, where the security implementation is embedded into the code itself. This can enable a much greater granularity in the approach to security. This type of fine-grained security, under programmatic control, can be used to enforce complex business rules that would not be possible under an all-or-nothing container-based approach. This is an advantage for specific conditions, but it tends to make code less portable or reusable because of the specific business logic that is built into the program. The choice of declarative or imperative security functions, or even a mix of both, is a design-level decision. Once the system is designed with a particular methodology, then the secure development lifecycle (SDL) can build suitable protections based on the design. This is one of the elements that requires an early design decision, as many other elements are dependent upon it.

QUESTION 119 An organization wants to perform security testing on a program, but does not possess the source code. Which of the following is the BEST technique to determine the attack surface? A. Identify test cases using the requirements of the program B. Threat model to prioritize program components from highest risk to lowest risk C. Use system monitoring tools to list the files, network ports, and other system resources a program is using D. Search binary code for the use of system Application Programming Interfaces (API)

Correct Answer: C Explanation/Reference: Dynamic testing (or dynamic analysis) is a term used in software engineering to describe the testing of the dynamic behavior of code. That is, dynamic analysis refers to the examination of the physical response from the system to variables that are not constant and change with time.

QUESTION 126 When analyzing requirements for a cloud deployed mobile travel application, which of the following data privacy concerns has the widest legal scope? A. Data retention B. Data ownership C. Data jurisdiction D. Electronic discovery

Correct Answer: C Explanation/Reference: Data jurisdiction determines which laws apply to the data, giving it the widest legal scope of the options. The European Union's Data Protection Directive is an example: it sets comparatively strict privacy protections for EU citizens and prohibits European firms from transferring personal data to overseas jurisdictions with weaker privacy laws. A later decision created exceptions where foreign recipients of the data voluntarily agreed to meet EU standards under the International Safe Harbor Privacy Principles. In October 2015, following a ruling by the Court of Justice of the European Union, the safe harbor agreement between the EU and US was declared invalid on the grounds that the US was not supplying an adequate level of protection against surveillance for data transferred there.

QUESTION 199 Running code in an isolated account with minimal rights and restricted access to system resources is an example of what practice? A. Input sanitization B. Canonicalization C. Sandboxing D. Whitelisting

Correct Answer: C Explanation/Reference: In computer security, a sandbox is a security mechanism for separating running programs. It is often used to execute untested or untrusted programs or code, possibly from unverified or untrusted third parties, suppliers, users or websites, without risking harm to the host machine or operating system.

QUESTION 145 Which of the following is the BEST technique for verifying data before importing it into a database? A. Perform normalization and type checking B. Perform normalization and length checking C. Perform format validation and type checking D. Perform format validation and length checking

Correct Answer: C Explanation/Reference: Input validation should be applied on both syntactical and semantic level. Syntactic validation should enforce correct syntax of structured fields (e.g. SSN, date, currency symbol) while semantic validation should enforce correctness of their values in the specific business context (e.g. start date is before end date, price is within expected range). It is always recommended to prevent attacks as early as possible in the processing of the user's (attacker's) request. Input validation can be used to detect unauthorized input before it is processed by the application. Input validation can be implemented using any programming technique that allows effective enforcement of syntactic and semantic correctness, for example: Data type validators available natively in web application frameworks (such as Django Validators, Apache Commons Validators etc) Validation against JSON Schema and XML Schema (XSD) for input in these formats Type conversion (e.g. Integer.parseInt() in Java, int() in Python) with strict exception handling Minimum and maximum value range check for numerical parameters and dates, minimum and maximum length check for strings Array of allowed values for small sets of string parameters (e.g. days of week) Regular expressions for any other structured data covering the whole input string (^...$) and not using "any character" wildcard (such as "." or "\S")
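A minimal sketch of the syntactic and semantic checks listed above (the field names, ranges, and allow-list values are illustrative): strict type conversion, range and length checks, an allow-list for a small value set, an anchored regular expression, and one semantic rule.

import java.time.LocalDate;
import java.util.Set;
import java.util.regex.Pattern;

final class OrderInputValidator {
    // Anchored pattern covering the whole string; no "any character" wildcard.
    private static final Pattern POSTAL_CODE = Pattern.compile("^[0-9]{5}$");
    private static final Set<String> ALLOWED_DAYS =
            Set.of("MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN");

    static int parseQuantity(String raw) {
        int qty;
        try {
            qty = Integer.parseInt(raw);             // type conversion with strict exception handling
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("quantity is not a number");
        }
        if (qty < 1 || qty > 1000) {                 // minimum/maximum value range check
            throw new IllegalArgumentException("quantity out of range");
        }
        return qty;
    }

    static String parsePostalCode(String raw) {
        if (raw == null || raw.length() != 5 || !POSTAL_CODE.matcher(raw).matches()) {
            throw new IllegalArgumentException("invalid postal code");   // length and format checks
        }
        return raw;
    }

    static String parseDeliveryDay(String raw) {
        if (!ALLOWED_DAYS.contains(raw)) {           // allow-list for a small set of string values
            throw new IllegalArgumentException("invalid delivery day");
        }
        return raw;
    }

    static void checkDates(LocalDate start, LocalDate end) {
        if (!start.isBefore(end)) {                  // semantic check: start date is before end date
            throw new IllegalArgumentException("start date must be before end date");
        }
    }
}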

QUESTION 151 Which of the following is the MAIN objective of a post-mortem in incident response? A. Limit legal liability B. Mitigate current vulnerability C. Prevent similar future errors D. Provide answers to all problem reports

Correct Answer: C Explanation/Reference: Lessons learned or lessons learnt are experiences distilled from a project that should be actively taken into account in future projects. There are several definitions of the concept. The one used by the National Aeronautics and Space Administration, European Space Agency and Japan Aerospace Exploration Agency sounds as follows: "A lesson learned is knowledge or understanding gained by experience. The experience may be positive, as in a successful test or mission, or negative, as in a mishap or failure...A lesson must be significant in that it has a real or assumed impact on operations; valid in that is factually and technically correct; and applicable in that it identifies a specific design, process, or decision that reduces or eliminates the potential for failures and mishaps, or reinforces a positive result." The Development Assistance Committee of the Organisation for Economic Co-operation and Development defines lessons learned as "Generalizations based on evaluation experiences with projects, programs, or policies that abstract from the specific circumstances to broader situations. Frequently, lessons highlight strengths or weaknesses in preparation, design, and implementation that affect performance, outcome, and impact." In the practice of the United Nations the concept has been made explicit in the name of their Working Group on Lessons Learned of the Peacebuilding Commission. In the military field, conducting a Lessons learned analysis requires a leader-led after-actions debriefing. These debriefings require the leader to extend the lessons- learned orientation of the standard after-action review. He uses the event reconstruction approach or has the individuals present their own roles and perceptions of the event, whichever best fits the situation and time available.

QUESTION 180 Which of the following MUST be carried out to allow management to formally accept a residual risk prior to production deployment of a new application? A. The residual risk must be eliminated from the technical risk register B. All efforts to resolve the risk must be exhausted C. Management must be fully informed of the risk D. The recovery plan must be created

Correct Answer: C Explanation/Reference: Management makes the final decision on risk acceptance. Risk Acceptance (optional process): Acceptance of residual risks that result from Risk Treatment has to take place at the level of the executive management of the organization (see definitions in Risk Management Process). To this extent, Risk Acceptance concerns the communication of residual risks to the decision makers. Once accepted, residual risks are considered as risks that the management of the organization knowingly takes. The level and extent of accepted risks comprise one of the major parameters of the Risk Management process. In other words, the higher the accepted residual risks, the less the work involved in managing risks (and inversely). This does not mean, however, that once accepted the risks will not change in forthcoming repetitions of the Risk Management life-cycle. Within the recurring phases and activities of the Risk Management processes (and in particular Risk Treatment as well as Monitor and Review), the severity of these risks will be measured over time. In the event that new assertions are made or changing technical conditions are identified, risks that have been accepted need to be reconsidered. Risk Acceptance is considered an optional process, positioned between Risk Treatment and Risk Communication. It is seen as optional because it can be covered by both the Risk Treatment and Risk Communication processes; this can be achieved by communicating the outcome of Risk Treatment to the management of the organization. One reason for explicitly mentioning Risk Acceptance is the need to draw management's attention to this issue, which would otherwise merely be a communicative activity.

QUESTION 142 When implementing cryptographic agility, it is important to ensure which of the following? A. Cryptography algorithm in use is efficient B. Memory is dynamically allocated for encryption operations C. Strong access control policy is in place for the configuration store D. Systems silently fall back to the cryptography algorithm previously used

Correct Answer: C Explanation/Reference: Part of securing an application involves ensuring that highly sensitive information is not stored in a readable or easily decodable format. Examples of sensitive information include user names, passwords, connection strings, and encryption keys. Storing sensitive information in a non-readable format improves the security of your application by making it difficult for an attacker to gain access to the sensitive information, even if an attacker gains access to the file, database, or other storage location. Cryptographic agility. The common failure in all of these cases is cryptographic agility. Agility can be described as three properties, in order of priority: Users can choose which cryptographic primitives (such as encryption algorithm, hash-function, key-exchange scheme etc.) are used in a given protocol Users can replace the implementation of an existing cryptographic primitive, with one of their choosing Users can set system-wide policy on choice of default algorithms It is much easier to point out failures of cryptographic agility than success stories. This is all the more surprising because it is a relatively well-defined problem, unlike software extensibility. Instead of having to worry about all possible ways that a piece of code may be called on to perform a function not envisioned by its authors, all of the problems above involve swapping out interchangeable components. Yet flawed protocol design, implementation laziness and sometimes plain bad luck have often conspired to frustrate that. Future blog posts will take up this question of why it has proved so difficult to get past beyond MD5 and SHA1, and how more subtle types of agility-failure continue to plague even modern, greenfield projects such as Bitcoin wallets.
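One aspect of agility can be illustrated with a minimal sketch (the class and property names are illustrative) in which the algorithm is read from a protected configuration store rather than hard-coded, and the code fails closed instead of silently falling back to a previously used algorithm:

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Properties;

final class CryptoConfig {
    private final Properties props;   // loaded from a configuration store protected by strong access control
    CryptoConfig(Properties props) { this.props = props; }

    MessageDigest newDigest() {
        String algorithm = props.getProperty("hash.algorithm", "SHA-256");
        try {
            return MessageDigest.getInstance(algorithm);   // primitive chosen by configuration, not by code
        } catch (NoSuchAlgorithmException e) {
            // Fail closed: do not silently fall back to the algorithm previously used.
            throw new IllegalStateException("Configured hash algorithm unavailable: " + algorithm, e);
        }
    }
}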

QUESTION 160 A software security professional discovers a blind Structured Query Language (SQL) injection vulnerability and develops a programmatic approach for addressing the risk. Which of the following is being performed? A. A plan developed to assist the organization in preventing an attack B. A key activity for addressing database vulnerabilities C. A mitigation plan developed to address the discovered vulnerability D. Development of new code to detect the consequence of the attack

Correct Answer: C Explanation/Reference: Risk mitigation planning is the process of developing options and actions to enhance opportunities and reduce threats to project objectives. Risk mitigation implementation is the process of executing risk mitigation actions. Risk mitigation progress monitoring includes tracking identified risks, identifying new risks, and evaluating risk process effectiveness throughout the project. Risk management means risk containment and mitigation. First, you've got to identify and plan. Then be ready to act when a risk arises, drawing upon the experience and knowledge of the entire team to minimize the impact to the project. Risk management includes the following tasks: Identify risks and their triggers Classify and prioritize all risks Craft a plan that links each risk to a mitigation Monitor for risk triggers during the project Implement the mitigating action if any risk materializes Communicate risk status throughout project
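As one common example of such a programmatic mitigation for SQL injection, including the blind variant, string-built queries can be replaced with parameterized queries. A minimal JDBC sketch with illustrative table and column names:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

final class AccountDao {
    // Vulnerable pattern (do not use):
    //   "SELECT 1 FROM accounts WHERE username = '" + userInput + "'"
    // Parameterized query: the driver treats userInput strictly as data, never as SQL.
    boolean accountExists(Connection conn, String userInput) throws SQLException {
        String sql = "SELECT 1 FROM accounts WHERE username = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, userInput);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}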

QUESTION 130 Management users can log in with two-factor authentication to see certain parts of the records for the staff on their team, while Human Resources (HR) users can log in with their user names and passwords to see all parts of all staff records. The decision is made to require two-factor authentication for HR users and to limit the records they can see. In this situation, the organization is applying which of the following secure design principles? A. Preserving integrity B. Economy of mechanism C. Securing the weakest link D. Leveraging existing components

Correct Answer: C Explanation/Reference: Securing the Weakest Link: Insiders No longer is a hoodie-wearing malicious hacker the most obvious perpetrator of an inside cyber attack. Massive, high-profile security breaches dominate today's headlines and consumers are swamped with notifications from organizations entrusted with private and sensitive data. But, increasingly, I am convinced that security professionals and the majority of security vendors are too focused on the wrong things. To many, it seems like the hoodie-wearing malicious hacker is the obvious enemy. We imagine that he (or she) has been waiting for the perfect opportunity to launch that magical zero-day exploit s/he's been sitting on, just waiting for the perfect moment to strike. While this type of attack can happen, it isn't the most common form of an attack that results in a breach; nor is it the biggest risk to your organization. Let's look at what defines an "insider." An insider is any individual who has authorized access to corporate networks, systems or data. This may include employees, contractors, business partners, auditors or other personnel with a valid reason to access these systems. Since we are increasingly operating in a connected fashion, businesses are more susceptible to insider threats than ever before. The volume of critical data in organizations is exploding, causing more information to be available to more staff. While this can boost productivity, it comes with inherent risks that need to be considered and mitigated, lest that privileged access be used against the organization. Mitigating risk is all about identifying weak points in the security program. The weakest point in any security program is people; namely, the insider. Insider threats can be malicious; but more commonly, they are accidental. Insiders can have ill intent, they can also be manipulated or exploited, or they can simply make a mistake and email a spreadsheet full of client information to the wrong email address. They can lose laptops or mobile devices with confidential data, or misplace backup tapes. These types of incidents are real and happen every day. They can lead to disastrous results on par with any major, external cyberattack. Traditionally, these threats are overlooked by most businesses because they are more concerned with the unknown malicious actor than the known staff member or business partner. Organizations are sometimes reluctant to take the steps necessary to mitigate these threats and share important data through a trusted relationship. They put little to no emphasis on implementing security controls for insiders. Those of you who believe that you can count on employees as a line of defense in the organization, think again. A recent SailPoint Technologies survey found that 27 percent of U.S. office workers at large companies would sell their work password to an outsider for as little as $1001. Many years ago, (in a 2004 BBC News article) users were willing to trade passwords for chocolate bars. With employee engagement levels as low as 30 percent in some organizations, asking employees to be a part of the solution may be asking too much. Given the current insider situation, attackers need not resort to elaborate attack methods to achieve their objectives. A 2016 Balabit survey indicates that the top two attacker techniques are social engineering (e.g., phishing) and compromised accounts from weak passwords. There are a number of ways that insiders can cause damage. 
In some cases, they are coerced by an outsider to extract data. This is common when organized crime is involved. In other cases, legitimate user access is used to extract data, but the user's real credentials have been compromised and don't trigger security alerts focused on malware, compliance policies and account-brute-force attacks. The good news is that organizations can do more now than ever before. Providers are responding with solutions that monitor email traffic, Web usage, network traffic and behavior-based pattern recognition to help detect who in the organization is trustworthy and who may be a risk. If a staff accountant is in the process of exporting customer data at 3 a.m., this behavior is flagged as anomalous and alerts security staff to a potential compromise. The employee that starts logging in later, leaving earlier and sending fewer emails to his manager may be disengaged or even disgruntled; and worth keeping an eye on. Although this is a murky area, HR can be a security advocate, identifying employees with discipline issues whom could fit a risk profile. While this may be a little "big brother" sounding in nature, some organizations may find this to be an appropriate way to mitigate the risks that come from insiders. Organizations without big security budgets still have some old-school mitigations available to them such as employee awareness programs, employee background and reference checks, and exit interviews to gather information about attitudes toward the company and insights into working conditions. The clear lesson here is that organizations must look past the perimeter and know what is happening inside the network, in addition to what is happening outside. The most likely enemy won't fit the stereotype: beware that the threat could very well come from within.

QUESTION 125 Which of the following BEST captures secure software design requirements? A. Formal risk assessments B. System architecture C. Use/misuse cases D. Policy

Correct Answer: C Explanation/Reference: Security is often an afterthought during software development. Realizing security early, especially in the requirement phase, is important so that security problems can be tackled early enough before going further in the process and avoid rework. A more effective approach for security requirement engineering is needed to provide a more systematic way for eliciting adequate security requirements. This paper proposes a methodology for security requirement elicitation based on problem frames. The methodology aims at early integration of security with software development. The main goal of the methodology is to assist developers elicit adequate security requirements in a more systematic way during the requirement engineering process. A security catalog, based on the problem frames, is constructed in order to help identifying security requirements with the aid of previous security knowledge. Abuse frames are used to model threats while security problem frames are used to model security requirements. We have made use of evaluation criteria to evaluate the resulting security requirements concentrating on conflicts identification among requirements. We have shown that more complete security requirements can be elicited by such methodology in addition to the assistance offered to developers to elicit security requirements in a more systematic way. Also see https:// www.synopsys.com/blogs/software-security/principles-secure-software-design/

QUESTION 107 The PRIMARY advantage of using virtualized environments for data with multiple classifications is that they can be which of the following? A. Partitioned for resiliency B. Mirrored for redundancy C. Separated by sensitivity D. Separated by function

Correct Answer: C Explanation/Reference: Storage segmentation is used to separate data by sensitivity. Storage segmentation includes vSANs and containerization.

QUESTION 150 Minimizing the number of open sockets (Transmission Control Protocol (TCP) and User Datagram Protocol (UDP)) on a server is done to reduce which of the following? A. Smurf attacks B. Denial of Service (DoS) attacks C. The attack surface under threat D. Man-in-the-Middle (MITM) threats

Correct Answer: C Explanation/Reference: The attack surface of a software environment is the sum of the different points (the "attack vectors") where an unauthorized user (the "attacker") can try to enter data to or extract data from an environment. Keeping the attack surface as small as possible is a basic security measure. Examples of attack vectors include user input fields, protocols, interfaces, and services.

QUESTION 138 When is data obfuscation required? A. When backing up files that will be stored offsite B. When moving test data into production C. When production data must be copied into a test environment for testing D. When performing security testing on variable length string values

Correct Answer: C Explanation/Reference: The data masking technology pioneered by Camouflage enables enterprises to replace sensitive data, such as personally identifiable information, embedded in business processes with fully functioning fictional data that conforms to the logic embedded in the application. For example, if a postal code is replaced with fictional data, that data must conform to the look-up validation in the field and operate as expected. Masked data is useful for business analysis, development and test, cloud adoption and other purposes. It also helps minimize the number of places actual sensitive data is stored reducing overall exposure to cyber threats. Imperva Camouflage Data Masking not only protects against data theft, but also serves as a compliance solution for customers subject to regulations governing data privacy and the secure transfer of data such as the General Data Protection Regulation (GDPR), the Payment Card Industry (PCI) Data Security Standards and the Health Insurance Portability and Accountability Act (HIPAA).
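A minimal sketch of format-preserving masking (purely illustrative; it does not reflect how any particular product works), where digits and letters are replaced with random values of the same type so the masked value still has the structure downstream checks expect:

import java.security.SecureRandom;

final class SimpleMasker {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Replace every digit with a random digit and every letter with a random letter
    // of the same case, preserving length and format for downstream validation.
    static String mask(String value) {
        StringBuilder out = new StringBuilder(value.length());
        for (char c : value.toCharArray()) {
            if (Character.isDigit(c)) {
                out.append((char) ('0' + RANDOM.nextInt(10)));
            } else if (Character.isUpperCase(c)) {
                out.append((char) ('A' + RANDOM.nextInt(26)));
            } else if (Character.isLowerCase(c)) {
                out.append((char) ('a' + RANDOM.nextInt(26)));
            } else {
                out.append(c);   // keep separators such as '-' or ' '
            }
        }
        return out.toString();
    }
}

For example, mask("K1A 0B1") might return "X7Q 3Z9": the fictional value has the same structure as the original postal code, so format checks on the field still behave as expected (a real masking product would also need to satisfy any look-up validation).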

QUESTION 79 System designers contribute which of the following to a backup and recovery strategy? A. Build environment and developer tool chain B. Data schema and capacity estimates C. Data flow diagrams D. Requirements for availability

Correct Answer: D Explanation/Reference: 10 Components of a Successful Backup and Recovery Strategy - Backing up your organizations data sounds like a fairly simple thing to do. However, it's often not until something can't be retrieved that it's discovered that the backup strategy was badly designed to the organizations goals, or has been poorly implemented. With organizations increasingly relying on their data and IT systems to operate on and service their clients, this is a level of risk that should be reviewed and minimized. The ten components below should be kept in mind and approached proactively to ensure that you can restore that file you need when you need it. Classify your data - When considering a backup strategy, it's useful to remember that not all data has the same value to you or your company. Losing the company picnic photos or an employee's music collection versus a database powering your main Oracle ERP system are completely different things and would have equally different impacts on your business, and clients, if the data was lost. To have the most efficient back-up, classify the data into different groups, and treat them differently from a backup standpoint. One classification may even mean that the data is not backed up at all. Understand your data - Once your data has been classified, it is time to establish a recovery point objective (RPO) and recovery time objective (RTO) for each data class. This will determine the frequency of which backups are conducted along with the extent and method of backup required. The answers to other questions such as what level of security needs to protect the data, how often a restore is likely to be conducted and how long the data needs to be retained for, will also impact the selection of the solution to be put in place. Don't forget about mobile devices - with more and more enterprise users utilizing mobile devices as their main device for conducting business, protecting the data held on those devices- from contacts and emails through to spreadsheets, documents, photos and personalized device settings- becomes more and more critical to business operations. No longer does the loss or failure of a device simply mean that only a couple of phone numbers have to be reentered into the new phone; critical data can be lost! Choose the backup strategy and method - different data sets may require different solutions in order to optimally meet the goals of the business and its data sets. Tiered recovery known as Backup Lifecycle Management or BLM is the most cost effective approach to storing data today. In most companies, more than 50% of data is older, of less value, and should cost less to protect. By setting up the correct strategy, you can align the age of your data to the cost of protecting it. Assign a responsible party for ensuring successful backups - just because a backup strategy has been implemented, doesn't mean it's going to run successfully every time. Contentions, lack of storage media and timing issues can occur and need to be dealt with in a timely fashion to make sure the data is protected. To ensure this happens it's recommended that responsibility is clearly assigned with making this happen. Secure your data - The backup data whether that's held on tape, disk or in the cloud offsite, should be protected both physically and logically from those who do not need access to it. 
Even for those that are actively managing the backup process, it is often not necessary for them to work with the data in its raw form, but rather they can manage the process with the data encrypted, adding another level of security and privacy. Although the best practice is to encrypt the data while it is in flight and at rest on the backup media, it is often not put in place due to the extra time required to encrypt the data during the backup process and as such increasing the backup window required. Ideally, look for 256 bit AES encryption. Conduct test restores - it's always more preferable to find an issue proactively than reactively when there is no hope of restoration of data or time constraints are in place. So conduct periodic test restores of data to ensure that the process is working as planned. Keep track of your backups and document - documentation is key for control of the process and security around your data. Ensure that the backup process, methods, goals and ongoing operational status are documented both for internal purposes as well as to comply with any 3rd party audit requirements such as FDIC, HIPAA or PCI. Destroy backup media appropriately - whether you are handling HIPAA, Credit Card or "regular' corporate data, it is a best practice to ensure that you are sufficiently keeping track of the backup media through it's useful life all the way through to physical destruction. As common practice and to minimize any data leakage, this life cycle should be documented both from a process perspective and an actual traceable activity standpoint. Things change, review your strategy and implementation on a regular basis - if there is one thing in business and technology that holds true, it is that change is constant. With this in mind, a regular review of what is being backed up (or not) and verification that the business requirements around the data are consistent should be conducted. In a world where the volume of data to be backed up is increasing substantially year over year and the complexity of the IT systems is not decreasing, proactive planning, management and execution is becoming ever more important. I hope these ten components to having a successful backup and recovery implementation helps guide you towards tackling that challenge head on.
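The "secure your data" point above recommends 256-bit AES encryption of backup data; a minimal sketch follows (class and method names are illustrative, and key management, usually the hard part in practice, is omitted):

import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

final class BackupEncryptor {
    static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);                                    // 256-bit AES key
        return kg.generateKey();
    }

    static byte[] encrypt(byte[] backupData, SecretKey key) throws Exception {
        byte[] iv = new byte[12];                        // fresh 96-bit nonce for each backup
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(backupData);  // includes the authentication tag
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);      // prepend the IV so it is available at restore time
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}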

QUESTION 182 Which of the following is the PRIMARY security benefit of reusing the code of a Commercial Off-The-Shelf (COTS) product? A. Compliance can consistently be achieved B. Ongoing maintenance will be less expensive over time C. Development cycle will be shorter than if developed in-house D. Code has greater exposure to testing, and its defects have been addressed

Correct Answer: D Explanation/Reference: COTS, MOTS, GOTS, and NOTS are abbreviations that describe pre-packaged software or (less commonly) hardware purchase alternatives. A COTS (commercial off-the-shelf) product is one that is used "as-is." COTS products are designed to be easily installed and to interoperate with existing system components. Almost all software bought by the average computer user fits into the COTS category: operating systems, office product suites, word processing, and e- mail programs are among the myriad examples. One of the major advantages of COTS software, which is mass-produced, is its relatively low cost. A MOTS (either modified or modifiable off-the-shelf, or military off-the-shelf, depending on the context) product is typically a COTS product whose source code can be modified. The product may be customized by the purchaser, by the vendor, or by another party to meet the requirements of the customer. In the military context, MOTS refers to an off-the-shelf product that is developed or customized by a commercial vendor to respond to specific military requirements. Because a MOTS product is adapted for a specific purpose, it can be purchased and used immediately. However, since MOTS software specifications are written by external sources, government agencies are sometimes leery of these products, because they fear that future changes to the product will not be in their control. A GOTS (government off-the-shelf) product is typically developed by the technical staff of the government agency for which it is created. It is sometimes developed by an external entity, but with funding and specification from the agency. Because agencies can directly control all aspects of GOTS products, these are generally preferred for government purposes. A NOTS (NATO off-the-shelf or niche off-the-shelf, depending on the context) product is developed by NC3A (for NATO Consultation, Command, and Control) to meet specific requirements for NATO. In the more general context, niche off-the-shelf refers to vendor-developed software that is for a specialized and narrow market segment, in comparison to the broad market for COTS products.

QUESTION 156 Which type of attack does a hash help mitigate? A. Elevation of Privilege (EoP) B. Information disclosure C. Spoofing D. Tampering

Correct Answer: D Explanation/Reference: Detecting and preventing file tampering: With each new legal case, office personnel must sort through a wide array of electronic evidence, including discovery from external sources such as opposing counsel. These documents are shared with the courts, experts and several other parties. How can you ensure that everyone is working with the exact same set of facts? How can you determine if any of these files were altered prior to arriving in your care? Electronic files are typically shared by disc or download link using the honor system. Once these files have been dispersed, anyone can modify the contents to fit their narrative, and then distribute that version as an official exhibit. Such changes can be difficult or impossible to trace to their source due to the number of people with access to the files. To prevent this, you can identify each file by its unique hash value and then use that identifier to ensure file integrity once file sharing has begun. A hash value is an electronic fingerprint constructed solely from the file's contents and structure. The most common hash is the fifth generation of the Message Digest algorithm, commonly known as MD5. There are dozens of free programs to calculate MD5 values and, regardless of the program used, the resulting MD5 value will always match for exact copies of the same file. Two easy-to-use free MD5 programs are Digestit and Checksum. They support drag-and-drop ease of use and require less time than reading this paragraph. Other offerings, like Microsoft's File Checksum Integrity Verifier, are also free, but can be cumbersome to use.
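A minimal sketch of computing such a fingerprint in Java (HexFormat requires Java 17 or later); MD5 is shown because the text above refers to it, but a modern digest such as SHA-256 is preferable and requires only a different algorithm name:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

final class FileFingerprint {
    static String digest(Path file, String algorithm) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);   // "MD5" or "SHA-256"
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                md.update(buffer, 0, read);                        // hash the file in chunks
            }
        }
        return HexFormat.of().formatHex(md.digest());              // hexadecimal fingerprint
    }
}

Exact copies of a file will always produce the same fingerprint; any alteration, however small, produces a different one.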

QUESTION 120 When building fault injection tests, random injection is chosen over targeted attacks when A. time is constrained B. system resources are constrained C. rainbow tables are available D. there is access to a fuzzer

Correct Answer: D Explanation/Reference: Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, or failing built-in code assertions or for finding potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, e.g., in a file format or protocol and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" so that they are not directly rejected from the parser but able to exercise interesting behaviors deeper in the program and "invalid enough" so that they might stress different corner cases and expose errors in the parser. For the purpose of security, input that crosses a trust boundary is often the most interesting. For example, it is more important to fuzz code that handles the upload of a file by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user.
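A minimal sketch of random (non-targeted) fuzz injection, where parseUnderTest is an illustrative stand-in for whatever routine consumes untrusted input: random byte buffers are generated and unexpected exceptions are reported as potential defects.

import java.security.SecureRandom;

final class RandomFuzzer {
    public static void main(String[] args) {
        SecureRandom random = new SecureRandom();
        for (int i = 0; i < 100_000; i++) {
            byte[] input = new byte[random.nextInt(1024)];
            random.nextBytes(input);                         // random, not targeted, test data
            try {
                parseUnderTest(input);
            } catch (IllegalArgumentException expected) {
                // cleanly rejecting malformed input is acceptable behavior
            } catch (RuntimeException crash) {
                System.err.println("Potential defect on iteration " + i + ": " + crash);
            }
        }
    }

    // Illustrative placeholder for the code that handles untrusted input.
    static void parseUnderTest(byte[] input) {
        if (input.length == 0) throw new IllegalArgumentException("empty input");
        // ... real parsing logic would go here ...
    }
}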

QUESTION 192 An application that will be used to gather customer information is in the requirements phase. The ability of the application to do which of the following is the MOST important consideration for the requirements? A. Give or withhold consent when its private data is stored or used B. Archive data based upon organizational policy C. Share information with other applications D. Comply with applicable laws and organizational policy

Correct Answer: D Explanation/Reference: Note: The information below only applies to acts and practices that occured prior to 12 March 2014. The NPPs were replaced by the Australian Privacy Principles (APPs) on 12 March 2014. More information on the APPs can be found on the APPs page. The ten National Privacy Principles (NPPs) contained in schedule 3 of the Privacy Act 1988 (Privacy Act) regulate how large businesses, all health service providers and some small businesses and non-government organisations handle individuals' personal information. The NPPs cover the collection, use, disclosure and storage of personal information. They also allow individuals to access that information and have it corrected if it is wrong. The NPPs generally apply to private sector organisations with an annual turnover of $3 million or more. In addition, in some instances the NPPs will apply to private sector organisations with an annual turnover of less than $3 million. More information is available on the Privacy Topics — Business page. Below is a summary of the NPPs. For more detail, see the full text of the NPPs. Additional information and guidance on the interpretation of the NPPs can be found in the Guidelines to the National Privacy Principles. NPP 1: collection Describes what an organisation should do when collecting personal information, including what they can collect, collecting from third parties and, generally, what they should tell individuals about the collection. NPP 2: use and disclosure Outlines how organisations may use and disclose individuals' personal information. If certain conditions are met, an organisation does not always need an individual's consent to use and disclose personal information. There are also rules about direct marketing. NPPs 3-4: information quality and security An organisation must take steps to ensure the personal information it holds is accurate and up-to-date, and is kept secure from unauthorised use or access. NPP 5: openness An organisation must have a policy on how it manages personal information, and make it available to anyone who asks for it. NPP 6: access and correction Gives individuals a general right of access to their personal information, and the right to have that information corrected if it is inaccurate, incomplete or out-of-date. NPP 7: identifiers Generally prevents an organisation from adopting an Australian Government identifier for an individual (eg Medicare numbers) as its own. NPP 8: anonymity Where possible, organisations must give individuals the opportunity to do business with them without the individual having to identify themselves. NPP 9: transborder data flows Outlines how organisations should protect personal information that they transfer outside Australia. NPP 10: sensitive information Sensitive information includes information relating to health, racial or ethnic background, or criminal records. Higher standards apply to the handling of sensitive information

QUESTION 166 Refer to the information below to answer the question: A software developer works for an organization that is planning to change its environment to accommodate the following: Service Oriented Architecture (SOA), use of Rich Internet Applications (RIA), and pervasive computing for all employees in a heterogeneous environment. The developer is working directly with the Project Manager (PM) and the security officer to accomplish these goals in a functional and secure manner. Which of the following is the PRIMARY security issue with changing to a pervasive computing environment? A. Use of multiple database architectures B. Use of multiple application languages C. Greater number of wireless network hops D. Exposure of mobile devices

Correct Answer: D Explanation/Reference: Pervasive computing, also called ubiquitous computing, is the growing trend of embedding computational capability (generally in the form of microprocessors) into everyday objects to make them effectively communicate and perform useful tasks in a way that minimizes the end user's need to interact with computers as computers. Pervasive computing devices are network-connected and constantly available. Unlike desktop computing, pervasive computing can occur with any device, at any time, in any place and in any data format across any network, and can hand tasks from one computer to another as, for example, a user moves from his car to his office. Thus, pervasive computing devices have evolved to include not only laptops, notebooks and smartphones, but also tablets, wearable devices, fleet management and pipeline components, lighting systems, appliances and sensors, and so on. The goal of pervasive computing is to make devices "smart," thus creating a sensor network capable of collecting, processing and sending data, and, ultimately, communicating as a means to adapt to the data's context and activity; in essence, a network that can understand its surroundings and improve the human experience and quality of life. Often considered the successor to mobile computing, ubiquitous computing and, subsequently, pervasive computing, generally involve wireless communication and networking technologies, mobile devices, embedded systems, wearable computers, RFID tags, middleware and software agents. Internet capabilities, voice recognition and artificial intelligence are often also included. Pervasive computing applications can cover energy, military, safety, consumer, healthcare, production and logistics. An example of pervasive computing is an Apple Watch informing a user of a phone call and allowing him to complete the call through the watch. Or, when a registered user for Amazon's streaming music service asks her Echo device to play a song, and the song is played without any other user intervention.

QUESTION 113 Designers of a new database application decide to use the Operating System (OS) login credentials for database access, rather than establishing an entirely separate database system login. This is an example of which of the following security design principles? A. Open design B. Economy of mechanism C. Least common mechanism D. Leveraging existing components

Correct Answer: D Explanation/Reference: Security Design Principles Leveraging Existing Components The security principle of leveraging existing components requires that when introducing new system into the environment, you should review the state and settings of the existing security controls in the environment and ensure that they are being used by the new system to be deployed. Least Privilege The principle of least privilege means that an individual or a process should be given the minimum level of privileges to access information resources in order to perform a task. This will reduce the chance of unauthorized access to information. For example, if a user only needs to read a file, then he should not be given the permission to modify the file. Separation of Duties The principle of separation of duties means that when possible, you should require more than one person to complete a critical task. The primary objective of separation of duty is the prevention of fraud and errors. This objective is achieved by distributing the tasks and associated privileges to perform the tasks among multiple individuals. An example of separation of duties is the requirement of two signatures on a cheque. Defense in Depth The principle of defense in depth means that where one security control would be reasonable, more security controls that mitigate risks in different ways are better. For example, the administrative interface to your web site can be protected by login credentials (authentication). It can be further protected by denying direct access from public Internet (only accessible from internal network). To make the administrative interface even more secure, you can enable audit logging of all user logins, logouts, and all important user activities. Fail Safe The principle of fail safe means that if a system fails, it should fail to a state where the security of the system and its data are not compromised. For example, the following pseudo-code is not designed to be fail-safe, because if the code fails at "codeWhichMayFail", the user will be assumed to be an admin by default. This is obviously a security risk: isAdmin = true; try { codeWhichMayFail(); isAdmin = isUserInRole( "Administrator" ); } catch (Exception ex) { log.write(ex.toString()); } The fail-safe design would be something like below, where if the code fails at "codeWhichMayFail", the user will NOT be assumed to be an admin by default and hence gain access to resources only admin can access: isAdmin = false; try { codeWhichMayFail(); isAdmin = isUserInRole( "Administrator" ); } catch (Exception ex) { log.write(ex.toString()); } Economy of Mechanism Economy of mechanism means that information system designers should keep the design as simple and small as possible. This well-known principle applies to any aspect of a system. The rationale behind this principle is that it is relatively easy to spot functional defects as well as security holes in simple designs and simple systems. However, it is usually very hard to identify problems in complex designs and complex systems. Complete Mediation Complete mediation means that every access to every data object must be checked for identification, authentication, and authorization. This principle forces a system-wide central point of access control. This security principle requires complete access control of every request whether the information system is undergoing initialization, recovery, shutdown, or maintenance. 
Open Design The security principle of open design means that security designs that are open to scrutiny and evaluation by the public security community at large are in general more secure than obscure security designs that are proprietary and little known to the public. The rationale behind this principle is that weaknesses in the open designs will have a better chance of been caught and corrected. Least Common Mechanism The security principle of least common mechanism means that you should try to minimize the mechanisms shared by multiple subjects to gain access to a data resource. For example, serving an application on the Internet allows both attackers and users to share the Internet to gain access to the application. If the attackers launch a DDOS attack and over-load the application, the legitimate users will be unable to access the application. Another example: serving the same login page for your employees, customers, and partners to login to your company portal. The login page thus must be designed to the satisfaction of every user, which is a job presumably harder than having to satisfy only one or a few users. A better design based on the "least common mechanism" principle would be to implement different login pages for different types of users. Psychological Acceptability Psychological acceptability refers to the ease of use of the user interface of any security control mechanism such as authentication, password reset, password complexity, etc. The more user friendly the interface is, the less likely the user will make a mistake in using the security control and expose the system to security breaches. Weakest Link The weakest link principle requires that in designing security for a system, focus should be put on the weakest components of the overall system, because just as the old saying goes, "a chain is only as strong as its weakest link".

QUESTION 174 Which of the following problems can be solved using a security design pattern? A. Ensuring adequate protection at the lowest cost possible B. Reliably reproducing a security design flaw C. Increasing efficiency of security design evaluation D. Mitigating a frequently occurring set of threats

Correct Answer: D Explanation/Reference: Security patterns can be applied to achieve goals in the area of security. All of the classical design patterns have different instantiations to fulfill some information security goal: such as confidentiality, integrity, and availability. Additionally, one can create a new design pattern to specifically achieve some security goal. See https://en.wikipedia.org/wiki/Security_pattern

QUESTION 162 Which type of analysis can BEST identify vulnerabilities in code in an application? A. Statistical B. Complexity C. Risk D. Static

Correct Answer: D Explanation/Reference: Static Code Analysis (also known as Source Code Analysis) is usually performed as part of a Code Review (also known as white-box testing) and is carried out at the Implementation phase of a Security Development Lifecycle (SDL).

QUESTION 137 Which of the following does software version control mitigate during software development phase? A. Writing software using libraries with known vulnerabilities B. Introducing new vulnerabilities into the software C. Developing code with an old version of the computer D. Combining multiple source code versions

Correct Answer: D Explanation/Reference: Version control is the ability to manage the change and configuration of an application. Versioning is a priceless process, especially when you have multiple developers working on a single application, because it allows them to easily share files. Without version control, developers will eventually step on each other's toes and overwrite code changes that someone else may have completed without even realizing it. Using these systems allows you to check files out for modifications, then, during check-in, if the files have been changed by another user, you will be alerted and allowed to merge them. Version control systems allow you to compare files, identify differences, and merge the changes if needed prior to committing any code. Versioning is also a great way to keep track of application builds by being able to identify which version is currently in development, QA, and production. Also, when new developers join the team, they can easily download the current version of the application to their local environment using the version control system and are able to keep track of the version they're currently running. During development, you can also have entirely independent code versions if you prefer to keep different development efforts separate. When ready, you can merge the files to create a final working version. Another great use for versioning is when troubleshooting an issue, you are able to easily compare different versions of files to track differences. You can compare the last working file with the faulty file, decreasing the time spent identifying the cause of an issue. If the user decides to roll back the changes, you can implement the last working file by using the correct version.

QUESTION 121 Which of the following security test methods is BEST used to identify undocumented code in an application library? A. Penetration testing B. Reverse engineering C. Threat modeling D. White-box testing

Correct Answer: D Explanation/Reference: White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases.

QUESTION 163 Test cases should be traceable back to what artifact? A. Unit test case B. Scope development C. Deployment document D. Requirements document

Correct Answer: D Explanation/Reference: With a solid requirements document in place, it is fairly easy to create a test case document: take a copy of the requirements document and modify it. The modifications needed to convert a requirements document into test cases depend in part on the format of the test case document being used. Starting with the requirements document (see Writing Requirements Documents), the conversion process maps each requirement to one or more test cases, so every test case can be traced back to the requirement it verifies.

QUESTION 42 Delayed signing allows which of the following? A. Developers can test code without access to the signing key B. Developers can refresh signing keys C. System administrators can manage signing keys D. System administrators can deploy application without checking signatures

Correct Answer: A Explanation/Reference: Delay signing is the process of generating a partial signature during development with access only to the public key. The private key can be stored securely and used to apply the final strong-name signature just before shipping the project. As the name suggests, delay signing postpones the final signing step so that the private key is never exposed to developers during development, and fully signed, reliable code is produced only at release.

QUESTION 65 Which of the following BEST ensures integrity of a file being sent over the Internet? A. Use Transport Layer Security (TLS) to encrypt the file during transmission B. Create a file checksum and then encrypt the combination of file and checksum before transmission C. Encrypt the file before transmission and provide the encryption key to downloaders via an alternate channel D. Publish the cryptographic hash of the file on a secure site and inform downloaders of where the hash is stored

Correct Answer: D Explanation/Reference: A cryptographic hash provides integrity: downloaders can recompute the hash of the received file and compare it against the value published on the secure site, so any modification of the file in transit is detected.

QUESTION 91 Which parameter determines the MINIMUM extent of a fuzzing test? A. Data structures used in the source code B. Language that the program is written in C. System resources used by the program D. Number and type of the input sources

Correct Answer: D Explanation/Reference: All input sources should be fuzz tested, so the number and type of input sources define the minimum extent of the test.

QUESTION 30 Which of the following is the BEST method to anonymize text data for system development, maintenance, and testing purposes? A. Redaction B. Nullification C. Number variance D. Substitution

Correct Answer: D Explanation/Reference: Comparing enterprise data anonymization techniques There comes a time when data needs to be shared -- whether to evaluate a matter for research purposes, to test the functionality of a new application, or for an infinite number of other business purposes. To protect sensitivity or confidentiality of shared data, it often needs to be sanitized before it can be distributed and analyzed. There are a number of data anonymization techniques that can be used, including data encryption, substitution, shuffling, number and date variance, and nulling out specific fields or data sets. A popular and effective method for sanitizing data is called data anonymization. Also known as data masking, data cleansing, data obfuscation or data scrambling, data anonymization is the process of replacing the contents of identifiable fields (such as IP addresses, usernames, Social Security numbers and zip codes) in a database so records cannot be associated with a specific individual, project or company. Unlike the concept of confidentiality, which often means the subjects' identities are known but will be protected by the person evaluating the data, in anonymization, the evaluator does not know the subjects' identities. Thus, the anonymization process allows for the dissemination of detailed data, which permits usage by various entities while providing some level of privacy for sensitive information. Data anonymization techniques There are a number of data anonymization techniques that can be used, including data encryption, substitution, shuffling, number and date variance, and nulling out specific fields or data sets. Data encryption is an anonymization technique that replaces sensitive data with encrypted data. The process provides effective data confidentiality, but also transforms data into an unreadable format. For example, once data encryption is applied to the fields containing usernames, "JohnDoe" may become "@Gek1ds%#$". Data encryption is suitable from an anonymization perspective, but it's often not as suitable for practical use. Other business requirements such as data input validation or application testing may require a specific data type -- such as numbers, cost, dates or salary -- and when the encrypted data is put to use, it may appear to be the wrong data type to the system trying to use it. Substitution consists of replacing the contents of a database column with data from a predefined list of factious but similar data types so it cannot be traced to the original subject. Shuffling is similar to substitution, except the anonymized data is derived from the column itself. Both methods have their pros and cons, depending on the size of the database in use. For example, in the substitution process, the integrity of the information remains intact (unlike the information resulting from the encryption process). But substitution can pose a challenge if the records consist of a million usernames that require substitution. An effective substitution requires a list that is equal to or longer than the amount of data that requires substitution. In the shuffling process, the integrity of the data also remains intact and is easy to obtain, since data is derived from the existing column itself. But shuffling can be an issue if the number of records is small. Number and date variance are useful data anonymization techniques for numeric and date columns. 
The algorithm involves modifying each value in a column by some random percentage of its real value to significantly alter the data to an untraceable point. More uses for data anonymization Anonymizing data to comply with regulations Testing new applications with anonymized data Nulling out consists of simply removing sensitive data by deleting it from the shared data set. While this is a simple technique, it may not be suitable if an evaluation needs to be performed on the data or the fictitious form of the data. For example, it would be difficult to query customer accounts if vital information such as customer name, address and other contact details are null values. Data anonymization tools I have often used anonymization when working with various IT vendors for troubleshooting purposes. Data generated from log servers, for example, cannot be distributed in its original format, so instead traceable information is anonymized using log management software. By initiating the anonymize function in the software, I am able to protect data in our logs, replacing identifying data such as usernames, IP addresses, domain names, etc. with fictional values that maintain the same word length and data type. For example, a variable originally defined as "[email protected]" will get converted into "[email protected]". This allows us to share log data with our vendors without revealing confidential or personal information from our network. Some interesting tools in the data anonymization space are Anonymous Data 1.11 by Urban Software and Anonimatron, which is available on SourceForge.net. Both tools are freeware and can run on a Windows-based platform, while Anonimatron can also operate on Linux and Apple OSX systems In addition, I have worked with many IT security professionals who prefer to create custom scripts against files to anonymize their data. Whatever your choice for data anonymization, the goal remains the same: to anonymize sensitive information. Although these anonymization techniques and tools do not fully guarantee anonymity in all situations, they provide an effective process to protect personal information and assist in preserving privacy. With the growing need to share data for research purposes and the legal implications involved if due diligence is not properly conducted when releasing information, many organizations are now discovering the necessity and the benefits of data anonymization.
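A minimal sketch (method and parameter names are illustrative) of the number variance technique described above, where each value in a numeric column is shifted by a bounded random percentage so the original is untraceable while the data remains realistic:

import java.security.SecureRandom;

final class NumberVariance {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Shift a numeric value (e.g. a salary column) by up to +/- maxPercent of its real value.
    static double anonymize(double value, double maxPercent) {
        double factor = 1.0 + (RANDOM.nextDouble() * 2.0 - 1.0) * (maxPercent / 100.0);
        return value * factor;
    }
}

For example, anonymize(52000.0, 10.0) would return a value somewhere between 46,800 and 57,200, so aggregate analysis still behaves plausibly while individual records cannot be traced back to their originals.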

QUESTION 5 Which of the following is MOST likely to require controls over both the location of data and the location of the user? A. Confidentiality B. Availability C. Integrity D. Privacy

Correct Answer: D Explanation/Reference: The Location Data Privacy, Assessment and Guidelines (hereinafter Guidelines) were developed for those on the front lines of location data product and services development, as well as those who hold corporate, legal or fiduciary responsibilities. They bring attention to issues that many organizations and companies have chosen to ignore, due to lack of legal certainty around requirements, and provide a framework of location data practices for developers, managers, marketers, and executives. Location-based services and applications have become more than a technology or feature; they are an integral part of our lives. People define themselves not just by who they are, but where they are. Location data is now everywhere, easily accessible, and collected at an unprecedented scale. In the Information Economy we live in, personal data and similar forms of information are the new currencies. Location data is the universal link between all data, because everything and everyone is somewhere. For businesses, location information can transform virtually every facet of an enterprise from operations to sales and marketing, to customer care and even product development - all with a goal of having a positive impact on the bottom line. It is therefore rapidly becoming the newest "information weapon" used by CIOs, CMOs, COOs and digital strategists to gain a competitive advantage. The problem with location data today is that it changes as it weaves through various hands: applications, vendors, developers, government, companies, data providers, and individual users. Another complication is the diversity of legal protections across countries and states that make developing a consistent privacy policy a moving target. All this is set against a business atmosphere of continuous pressure to develop innovative location-based products and services. The power, benefits, and risks associated with location data are in its capacity to infer more personally identifiable information than the face value of the original information. While consumers and businesses are deriving great value from location-based services, targeted advertising and other applications, significant questions persist around location data privacy. In particular, how is location data being shared and who has access to it?

QUESTION 74 Non-functional requirements BEST describe which of the following? A. Quality attributes or a constraint that a system must satisfy B. An automatic sequence of steps that a system must satisfy C. Computational task requirements that the system must satisfy D. A non-documented method to cause the system to perform work

Correct Answer: A Explanation/Reference: Nonfunctional Requirements (NFRs, or system qualities) describe system attributes such as security, reliability, maintainability, scalability, and usability (often referred to as the "ilities")

QUESTION 13 Which of the following is the GREATEST concern associated with the use of shared credentials? A. Reliability B. Accountability C. Availability D. Recoverability

Correct Answer: B Explanation/Reference: Shared credentials are the enemy of individual accountability

QUESTION 27 The STRONGEST authentication process is achieved by using which of the following? A. Mixed case, alphanumeric password of 15 characters B. Biometric hand geometry scanner with a blood flow sensor C. Password of 10 characters in combination with a magnetic card D. Password in combination with a secret question and answer

Correct Answer: C Explanation/Reference: Multifactor authentication is stronger than any single-factor authentication.

QUESTION 33 Which of the following is an example of a secure software acceptance consideration? A. Digital watermarking B. Threat modeling C. Security milestones D. Operational security

Correct Answer: C Explanation/Reference: Security milestones should accompany functional milestones.

QUESTION 82 Which of the following plays a role in both non-functional requirement analysis and security requirement analysis? A. Scalability B. Portability C. Usability D. Performance

Correct Answer: D Explanation/Reference: Typical non-functional requirements include:
- Performance - for example: response time, throughput, utilization, static volumetric
- Scalability
- Capacity
- Availability
- Reliability
- Recoverability
- Maintainability
- Serviceability
- Security
- Regulatory
- Manageability
- Environmental
- Data Integrity
- Usability
- Interoperability
Quality Requirements: Even when organizations recognize the importance of functional end-user requirements, they often still neglect quality requirements, such as performance, safety, security, reliability, and maintainability. Some quality requirements are nonfunctional requirements, but others describe system functionality, even though it may not contribute directly to end-user requirements. As you might expect, developers of certain kinds of mission-critical systems and systems in which human life is involved, such as the space shuttle, have long recognized the importance of quality requirements and have accounted for them in software development. In many other systems, however, quality requirements are ignored altogether or treated in an inadequate way. Hence we see the failure of software associated with power systems, telephone systems, unmanned spacecraft, and so on. If quality requirements are not attended to in these types of systems, it is far less likely that they will be focused on in ordinary business systems. This inattention to quality requirements is exacerbated by the desire to keep costs down and meet aggressive schedules. As a consequence, software development contracts often do not contain specific quality requirements but rather some vague generalities about quality, if anything at all.

QUESTION 98 Which of the following is performed by a software security professional reviewing the design for an application that collects user information? A. Initiate implementation and look for concerns B. Evaluate the design for Commercial Off-The-Shelf (COTS) opportunities C. Determine a threat model for the application D. Request approval from executive stakeholders for the design

Correct Answer: C Explanation/Reference: Threat modeling is an approach for analyzing the security of an application. It is a structured approach that enables you to identify, quantify, and address the security risks associated with an application. Threat modeling is not an approach to reviewing code, but it does complement the security code review process. The inclusion of threat modeling in the SDLC can help to ensure that applications are being developed with security built-in from the very beginning. This, combined with the documentation produced as part of the threat modeling process, can give the reviewer a greater understanding of the system. This allows the reviewer to see where the entry points to the application are and the associated threats with each entry point. The concept of threat modeling is not new but there has been a clear mindset change in recent years. Modern threat modeling looks at a system from a potential attacker's perspective, as opposed to a defender's viewpoint. Microsoft have been strong advocates of the process over the past number of years. They have made threat modeling a core component of their SDLC, which they claim to be one of the reasons for the increased security of their products in recent years. When source code analysis is performed outside the SDLC, such as on existing applications, the results of the threat modeling help in reducing the complexity of the source code analysis by promoting an in-depth first approach vs. breadth first approach. Instead of reviewing all source code with equal focus, you can prioritize the security code review of components whose threat modeling has ranked with high risk threats. The threat modeling process can be decomposed into 3 high level steps: Step 1: Decompose the Application. The first step in the threat modeling process is concerned with gaining an understanding of the application and how it interacts with external entities. This involves creating use-cases to understand how the application is used, identifying entry points to see where a potential attacker could interact with the application, identifying assets i.e. items/areas that the attacker would be interested in, and identifying trust levels which represent the access rights that the application will grant to external entities. This information is documented in the Threat Model document and it is also used to produce data flow diagrams (DFDs) for the application. The DFDs show the different paths through the system, highlighting the privilege boundaries. Step 2: Determine and rank threats. Critical to the identification of threats is using a threat categorization methodology. A threat categorization such as STRIDE can be used, or the Application Security Frame (ASF) that defines threat categories such as Auditing & Logging, Authentication, Authorization, Configuration Management, Data Protection in Storage and Transit, Data Validation, Exception Management. The goal of the threat categorization is to help identify threats both from the attacker (STRIDE) and the defensive perspective (ASF). DFDs produced in step 1 help to identify the potential threat targets from the attacker's perspective, such as data sources, processes, data flows, and interactions with users. These threats can be identified further as the roots for threat trees; there is one tree for each threat goal. From the defensive perspective, ASF categorization helps to identify the threats as weaknesses of security controls for such threats. 
Common threat lists with examples can help in the identification of such threats. Use and abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such protection exists. The security risk for each threat can be determined using a value-based risk model such as DREAD or a less subjective qualitative risk model based upon general risk factors (e.g. likelihood and impact). Step 3: Determine countermeasures and mitigation. A lack of protection against a threat might indicate a vulnerability whose risk exposure could be mitigated with the implementation of a countermeasure. Such countermeasures can be identified using threat-countermeasure mapping lists. Once a risk ranking is assigned to the threats, it is possible to sort threats from the highest to the lowest risk, and prioritize the mitigation effort, such as by responding to such threats by applying the identified countermeasures. The risk mitigation strategy might involve evaluating these threats from the business impact that they pose and reducing the risk. Other options might include taking the risk, assuming the business impact is acceptable because of compensating controls, informing the user of the threat, removing the risk posed by the threat completely, or the least preferable option, that is, to do nothing. Each of the above steps is documented as it is carried out; the resulting document is the threat model for the application. The OWASP guide this is drawn from works through the same three steps using a college library website as a running example.
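As a small illustration of the ranking step (Step 2), here is a sketch of DREAD-style scoring in Python. The threat names and scores are hypothetical examples, not findings from any real assessment.

```python
# Minimal sketch of DREAD-style threat ranking (Step 2 above).
# Threat names and scores are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    # DREAD factors, each scored 1 (low) to 10 (high).
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    @property
    def dread_score(self) -> float:
        return (self.damage + self.reproducibility + self.exploitability +
                self.affected_users + self.discoverability) / 5

threats = [
    Threat("SQL injection in login form", 9, 8, 7, 9, 8),
    Threat("Verbose error messages leak stack traces", 3, 9, 6, 4, 9),
    Threat("Predictable session token", 8, 5, 4, 8, 3),
]

# Sort from highest to lowest risk to prioritize countermeasures (Step 3).
for t in sorted(threats, key=lambda t: t.dread_score, reverse=True):
    print(f"{t.dread_score:4.1f}  {t.name}")
```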

QUESTION 84 Which of the following BEST ensures the integrity of a server's Operating System (OS)? A. Protecting the server in a secure location B. Setting a boot password C. Hardening the server configuration D. Implementing activity logging

Correct Answer: C Explanation/Reference: Traditional security hardening. Goal is to get all servers to a secure state. Typically use Microsoft or other industry "best practice." Often Group Policy is used to configure security. Every security conscious organization will have its own methods for maintaining adequate system and network security. Often you will find that server hardening consultants can bring your security efforts up a notch with their specialized expertise. Some common server hardening tips & tricks include:
- Use Data Encryption for your Communications
- Avoid using insecure protocols that send your information or passwords in plain text.
- Minimize unnecessary software on your servers.
- Disable Unwanted SUID and SGID Binaries
- Keep your operating system up to date, especially security patches.
- Using security extensions is a plus.
- When using Linux, SELinux should be considered. Linux server hardening is a primary focus for the web hosting industry, however in web hosting SELinux is probably not a good option as it often causes issues when the server is used for web hosting purposes.
- User Accounts should have very strong passwords
- Change passwords on a regular basis and do not reuse them
- Lock accounts after too many login failures. Often these login failures are illegitimate attempts to gain access to your system.
- Do not permit empty passwords.
- SSH Hardening
--- Change the port from default to a non standard one
--- Disable direct root logins. Switch to root from a lower level account only when necessary.
- Unnecessary services should be disabled. Disable all instances of IRC - BitchX, bnc, eggdrop, generic-sniffers, guardservices, ircd, psyBNC, ptlink.
- Securing /tmp /var/tmp /dev/shm
- Hide BIND DNS Server Version and Apache version
- Hardening sysctl.conf
- Server hardening by installing Rootkit Hunter and chkrootkit.
- Minimize open network ports to be only what is needed for your specific circumstances.
- Configure the system firewall (Iptables) or install software like CSF or APF. Proper setup of a firewall itself can prevent many attacks.
- Consider also using a hardware firewall
- Separate partitions in ways that make your system more secure.
- Disable unwanted binaries
- Maintain server logs; mirror logs to a separate log server
- Install Logwatch and review logwatch emails daily. Investigate any suspicious activity on your server.
- Use brute force and intrusion detection systems
- Install Linux Socket Monitor - Detects/alerts when new sockets are created on your system, often revealing hacker activity
- Install Mod_security for web server hardening
- Hardening the PHP installation
- Limit user accounts to accessing only what they need. Increased access should only be on an as-needed basis.
- Maintain proper backups
- Don't forget about physical server security
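As a rough illustration of how a few of the items above could be checked automatically, here is a simplified Python sketch that inspects the SSH daemon configuration and probes for legacy services. The file path, port list, and string checks are simplifying assumptions, not a complete hardening audit.

```python
# Simplified audit of a few hardening items from the list above.
# Paths, ports, and checks are illustrative assumptions, not a full audit.
import socket
from pathlib import Path

def check_sshd_config(path="/etc/ssh/sshd_config"):
    findings = []
    try:
        text = Path(path).read_text()
    except OSError:
        return ["sshd_config not readable; check manually"]
    for line in text.splitlines():
        line = line.strip().lower()
        if line.startswith("permitrootlogin") and "no" not in line:
            findings.append("Direct root login appears to be enabled")
        if line.startswith("port ") and line.split()[1] == "22":
            findings.append("SSH is still listening on the default port 22")
    return findings

def check_open_ports(host="127.0.0.1", ports=(21, 23, 25, 111)):
    # Flag legacy/unnecessary services that are accepting connections.
    findings = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:
                findings.append(f"Port {port} is open; confirm the service is required")
    return findings

if __name__ == "__main__":
    for finding in check_sshd_config() + check_open_ports():
        print("FINDING:", finding)
```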

QUESTION 51 Which of the following are sources of security violations that are difficult to capture in abuse cases? A. High-level system behaviors B. Timing and repetition issues C. Mischief and vandalism D. Exceptional conditions

Correct Answer: D Explanation/Reference: An abuse case might not capture a zero-day attack. Assurance cases are designed and used to construct a structured set of arguments and a corresponding body of evidence to satisfactorily demonstrate specific claims with respect to a system's security properties. An assurance case is structured like a legal case. An overall objective is defined. Specific elements of evidence are presented that demonstrate conclusively a boundary of outcomes that eliminates undesirable ones and preserves the desired ones. When sufficient evidence is presented to eliminate all undesired states or outcomes, the system is considered to be assured of the claim. Misuse cases can present commonly known attack scenarios, and are designed to facilitate communication between designers, developers, and testers to ensure that potential security holes are managed in a proactive manner. Misuse cases can examine a system from an attacker's point of view, whether the attacker is an inside threat or an outside one. Properly constructed misuse cases can trigger specific test scenarios to ensure known weaknesses have been recognized and dealt with appropriately before deployment.

QUESTION 101 The goal of completing attack trees is to assure that A. there is a complete set of misuse cases B. all brute force attacks are covered C. input validation attacks are covered D. there is a complete set of use cases

Correct Answer: A Explanation/Reference: Attack trees are conceptual diagrams showing how an asset, or target, might be attacked. Attack trees have been used in a variety of applications. In the field of information technology, they have been used to describe threats on computer systems and possible attacks to realize those threats.

QUESTION 81 Which of the following describes the BEST approach to provide comprehensive software build security? A. Implement a multilevel program that incorporates revision control security, vulnerability management, and developer identity management B. Implement a program that focuses on physical security of the build environment, binary/release bit-level assurance, and Information Technology (IT) infrastructure security C. Implement a program that locks down developer workstations and has two-factor authentication to revision tracking systems D. Implement a multilevel program that incorporates physical security, revision control tracking, binary/release bit-level assurance, and IT infrastructure security

Correct Answer: A Section: (none) Explanation Explanation/Reference: About Signing Identities and Certificates Code signing your app allows the operating system to identify who signed your app and to verify that your app hasn't been modified since you signed it. Your app's executable code is protected by its signature because the signature becomes invalid if any of the executable code in the app bundle changes. Note that resources such as images and nib files aren't signed; therefore, a change to these files doesn't invalidate the signature. Code signing is used in combination with your App ID, provisioning profile, and entitlements to ensure that: Your app is built and signed by you or a trusted team member. Apps signed by you or your team run only on designated development devices. Apps run only on the test devices you specify. Your app isn't using app services you didn't add to your app. Only you can upload builds of your app to iTunes Connect. If you choose to distribute outside of the store (Mac only), the app can't be modified and distributed by someone else.

QUESTION 37 Static code analysis tools operate on source code in ways that are very similar to which of the following? A. Compiler B. Debugger C. Fuzz tester D. Disassembler

Correct Answer: A Explanation/Reference: A compiler is computer software that transforms computer code written in one programming language (the source language) into another computer language (the target language). Compilers are a type of translator that support digital devices, primarily computers. A common reason for compilation is converting source code into a binary form known as object code to create an executable program. The name compiler is primarily used for programs that translate source code from a high-level programming language to a lower level language (e.g., assembly language or machine code).

QUESTION 89 What are the BEST means organizations can use to solicit user consent for monitoring? A. Acceptable Use Policy (AUP) and log-in banners B. New-employee briefings and organizational Internet surfing policies C. Log-in banners and Organizational Standard Operating Procedures (SOP) D. Organizational SOP and new-employee briefings

Correct Answer: A Explanation/Reference: A consent banner will be in place to make prospective entrants aware that the web site they are about to enter is a DoD web site and their activity is subject to monitoring. The May 9, 2008 Policy on Use of Department of Defense (DoD) Information Systems Standard Consent Banner and User Agreement, establishes interim policy on the use of DoD information systems. It requires the use of a standard Notice and Consent Banner and standard text to be included in user agreements. The requirement for the banner is for web sites with security and access controls. These are restricted and not publicly accessible. If the web site does not require authentication/authorization for use, then the banner does not need to be present. A manual check of the document root directory for a banner page file (such as banner.html) or navigation to the web site via a browser can be used to confirm the information provided from interviewing the web staff. An acceptable use policy (AUP) is a document stipulating constraints and practices that a user must agree to for access to a corporate network or the Internet. Many businesses and educational facilities require that employees or students sign an acceptable use policy before being granted a network ID. When you sign up with an Internet service provider (ISP), you will usually be presented with an AUP, which states that you agree to adhere to stipulations such as: Not using the service as part of violating any law Not attempting to break the security of any computer network or user Not posting commercial messages to Usenet groups without prior permission Not attempting to send junk e-mail or spam to anyone who doesn't want to receive it Not attempting to mail bomb a site with mass amounts of e-mail in order to flood their server Users also typically agree to report any attempt to break into their accounts.

QUESTION 41 Which of the following is a useful activity in gathering and documenting software security requirements? A. Abuse case analysis B. Selection of programming language C. Assessment of middleware technologies D. Review existing Commercial-Off-The-Shelf (COTS) software

Correct Answer: A Explanation/Reference: Abuse case is a specification model for security requirements used in the software development industry. The term Abuse Case is an adaptation of use case. The term was introduced by John McDermott and Chris Fox in 1999, while working at the Computer Science Department of James Madison University. As defined by its authors, an abuse case is a type of complete interaction between a system and one or more actors, where the results of the interaction are harmful to the system, one of the actors, or one of the stakeholders in the system. We cannot define completeness just in terms of coherent transactions between actors and the system. Instead, we must define abuse in terms of interactions that result in actual harm. A complete abuse case defines an interaction between an actor and the system that results in harm to a resource associated with one of the actors, one of the stakeholders, or the system itself.

QUESTION 83 Which security model focuses on integrity? A. Biba B. Bell-LaPadula C. Graham-Denning D. Brewer and Nash

Correct Answer: A Explanation/Reference: Biba Integrity Model In the Biba model, integrity levels are used to separate permissions. The principle behind integrity levels is that data with a higher integrity level is believed to be more accurate or reliable than data with a lower integrity level. Integrity levels indicate the level of "trust" that can be placed in the accuracy of information based on the level specified. The Biba model employs two rules to manage integrity efforts. The first rule, often called the "no-write-up" rule, prevents subjects from writing to objects of a higher integrity level; in many ways it is the opposite of the *-property from the Bell-LaPadula model. The second rule, known as the low-water-mark policy, states that the integrity level of a subject will be lowered if it acts on an object of a lower integrity level. The reason for this is that if the subject then uses data from that object, the highest the integrity level can be for a new object created from it is the same level of integrity of the original object. In other words, the level of trust you can place in data formed from data at a specific integrity level cannot be higher than the level of trust you have in the subject creating the new data object, and the level of trust you have in the subject can only be as high as the level of trust you had in the original data.
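To make the rules concrete, here is a minimal Python sketch of Biba-style access checks. The integrity labels and numeric levels are assumptions chosen for illustration.

```python
# Minimal sketch of Biba-style integrity checks.
# Integrity labels and numeric levels are assumptions for illustration.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def can_write(subject_level: str, object_level: str) -> bool:
    # "No write up": a subject may not write to an object of higher integrity.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_read(subject_level: str, object_level: str) -> bool:
    # Strict Biba "no read down": reading lower-integrity data is refused
    # (a low-water-mark variant would instead lower the subject's level).
    return LEVELS[subject_level] <= LEVELS[object_level]

assert can_write("high", "low")        # writing down is allowed
assert not can_write("low", "high")    # writing up is blocked
assert not can_read("high", "low")     # reading down is blocked under strict Biba
```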

QUESTION 104 Which of the following data masking techniques will maintain the type and structure of the data while preventing re-identification of a production data set? A. Replacement B. Suppression C. Generalization D. Perturbation

Correct Answer: A Explanation/Reference: Data Masking Techniques vs. Other Approaches Data Integrity Data security approaches can first be examined along a dimension that considers how well the solution preserves the usability of the data for non-production use cases such as application development, testing, or analytics. When the solution is applied to sensitive data, do the resulting values look, feel, and operate like the real thing? True data masking techniques such as shuffling (randomly switching values within a column) or substitution (a given value is mapped to an equivalent value in a secure lookup table) transform confidential information while preserving the integrity of the data. On the other hand, nulling out values, character scrambling, or data redaction ("X-ing" out characters or full values) may render transformed datasets useless to an end user. For example, data validation checks built into front-end systems may reject nulled or redacted data, preventing testers from verifying application logic. Reversibility A second key characteristic that separates data masking techniques from alternative approaches is reversibility. Data masking techniques irreversibly transform data: Once data has been masked, the original values cannot be restored through a reverse engineering process. This characteristic makes data masking especially suitable for non-production use cases such as development and testing in which end users have no need to see original values. This also makes data masking very different from encryption technologies, where reversibility is purposefully designed into the solution. Encryption relies on the availability of keys that allow authorized users to restore encoded values into readable ones. While encryption methods may be suitable for protecting data at rest or the contents of mobile devices and laptops, they do not necessarily protect organizations from insiders or other actors with access to decryption keys, or from hackers who are able to crack encryption schemes. Data Delivery Next-generation data masking solutions can be integrated with data virtualization technologies to allow users to move data to downstream environments in minutes. The ability to leverage data virtualization is critical: Non-production data environments are continually provisioned and refreshed, making the ability to quickly move secure data of paramount importance. In contrast, other data security approaches (and legacy masking solutions) lack delivery capabilities, instead relying on slow batch processes to extract, transform, and load data into a non-production target.

QUESTION 52 Dynamic security testing of a third-party software application is MOST useful in assessing which of the following? A. Run-time security posture of the application B. Behaviour analysis of the application indicative of security vulnerabilities C. Pedigree and provenance of the software D. Specific location of security vulnerabilities in the third-party source code

Correct Answer: A Explanation/Reference: Dynamic testing (or dynamic analysis) is a term used in software engineering to describe the testing of the dynamic behavior of code. That is, dynamic analysis refers to the examination of the physical response from the system to variables that are not constant and change with time. Dynamic testing is a method of assessing the feasibility of a software program by giving input and examining output (I/O). The dynamic method requires that the code be compiled and run. The alternative method of software testing, static testing, does not involve program execution but an examination of the code and associated documents. Types of dynamic testing include unit testing, integration testing, system testing and acceptance testing.

QUESTION 22 Which of the following is the MOST appropriate defense against buffer overflow attacks? A. Validate user input for length B. Allocate memory on the heap instead of the stack C. Convert strings to Unicode D. Use stack-checking compilers

Correct Answer: A Explanation/Reference: In computer science, data validation is the process of ensuring that a program operates on clean, correct and useful data. It uses routines, often called "validation rules" "validation constraints" or "check routines", that check for correctness, meaningfulness, and security of data that are input to the system.
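As a small illustration of answer A, here is a sketch of length validation at the trust boundary. Python itself is not subject to classic buffer overflows; the point is the principle of rejecting over-long input before it reaches native code or a fixed-size buffer. The field name and limit are assumptions.

```python
# Illustrative length validation at the trust boundary.
# MAX_USERNAME_LEN is an assumed application limit, not a standard value.
MAX_USERNAME_LEN = 64

def validate_username(raw: str) -> str:
    if not isinstance(raw, str):
        raise TypeError("username must be a string")
    if len(raw) == 0 or len(raw) > MAX_USERNAME_LEN:
        # Reject before the value ever reaches native code or a fixed-size buffer.
        raise ValueError(f"username must be 1-{MAX_USERNAME_LEN} characters")
    return raw

validate_username("jdoe")            # passes
# validate_username("A" * 10_000)    # raises ValueError instead of overflowing a buffer
```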

QUESTION 103 What is the PRIMARY technique for evaluating the potential impact of identified security flaws? A. Perform a risk analysis B. Execute static code analysis C. Execute a dynamic code scan D. Perform an authenticated scan

Correct Answer: A Explanation/Reference: Quantitative risk analysis (QRA) seeks to numerically assess probabilities for the potential consequences of risk, and is often called probabilistic risk analysis or probabilistic risk assessment (PRA). The analysis often seeks to describe the consequences in numerical units such as dollars, time, or lives lost. PRA often seeks to answer three questions: What can happen? (i.e., what can go wrong?) How likely is it that it will happen? If it does happen, what are the consequences?
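A small worked example of the quantitative approach described above, using the common SLE/ARO/ALE formulation. All figures are hypothetical.

```python
# Hypothetical quantitative risk calculation (SLE x ARO = ALE).
asset_value = 200_000              # value of the affected asset, in dollars (assumed)
exposure_factor = 0.25             # fraction of asset value lost per incident (assumed)
annual_rate_of_occurrence = 0.5    # expected incidents per year (assumed)

single_loss_expectancy = asset_value * exposure_factor
annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence

print(f"SLE = ${single_loss_expectancy:,.0f}")       # $50,000
print(f"ALE = ${annualized_loss_expectancy:,.0f}")   # $25,000 per year
```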

QUESTION 7 Which of the following is the FIRST step a software tester would perform in security testing? A. Threat model to prioritize program components from highest risk to lowest risk B. Use common attack patterns to attack the application's attack surface through fault injection C. Enumerate the system for common security design errors D. Inspect the system for common security design errors

Correct Answer: A Explanation/Reference: SearchSoftwareQuality.com asked application security experts to identify and address security concerns at each stage of the application lifecycle and to suggest tools and techniques to boost security. Here is the advice they offered. 1. Conduct threat modeling at the outset on an app development project. Threat modeling refers to the process of figuring out how many different ways an attacker could harm an application before that application is actually developed, said Wendy Nather, research director for the enterprise security practice at 451 Research LLC, a research firm based in New York. "Can you break into it, commit fraud, steal from it? That is what you are trying to answer," she said. The best threat models graphically depict things such as how data will flow and how it will be stored, said Dan Cornell, a principal at security consultancy Denim Group Ltd. in San Antonio. "The idea is to proactively determine what kinds of security things can go wrong." It's crucial to understand these issues at the outset of the development process because it's cheaper to address security concerns when an app is "just a drawing on a whiteboard," he said.

QUESTION 4 A strong application architecture that provides separation and security between components is essential for preventing which of the following? A. Security misconfiguration B. Cross-Site Scripting (XSS) attacks C. Tampering with source code modules D. Security breaches due to weak cryptographic algorithms

Correct Answer: A Explanation/Reference: Security misconfiguration can happen at any level of an application stack, including the platform, web server, application server, database, framework, and custom code. Developers and system administrators need to work together to ensure that the entire stack is configured properly. Automated scanners are useful for detecting missing patches, misconfigurations, use of default accounts, unnecessary services, etc.

QUESTION 45 A security architect on a development project requests obfuscation of source code implementing cryptographic functions to limit disclosure of the cryptosystem. This violates which security principle? A. Open design B. Need to know C. Least privilege D. Psychological acceptability

Correct Answer: A Explanation/Reference: The eight software security design principles are: 1. Principle of Least Privilege A subject should be given only those privileges that it needs in order to complete its task. The function of a subject should control the assignment of rights, not the identity of the subject. This means that if your boss demands root access to a UNIX system that you administer, she should not be given that privilege unless she absolutely has a task that requires such level of access. If possible, the elevated rights of an identity individual should be removed as soon as those rights are no longer required. e.g. sudo su programs set uid only when needed 2. Principle of Fail-Safe Defaults Unless a subject is given explicit access to an object, it should be denied access to that object. This principle restricts how privileges are initialized when a subject or object is created. Basically, this principle is similar to the "Default Deny" principle that we talked about in the 6 dumbest ideas in computer security. Whenever access, privilege, or some other security related attribute is not granted, that attribute should be denied by default. 3. Principle of Economy of Mechanism Security mechanisms should be as simple as possible. This principle simplifies the design and implementation of security mechanisms. If the design and implementation are simple, fewer possibilities exist for errors. The checking and testing process is less complex. Interfaces between security modules are suspect area and should be as simple as possible. 4. Principle of Complete Mediation All accesses to objects should be checked to ensure that they are allowed. This principle restricts the caching of information, which often leads to simpler implementations of mechanisms. Every time that someone tries to access an object, the system should authenticate the privileges associated with that subject. What happens in most systems is that those privileges are cached away for later use. The subject's privileges are authenticated once at the initial access. For subsequent accesses the system assumes that the same privileges are enforce for that subject and object. This may or may not be the case. The operating system should mediate all and every access to an object. e.g. DNS information is cached What if it is poisoned? 5. Principle of Open Design The security of a mechanism should not depend on the secrecy of its design or implementation. This principle suggests that complexity does not add security. This concept captures the term "security through obscurity". This principle not only applies to cryptographic systems but also to other computer security related systems. e.g. DVD player & Content Scrambling System (CSS) protection 6. Principle of Separation of Privilege A system should not grant permission based on a single condition. This principle is restrictive because it limits access to system entities. The principle is similar to the separation of duty principle that we talked about in the integrity security policy unit. Thus before privilege is granted two or more checks should be performed. e.g. to su (change) to root two conditions must be met 1. the user must know the root password 2. the user must be in the right group (wheel) 7. Principle of Least Common Mechanism Mechanisms used to access resources should not be shared. This principle is also restrictive because it limits sharing of resources. Sharing resources provides a channel along which information can be transmitted. 
Hence, sharing should be minimized as much as possible. If the operating system provides support for virtual machines, the operating system will enforce this privilege automatically to some degree. 8. Principle of Psychological Acceptability Security mechanisms should not make the resource more difficult to access than if the security mechanism were not present. Do you believe this? This principle recognizes the human element in computer security. If security-related software or systems are too complicated to configure, maintain, or operate, the user will not employ the requisite security mechanisms. For example, if a password is rejected during a password change process, the password changing program should state why it was rejected rather than giving a cryptic error message. At the same time, programs should not impart unnecessary information that may lead to a compromise in security. In practice, the principle of psychological acceptability is interpreted to mean that the security mechanism may add some extra burden, but that burden must be both minimal and reasonable. e.g. When you enter a wrong password, the system should only tell you that the user id or password was wrong. It should not tell you that only the password was wrong as this gives the attacker information.
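To ground two of the principles above (fail-safe defaults and complete mediation), here is a minimal Python sketch. The roles, resource names, and ACL contents are hypothetical.

```python
# Sketch of "fail-safe defaults" and "complete mediation".
# User, resource, and ACL contents are hypothetical.
ACL = {
    ("alice", "payroll-report"): {"read"},
    ("bob", "payroll-report"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    # Fail-safe default: anything not explicitly granted is denied.
    return action in ACL.get((user, resource), set())

def read_resource(user: str, resource: str) -> str:
    # Complete mediation: every access is checked; nothing is cached
    # from a previous authorization decision.
    if not is_allowed(user, resource, "read"):
        raise PermissionError(f"{user} may not read {resource}")
    return f"contents of {resource}"

print(read_resource("alice", "payroll-report"))   # allowed
# read_resource("mallory", "payroll-report")      # denied by default
```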

QUESTION 23 An Internet-facing web application has been deployed in the Demilitarized Zone (DMZ) of an organization's network. The web application must communicate to a database server located inside the organization's network. What is the MOST important criterion? A. Pass authenticated user credentials for data retrieval B. Use integrated security from the web application C. Exchange public keys for server sessions D. Require encrypted connections to the database

Correct Answer: A Explanation/Reference: To configure the SSL encryption: http://msdn.microsoft.com/en-us/library/ms189067.aspx. My web application uses a connection string to access a database. The connection string is secured and encrypted... but when the connection is active, couldn't someone intercept that connection and effectively "sniff" what is being stored and read? Maybe. In a typical small web applications configuration, you'd be running your database on the same machine as your web server and application. In this case, the connection between your application and the database would be protected by operating system, which enforces that loopback address or domain socket can only be accessed from local processes. There's little to gain here from encrypting the connection. In another setup, you might have a physical wire connecting two machines on the same rack. The rack might be placed in a secured data center in a caged rack, where you're happy with the data center's security parameter, or it might be in a locked closet in the office, and you're happy with trusting the employees not trying to deliberately break into the closet. In this case, adding encryption might be overkill. In a similar situation, as previous paragraph, but now you have an outdated router between your machines. Problem is that this router runs outdated software with known vulnerabilities, or you don't trust its Virtual Networking configuration, or it doesn't support such features, or you're worried about remote takeover of the router's web administration. Then you might want to add TLS encryption and mutual authentication to cover up deficiencies in the outdated router. In another typical case, you might run your database and web server in multiple VMs in a public cloud provider, you might want to add inter machine encryption, or you might decide that the cloud provider's network access control is not trustworthy enough for the kind of data you're dealing with. You might run a VPN between the machines or an encrypted overlay network on top of the existing network infrastructure.
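As a sketch of the encrypted-connection option discussed above, here is what requiring TLS on the web-app-to-database link might look like using PostgreSQL and the psycopg2 driver as an example. The host name, credentials, and certificate path are placeholders, and the exact parameters depend on your driver and server configuration.

```python
# Sketch: require an encrypted (TLS) connection from the DMZ web app
# to the internal database. PostgreSQL/psycopg2 is used as an example driver;
# host, credentials, and certificate paths are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="db.internal.example.com",
    dbname="appdb",
    user="webapp",
    password="change-me",            # in practice, pull from a secrets store
    sslmode="verify-full",           # refuse unencrypted or unverified connections
    sslrootcert="/etc/ssl/certs/internal-ca.pem",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
```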

QUESTION 16 An organization has developed an application for reviewing and approving system logs for user access. When multiple users are approving records, which of the following is the MOST effective approach to maintain the data integrity? A. Restrict users from performing simultaneous updates B. Limit application access to a single user session C. Allow users only to access unapproved records D. Lock all records upon user access

Correct Answer: A Explanation/Reference: When data is locked, that means that another database session can NOT update that data until the lock is released (which unlocks the data and allows other database users to update that data). Locks are usually released by either a ROLLBACK or COMMIT SQL statement.
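To illustrate the locking behavior described above, here is a sketch of pessimistic row locking with SELECT ... FOR UPDATE so that two approvers cannot update the same record at the same time. The table and column names are hypothetical, and PostgreSQL/psycopg2 is assumed as in the previous example.

```python
# Sketch: prevent two approvers from updating the same record simultaneously
# by locking the row until COMMIT/ROLLBACK. Table and column names are hypothetical.
import psycopg2

conn = psycopg2.connect(host="db.internal.example.com", dbname="appdb",
                        user="webapp", password="change-me")
try:
    with conn:                       # commits on success, rolls back on error
        with conn.cursor() as cur:
            # The row stays locked against other sessions until the
            # transaction ends, so concurrent approvals serialize here.
            cur.execute(
                "SELECT status FROM access_log_reviews WHERE id = %s FOR UPDATE",
                (42,),
            )
            cur.execute(
                "UPDATE access_log_reviews "
                "SET status = 'approved', approved_by = %s WHERE id = %s",
                ("alice", 42),
            )
finally:
    conn.close()
```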

QUESTION 88 A security professional is asked to evaluate the end-of-life issues related to the lack of support for a legacy application. Which of the following is the HIGHEST priority? A. Create a successful transition plan for a new product that will securely support the customer B. Review the organization's business requirements to see what would be the best alternative C. Create a good support plan that will address the current support problems D. Make a proposal to convert the critical data on the legacy application to a better product

Correct Answer: A Explanation/Reference: Transition Planning Release Process <Document the release process for the deliverable software. Refer to any Configuration Management standards that define an acceptable release, and indicate how Configuration Management will be applied to the operational software product.> Data Migration <Describe any data that must be migrated into the deliverable software product. List any special issues with regard to data reconstruction or the migration of historical data.> Problem Resolution <Specify the procedure for identifying, tracking, and resolving problems with the operational software product.> Transition Schedule <Develop a detailed schedule for transition. Include a breakdown of roles and responsibilities. Address transition through the development, operation, maintenance, and support phases of the software product. Note critical time dependencies on the software support products listed in this document.>

QUESTION 40 A vulnerability is discovered in a mission-critical piece of legacy code that cannot be updated to meet the organization's security standards. Which of the following MUST be done? A. Create a contingency plan B. Apply for a policy exception C. Update the security guidelines D. Draft an acceptance waiver

Correct Answer: A Explanation/Reference: A contingency plan is a course of action designed to help an organization respond effectively to a significant future event or situation that may or may not happen. A contingency plan is sometimes referred to as "Plan B," because it can be also used as an alternative for action if expected results fail to materialize. Contingency planning is a component of business continuity, disaster recovery and risk management. The seven-steps outlined for an IT contingency plan in the NIST 800-34 Rev. 1 publication are: 1. Develop the contingency planning policy statement. A formal policy provides the authority and guidance necessary to develop an effective contingency plan. 2. Conduct the business impact analysis (BIA). The BIA helps identify and prioritize information systems and components critical to supporting the organization's mission/business functions. 3. Identify preventive controls. Measures taken to reduce the effects of system disruptions can increase system availability and reduce contingency life cycle costs. 4. Create contingency strategies. Thorough recovery strategies ensure that the system may be recovered quickly and effectively following a disruption. 5. Develop an information system contingency plan. The contingency plan should contain detailed guidance and procedures for restoring a damaged system unique to the system's security impact level and recovery requirements. 6. Ensure plan testing, training and exercises. Testing validates recovery capabilities, whereas training prepares recovery personnel for plan activation and exercising the plan identifies planning gaps; combined, the activities improve plan effectiveness and overall organization preparedness. 7. Ensure plan maintenance. The plan should be a living document that is updated regularly to remain current with system enhancements and organizational changes

QUESTION 97 An assurance plan that addresses the development and maintenance of an assurance case for software is found in which of the following? A. Request for Proposal (RFP) B. Work statement C. Proposal evaluation D. Needs statement

Correct Answer: B Explanation/Reference: A Statement of Work (SOW) is a document routinely employed in the field of project management. It defines project-specific activities, deliverables and timelines for a vendor providing services to the client.

QUESTION 2 An organization deploys an Internet web application. A security researcher sends an e-mail to a mailing list stating that the web application is susceptible to Cross-Site Scripting (XSS). What type of testing could BEST discover this type of vulnerability? A. Simulation testing B. Automated regression testing C. Fuzz testing D. Integration testing

Correct Answer: C Explanation/Reference: Intellifuzz is a Python script designed to not only determine if a cross-site scripting attack is possible, but also determine the exact payload needed to cleanly break out of the code. Many web scanners and fuzzers operate by using a long list of possible payloads and recording the response to see if the payload is reflected. However, just because a payload is reflected does not mean it will execute. For example, if the payload is reflected as an HTML attribute, a carefully crafted string must be created to first break out of the attribute using quotes, then potentially break out of the tag, then finally launch the script. Intellifuzz aims to take care of crafting the payload for you by first detecting the location of the parameter reflection, then using a number of tests to determine what characters are needed to cause a successful execution.
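Here is a stripped-down illustration of what a reflection-style fuzzer does (this is not Intellifuzz itself): send candidate payloads to a parameter and report which ones come back unencoded. The target URL and parameter name are placeholders; the `requests` library is used for the HTTP calls.

```python
# Stripped-down illustration of reflection-style XSS fuzzing
# (not Intellifuzz itself). Target URL and parameter are placeholders.
import requests

TARGET = "http://testapp.example.com/search"
PARAM = "q"
PAYLOADS = [
    '<script>alert(1)</script>',
    '"><img src=x onerror=alert(1)>',
    "';alert(1);//",
]

for payload in PAYLOADS:
    resp = requests.get(TARGET, params={PARAM: payload}, timeout=5)
    if payload in resp.text:
        # Reflection alone does not prove execution (context matters),
        # but it flags parameters worth crafting a real payload for.
        print(f"possible reflection of {payload!r} via parameter {PARAM!r}")
```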

QUESTION 50 Which of the following is the MOST important practice in designing secure applications? A. Completing design checklists that verify compliance aspects B. Following control frameworks to evaluate security posture C. Relying on common trusted design patterns D. Removing components which have not been approved by management for use

Correct Answer: B Explanation/Reference: A common security framework includes Foundation, Principles and Design Guidelines encompassing and adhering to basic tenets of information security for the development of secure applications. Why: Secure applications protect information assets and contribute to the defense of the company's computing environment and the mission of the company. Framework Outline Foundation The basics; what you need to know before writing a single line of code. · Know your Security Policy, Standards, Guidelines and Procedures · Know your Development Methodology · Know your Programming Language and Compiler/Interpreter Principles Fundamental rules to be followed when writing the code. · Security is part of the design. · Assume a hostile environment. · Use open standards. · Minimize and protect system elements to be trusted. · Protect data at the source. · Limit access to need-to-know. · Authenticate. · Do not subvert in-place security protections. · Fail Securely. · Log. Monitor. Audit. · Accurate system date and time. Design Guidelines Some basic best practices with bite. Proven and successful methods for implementing the code. · Input Validation · Exception handling · Cryptography · Random Numbers · Canonical Representation

QUESTION 75 Who is responsible for authorizing access to application data? A. Data owner B. Data custodian C. Security administrator D. Database administrator (DBA)

Correct Answer: B Explanation/Reference: A data custodian ensures: Access to the data is authorized and controlled Data stewards are identified for each data set Technical processes sustain data integrity Processes exist for data quality issue resolution in partnership with Data Stewards Technical controls safeguard data Data added to data sets are consistent with the common data model Versions of Master Data are maintained along with the history of changes Change management practices are applied in maintenance of the database Data content and changes can be audited

QUESTION 8 Backward compatibility can affect a system in which of the following ways? A. Make it more secure by incorporating good security earlier in its lifecycle B. Make it less secure by making it difficult to replace less secure protocols C. Make it more secure by forcing a system to be more configurable D. Make it less secure by exposing less secure defaults in a system

Correct Answer: B Explanation/Reference: Backward compatible (or sometimes backward-compatible or backwards compatible) refers to a hardware or software system that can successfully use interfaces and data from earlier versions of the system or with other systems. For example, Perl, the scripting language, was designed to be backward compatible with awk, an earlier language that Perl was designed to replace. Backward compatibility is more easily accomplished if the previous versions have been designed to be forward compatible, or extensible, with built-in features such as hooks, plug-ins, or an application program interface (API) that allows the addition of new features. If you keep NTLM authentication in addition to Kerberos authentication, then a newer version of Windows is less secure.

QUESTION 61 Which of the following is the BEST indicator of possible malicious code when performing security code review? A. Lack of descriptive comments in the code B. Code obviously difficult to understand C. Object creation mistakes D. Hard-coded database passwords

Correct Answer: B Explanation/Reference: Code obviously difficult to understand could mean that the developer was trying to hide a malicious feature

QUESTION 92 The purpose of code signing is to provide assurance of which of the following? A. The signer of the application is trusted B. The software has not been subsequently modified C. The private key of the signer has not been compromised D. The application can safely interface with another signed application

Correct Answer: B Explanation/Reference: Code signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed. The process employs the use of a cryptographic hash to validate authenticity and integrity. Code Signing Certificates are used by software developers to digitally sign apps, drivers, and software programs as a way for end-users to verify that the code they receive has not been altered or compromised by a third party. They include your signature, your company's name, and if desired, a timestamp.

QUESTION 56 An attacker wants to spoof updates to a program that runs on a client. Which design element will help prevent this attack? A. Hardcode an Internet Protocol (IP) address for the update server B. Updates must be signed and validated by the client C. The client update program calls back the server using the server's fully qualified name D. Client validates checksum information on the updates

Correct Answer: B Explanation/Reference: Code signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed. The process employs the use of a cryptographic hash to validate authenticity and integrity. Code signing can provide several valuable features. The most common use of code signing is to provide security when deploying; in some programming languages, it can also be used to help prevent namespace conflicts. Almost every code signing implementation will provide some sort of digital signature mechanism to verify the identity of the author or build system, and a checksum to verify that the object has not been modified. It can also be used to provide versioning information about an object or to store other meta data about an object. The efficacy of code signing as an authentication mechanism for software depends on the security of underpinning signing keys. As with other public key infrastructure (PKI) technologies, the integrity of the system relies on publishers securing their private keys against unauthorized access. Keys stored in software on general-purpose computers are susceptible to compromise. Therefore, it is more secure, and best practice, to store keys in secure, tamper-proof, cryptographic hardware devices known as hardware security modules or HSMs.
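To show the design element from the answer (the client validates a digital signature on the update, not just a checksum), here is a minimal sketch using Ed25519 from the `cryptography` package. Key handling and the update contents are placeholders; in a real system the public key ships with the client and the private key never leaves the vendor's signing infrastructure (ideally an HSM, as noted above).

```python
# Sketch: client-side validation of a signed update.
# Key handling and update contents are placeholders.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side (build/release process): sign the update bytes.
private_key = Ed25519PrivateKey.generate()
update_bytes = b"...binary contents of the update package..."
signature = private_key.sign(update_bytes)

# Client side: verify with the pinned public key before installing.
public_key = private_key.public_key()
try:
    public_key.verify(signature, update_bytes)
    print("signature valid - safe to apply update")
except InvalidSignature:
    print("signature invalid - reject the update (possible spoofing)")
```

A plain checksum (option D) would not stop an attacker who can serve both a tampered file and a matching checksum; only a signature tied to a key the attacker does not hold does that.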

QUESTION 1 An organization has signed a contract to build a large Information System (IS) for the United States government. Which framework, guideline, or standard would BEST meet government information processing requirements? A. Control Objectives for Information and Related Technology (COBIT) B. Information Technology Infrastructure Library (ITIL) C. National Institute of Standards and Technology (NIST) D. International Organization for Standardization (ISO) 27000

Correct Answer: C Explanation/Reference: NIST Special Publication 800-171 is rapidly emerging as the benchmark used by the civilian US government and Department of Defense for evaluating the security and privacy posture of nonfederal organizations that seek to contract with the US government.

QUESTION 36 Which of the following is MOST likely to be reported to the stakeholders at the end of each phase of the Software Development Life Cycle (SDLC)? A. Key success factors, deliverables, and costs B. Key success factors, deliverables, and metrics C. Listing of all the vulnerabilities that have been corrected D. Listing of all the vulnerabilities that have been identified

Correct Answer: B Explanation/Reference: Costs are included in key success factors. While measuring the project success it is not enough to verify if the system works or not, it is important to verify how well the system is being used by the user community. That is the true measurement of project success. It is important to have the project metrics based on this guideline so as to measure the project success better. First let us understand the meaning of metrics. Project metrics essentially are objectively measurable parameters pertaining to the project. They play a major role in project control. Traditionally the project metrics were focused on the project deliverable success measurement alone. But this outlook is changing now and is based not only on project justification, but also on the project acceptance by the user community and the resulting ROI. The project metrics for success should be able to measure the benefits of the project to the core business units and also should capture the when and what of project success measurement

QUESTION 31 What is the BEST first step in creating a data lifecycle for an organization? A. Document all data flows B. Define all key and active data C. Rate data by data classification D. Perform a data storage analysis

Correct Answer: B Explanation/Reference: Data life cycle management (DLM) is a policy-based approach to managing the flow of an information system's data throughout its life cycle: from creation and initial storage to the time when it becomes obsolete and is deleted. DLM products automate the processes involved, typically organizing data into separate tiers according to specified policies, and automating data migration from one tier to another based on those criteria. As a rule, newer data, and data that must be accessed more frequently, is stored on faster, but more expensive storage media, while less critical data is stored on cheaper, but slower media. Data Lifecycle Data in the enterprise has a lifecycle. It can be created, used, stored, and even destroyed. Although data storage devices have come down in price, the total cost of storing data in a system is still a significant resource issue. Data that is stored must also be managed from a backup and business continuity/disaster recovery perspective. Managing the data lifecycle is a data owner's responsibility. Ensuring the correct sets of data are properly retained is a business function, one that is best defined by the data owner.

QUESTION 63 Which of the following BEST describes a responsibility of the information asset owner? A. Monitoring compliance to the classification controls by regularly reviewing security controls B. Classifying the information asset in relation to its value and impact on the business, as expressed in terms of confidentiality, integrity, and availability C. Classifying the information asset in relation to the secure storage format that the software must employ, including the specification of valid encryption algorithms D. Establishing procedures and standards to protect the information asset in relation to its value and impact to the business as expressed in terms of confidentiality, integrity, and availability

Correct Answer: B Explanation/Reference: Data owners act in the interests of the enterprise in managing certain aspects of data. The data owner is the party who determines who has specific levels of access associated with specific data elements: who can read, who can write, who can change, delete, and so on. The owner is not to be confused with the custodian, the person who actually has the responsibility for making the change. A good example is in the case of database records. The owner of the data for the master chart of accounts in the accounting system may be the chief financial officer (CFO), but the ability to directly change it may reside with a database administrator (DBA).

QUESTION 39 When transitioning one vendor to another, which issue could create the GREATEST risk related to the potential exposure of Personally Identifiable Information (PII) or personal data? A. Data interference B. Data remanence C. Data aggregation D. Proprietary formats

Correct Answer: B Explanation/Reference: Data remanence is the residual representation of digital data that remains even after attempts have been made to remove or erase the data.

QUESTION 18 Why is a system property used when calculating a Message Digest (MD) of a password? A. To make the digest harder to decrypt B. To improve the randomness of the digest C. To make digests of the same password from other systems unusable D. To provide a hint to the system user to aid in remembering the password

Correct Answer: B Explanation/Reference: Digests are not encrypted. Lookup tables and rainbow tables only work because each password is hashed the exact same way. If two users have the same password, they'll have the same password hashes. We can prevent these attacks by randomizing each hash, so that when the same password is hashed twice, the hashes are not the same. We can randomize the hashes by appending or prepending a random string, called a salt, to the password before hashing. This makes the same password hash into a completely different string every time. To check if a password is correct, we need the salt, so it is usually stored in the user account database along with the hash, or as part of the hash string itself. The salt does not need to be secret. Just by randomizing the hashes, lookup tables, reverse lookup tables, and rainbow tables become ineffective. An attacker won't know in advance what the salt will be, so they can't pre-compute a lookup table or rainbow table. If each user's password is hashed with a different salt, the reverse lookup table attack won't work either.
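The following is a minimal Python sketch (standard library only; all function and variable names are illustrative) of how a per-password random salt makes identical passwords hash to different digests while still allowing verification:

import hashlib
import secrets

def hash_password(password):
    """Return (salt, digest); a fresh random salt is generated per password."""
    salt = secrets.token_bytes(16)   # random salt, stored alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return secrets.compare_digest(candidate, digest)   # constant-time comparison

# The same password hashed twice yields different digests because the salts differ.
s1, d1 = hash_password("correct horse battery staple")
s2, d2 = hash_password("correct horse battery staple")
assert d1 != d2
assert verify_password("correct horse battery staple", s1, d1)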

QUESTION 35 To prevent hijacking attacks through client interfaces on a distributed platform, which of the following actions MUST be taken? A. Break method calls into multiple packets B. Authenticate every method call in an object individually C. Add a checksum to each packet to avoid tampering D. Implement enterprise containers for packet management

Correct Answer: B Explanation/Reference: Each method call should be individually authenticated. Containers are not for every app. To get beyond cool, let me share the good, the bad, and the ugly about containers in the context of enterprise adoption. Containers: What's good for enterprises Containers work well as an architecture. The concept of containers is not original, so it's understood in the enterprise—even if under other labels. The technology is also solid. For example, the new Docker instances are well designed, using a centralized repository, and they can scale thanks to cluster managers such as Docker Swarm and Google Kubernetes. All major public clouds support containers, including AWS, Google, IBM, and Microsoft. As a result, most promises made by container proponents are met. Moreover, if you use containers, they will lead you down the path to sound design of distributed, portable applications. What's not to like? Containers: What's bad for enterprises Containers have not been that successful when used for older applications. Although they're an easy fit for new apps, they are too complex for older applications not designed from the ground up for containers. The cost of moving existing applications to containers is more than proponents originally imagined. You have to redesign the applications to make the most of the container architecture, and that means more money, more time, and more risk. Thus, enterprises shy away from containerizing older applications. Containers: What's ugly for enterprises What's ugly about containers is the confusion that the ignorance about them creates. Rarely do I run into enterprises that understand the value of containers. For example, there's a lot of discussion about choosing between containers and virtualization, but that debate has nothing to do with the real value of this technology. The only way to combat the ignorance is education. But the hype leads the day, not education, so confusion will be with us until IT proactively educates itself rather than relying on the marketing by proponents and vendors. There is thoughtful, useful marketing available, but it's hard to identify it until you understand the underlying issues of containers. The confusion around containers is no different than the confusion around any new technology. But it's different from many previous trumpeted technologies in that containers are quite useful and worth implementing. The fact that container technologies have come at the same time that enterprises are moving applications to the cloud—and thus requiring decisions regarding what to do with older applications—could be the perfect storm of technology value. Done naively, however, it could be the perfect storm of misapplied technologies.
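As a rough illustration of authenticating every method call individually (a hypothetical Python sketch, not tied to any specific distributed platform or framework), a per-call credential check might look like this:

import functools

class AuthenticationError(Exception):
    pass

def authenticated(method):
    """Re-check the caller's credentials on every call, not just at session setup."""
    @functools.wraps(method)
    def wrapper(self, caller_token, *args, **kwargs):
        if not self.verify_token(caller_token):        # hypothetical verification hook
            raise AuthenticationError("method call rejected")
        return method(self, *args, **kwargs)
    return wrapper

class AccountService:
    def __init__(self, valid_tokens):
        self._valid_tokens = set(valid_tokens)

    def verify_token(self, token):
        return token in self._valid_tokens             # placeholder check for the sketch

    @authenticated
    def get_balance(self, account_id):
        return {"account": account_id, "balance": 0}

service = AccountService(valid_tokens={"token-123"})
print(service.get_balance("token-123", "ACCT-1"))      # authenticated call succeeds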

QUESTION 70 Which of the following is a characteristic of poly-instantiation when securing databases? A. Gives incorrect answer irrespective of user's access level B. Gives correct answer irrespective of user's access level C. Gives answer based on sensitivity label D. Allows processing of different data types

Correct Answer: C Explanation/Reference: Polyinstantiation in computer science is the concept of a type (class, database row or otherwise) being instantiated into multiple independent instances (objects, copies). It may also indicate, as in the case of database polyinstantiation, that two different instances have the same name (identifier, primary key). In a multilevel secure database, this allows the same primary key to hold different rows at different sensitivity labels, so a query returns the answer appropriate to the user's label and prevents lower-cleared users from inferring the existence of higher-classified data.
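A toy Python sketch of the idea (the table, labels, and helper names are hypothetical, not a real DBMS feature demonstration): the same key exists at more than one sensitivity label, and the answer returned depends on the user's label.

# Toy multilevel table: the same primary key exists at more than one sensitivity label.
FLIGHTS = [
    {"flight": "F-117", "label": "UNCLASSIFIED", "cargo": "supplies"},
    {"flight": "F-117", "label": "SECRET",       "cargo": "reconnaissance gear"},
]

LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1}

def query(flight_id, user_label):
    """Return the most sensitive row the user is cleared to see for this key."""
    visible = [r for r in FLIGHTS
               if r["flight"] == flight_id and LEVELS[r["label"]] <= LEVELS[user_label]]
    return max(visible, key=lambda r: LEVELS[r["label"]], default=None)

print(query("F-117", "UNCLASSIFIED"))  # low-cleared user sees the cover-story row
print(query("F-117", "SECRET"))        # cleared user sees the sensitive row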

QUESTION 11 Which of the following BEST facilitates traceability of a supply chain? A. Establish a Service Level Agreement (SLA) B. Identify elements, processes, and actors C. Assure sustainment activities and processes D. Strengthen delivery mechanisms

Correct Answer: B Explanation/Reference: Five key things you need to know about traceability and how it affects your business: One of the most critical drivers for transparency in supply chains is an increased consumer demand to know more about the products they are buying: what is in them, where they came from, the conditions under which they were made, how they got to the consumer, and how they will be disposed of. Traceability must be a collaborative effort between companies and stakeholders. The most successful traceability schemes are multi-stakeholder, involving business, government and other stakeholders and organizations who have an interest in the sustainability of the product or commodity. To ensure traceability along the supply chain, a system is needed that records and follows the trail as products, parts and materials come from suppliers and are processed and ultimately distributed as end products - including information on the components of products, parts and materials, product quality, safety and labelling. Traceability is a useful tool for companies to prove claims and attributes of sustainable products. Traceability certifications are becoming validated as proof of sustainability requirements. As more governments and companies adopt the requirement of proof that a company meets sustainability requirements (particularly in regard to procurement), traceability becomes a viable and appealing way for businesses to meet both sustainability requirements and expectations of their customers.

QUESTION 85 For a web application, which of the following BEST explains why a password should be securely hashed before storing? A. To prevent the browser from caching the password B. To prevent compromise of credentials leaked from a system C. To reduce the risk of users choosing weak passwords D. To reduce the risk of a Man-in-the-Middle attack

Correct Answer: B Explanation/Reference: In cryptography, a salt is random data that is used as an additional input to a one-way function that "hashes" a password or passphrase. The primary function of salts is to defend against dictionary attacks or against its hashed equivalent, a pre-computed rainbow table attack.

QUESTION 58 Provenance metadata should be an element of which of the following? A. Proactively checking, maintaining, and improving data quality B. Proactively reusing, maintaining, and improving data quality C. Proactively reusing, maintaining, and improving data resilience D. Proactively checking, maintaining, and improving data resilience

Correct Answer: B Explanation/Reference: Provenance metadata is information concerning the creation, attribution, or version history of managed data. Provenance metadata indicates the relationship between two versions of data objects and is generated whenever a new version of a dataset is created. Examples include: (i) the name of the program that generated the new version, (ii) the commit id of the program in a code version control system like GitHub, (iii) the identifiers of any other datasets or data objects that may have been used in creating the new version. Provenance information is gathered along the data lifecycle as part of curation processes. A finer level of provenance metadata would be concerned only with data flowing between various stores such as curated databases and managed repositories. Provenance metadata is designed to allow queries over the relationship between versions, and includes either or both fine-grained and coarse-grained provenance data. Different applications may store different provenance data. Provenance refers to the sources of information, such as entities and processes, involved in producing or delivering an artifact. The provenance of information is crucial to making determinations about whether information is trusted, how to integrate diverse information sources, and how to give credit to originators when reusing information. In an open and inclusive environment such as the Web, users find information that is often contradictory or questionable. People make trust judgments based on provenance that may or may not be explicitly offered to them. Reasoners in the Semantic Web will need explicit representations of provenance information in order to make trust judgments about the information they use. With the arrival of massive amounts of Semantic Web data (e.g., via the Linked Open Data community) information about the origin of that data, i.e., provenance, becomes an important factor in developing new Semantic Web applications. Therefore, a crucial enabler of the Semantic Web deployment is the explicit representation of provenance information that is accessible to machines, not just to humans.
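A small illustrative Python sketch of the kind of provenance record described above, attached when a new dataset version is created (all field names and values are hypothetical):

import datetime
import json

# Illustrative provenance record attached when a new dataset version is created.
provenance = {
    "dataset_id": "sales_summary",
    "version": "2024-06-01.v2",
    "generated_by": "aggregate_sales.py",            # name of the generating program
    "program_commit": "9f2c1ab",                     # commit id in the code repository
    "derived_from": ["raw_sales:2024-06-01.v1",      # identifiers of input datasets
                     "store_master:2024-05-15.v3"],
    "created_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}

print(json.dumps(provenance, indent=2))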

QUESTION 44 During the requirements phase of the Software Development Life Cycle (SDLC), security MUST be defined as which of the following? A. Set of features B. High-level requirements C. Part of the threat model requirements D. Series of abuse cases

Correct Answer: B Explanation/Reference: Requirements Gathering/Analysis. This phase is critical to the success of the project. Expectations (whether of a client or your team) need to be fleshed out in great detail and documented. This is an iterative process with much communication taking place between stakeholders, end users and the project team. The following techniques can be used to gather requirements: Identify and capture stakeholder requirements using customer interviews and surveys. Build multiple use cases to describe each action that a user will take in the new system. Prototypes can be built to show the client what the end product will look like. Tools like Omnigraffle, HotGloo and Balsamiq are great for this part of the process. In a corporate setting, this means taking a look at your customers, figuring out what they want, and then designing what a successful outcome would look like in a new bit of software. Requirement gathering and analysis: Business requirements are gathered in this phase. This phase is the main focus of the project managers and stakeholders. Meetings with managers, stakeholders and users are held in order to determine the requirements, such as: Who is going to use the system? How will they use the system? What data should be input into the system? What data should be output by the system? These are general questions that get answered during a requirements gathering phase. After requirement gathering, these requirements are analyzed for their validity, and the possibility of incorporating the requirements in the system to be developed is also studied. Finally, a Requirement Specification document is created which serves as a guideline for the next phase of the model. The testing team follows the Software Testing Life Cycle and starts the Test Planning phase after the requirements analysis is completed.

QUESTION 48 Software risk management is driven by which of the following? A. Product managers and developers B. Business context C. Developers and Quality Assurance (QA) D. Risk mitigation context

Correct Answer: B Explanation/Reference: Risk appetite is the total exposed amount that an organization wishes to undertake on the basis of risk-return trade-offs for one or more desired and expected outcomes. As such, risk appetite is inextricably linked with— and may vary according to—expected returns. Risk appetite statements may be expressed qualitatively and/or quantitatively and managed with respect to either an allocated individual initiative and/or in the aggregate. Think of risk appetite as the amount that an organization actively ventures in pursuit of rewards—also known as its goals and objectives. Risk tolerance is the amount of uncertainty an organization is prepared to accept in total or more narrowly within a certain business unit, a particular risk category or for a specific initiative. Expressed in quantitative terms that can be monitored, risk tolerance often is communicated in terms of acceptable or unacceptable outcomes or as limited levels of risk. Risk tolerance statements identify the specific minimum and maximum levels beyond which the organization is unwilling to lose. The range of deviation within the expressed boundaries would be bearable. However, exceeding the organization's established risk tolerance level not only may imperil its overall strategy and objectives, in the aggregate doing so may threaten its very survival. This can be due to the consequences in terms of cost, disruption to objectives or in reputation impact. Risk appetite and tolerance are generally set by the board and/or executive management and are linked with the company's strategy. They capture the organizational philosophy desired by the board for managing and taking risks, help frame and define the organization's expected risk culture and guide overall resource allocation. Risk culture consists of the norms and traditions of behavior of individuals and of groups within an organization that determine the way in which they identify, understand, discuss and act on the risk the organization confronts and takes. Organizations get in trouble when individuals, knowingly or unknowingly, act outside of the expected risk culture, or when the expected risk culture either is not well understood or enforced.

QUESTION 15 When anonymizing data sets for testing, it is important to do which of the following? A. Ensure all information is a sequence of random characters B. Preserve links between records in different tables C. Conduct a Vulnerability Assessment (VA) D. Understand the transferred risk

Correct Answer: B Explanation/Reference: Sometimes you just need some data to test and stress things. But randomly generated data is awful: it doesn't have realistic distributions, and it isn't easy to understand whether your results are meaningful and correct. Real or quasi-real data is best. Whether you're looking for a couple of megabytes or many terabytes, the following sources of data might help you benchmark and test under more realistic conditions. The venerable sakila test database: small, fake database of movies. The employees test database: small, fake database of employees. The Wikipedia page-view statistics database: large, real website traffic data. The IMDB database: moderately large, real database of movies. The FlightStats database: flight on-time arrival data, easy to import into MySQL. The Bureau of Transportation Statistics: airline on-time data, downloadable in customizable ways. The airline on-time performance and causes of delays data from data.gov: ditto. The statistical review of world energy from British Petroleum: real data about our energy usage. The Amazon AWS Public Data Sets: a large variety of data such as the mapping of the Human Genome and the US Census data. The Weather Underground weather data: customize and download as CSV files.
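A minimal Python sketch of anonymizing test data while preserving links between records in different tables: a keyed, deterministic pseudonym replaces the real identifier, so the join keys still match (the key, table contents, and helper names are illustrative assumptions):

import hashlib
import hmac

SECRET_KEY = b"rotate-and-protect-this-key"   # illustrative key, kept out of the test data

def pseudonymize(value):
    """Deterministic keyed hash: the same input always maps to the same token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

customers = [{"customer_id": "C-1001", "name": "Alice Example"}]
orders    = [{"order_id": "O-9", "customer_id": "C-1001", "total": 42.50}]

anon_customers = [{"customer_id": pseudonymize(c["customer_id"]), "name": "REDACTED"}
                  for c in customers]
anon_orders    = [{**o, "customer_id": pseudonymize(o["customer_id"])} for o in orders]

# The join key still lines up across tables even though the real id is hidden.
assert anon_orders[0]["customer_id"] == anon_customers[0]["customer_id"]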

QUESTION 26 Regression bugs can BEST be identified through which of the following? A. End-user acceptance testing B. Repeatable structured testing C. Repeatable simulation testing D. Integration testing

Correct Answer: B Explanation/Reference: Structured testing is the most thorough, but also the most time-consuming, form of testing. Structural testing, also known as glass box testing or white box testing, is an approach where the tests are derived from knowledge of the software's structure or internal implementation. Other names for structural testing include clear box testing, open box testing, logic driven testing, and path driven testing. What is regression testing in software? When any modification or change is made to the application, even a small change to the code, it can introduce unexpected issues. Along with the new changes it becomes very important to test whether the existing functionality is still intact. This can be achieved by doing regression testing. The purpose of regression testing is to find bugs that may have been introduced accidentally because of the new changes or modifications. During confirmation testing the defect got fixed and that part of the application started working as intended. But there is a possibility that the fix may have introduced or uncovered a different defect elsewhere in the software. The way to detect these 'unexpected side-effects' of fixes is to do regression testing, which also ensures that the bugs found earlier do not reappear. Usually regression testing is done with automation tools, because the same tests are carried out again and again and it would be very tedious and time consuming to do this manually. During regression testing the test cases are prioritized depending on the changes made to the feature or module in the application; the feature or module that was changed is given priority for testing. This testing becomes very important when there are continuous modifications or enhancements to the application or product, because these changes or enhancements should not introduce new issues into the existing tested code. This helps maintain the quality of the product along with the new changes in the application. Example: Let's assume there is an application that maintains the details of all the students in a school. The application has four buttons: Add, Save, Delete, and Refresh. All of the buttons' functionality works as expected. Recently a new button, 'Update', was added to the application. The 'Update' button functionality is tested and confirmed to be working as expected. At the same time, it is very important to verify that the introduction of this new button does not impact the functionality of the other existing buttons. Along with the 'Update' button, all the other buttons' functionality is tested in order to find any new issues in the existing code. This process is known as regression testing. Types of regression testing techniques: There are four types of regression testing techniques. They are as follows: 1) Corrective Regression Testing: Corrective regression testing can be used when there is no change in the specifications and test cases can be reused. 2) Progressive Regression Testing: Progressive regression testing is used when the modifications are made in the specifications and new test cases are designed. 3) Retest-All Strategy: The retest-all strategy is very tedious and time consuming because it reuses all tests, which results in the execution of unnecessary test cases. When only a small modification or change is made to the application, this strategy is not useful.
4) Selective Strategy: In the selective strategy we use a subset of the existing test cases to cut down the retesting effort and cost. If any changes are made to program entities, e.g. functions, variables, etc., then the affected test units must be rerun. The difficult part here is to find the dependencies between a test case and the program entities it covers. When to use it: Regression testing is used when any new feature is added, any enhancement is made, any bug is fixed, or any performance-related issue is fixed. Advantages of regression testing: It helps us make sure that changes such as bug fixes or enhancements to the module or application have not impacted the existing tested code. It ensures that the bugs found earlier do not reappear. Regression testing can be done using automation tools. It helps improve the quality of the product. Disadvantages of regression testing: If regression testing is done without automated tools it can be very tedious and time consuming, because the same set of test cases is executed again and again. Regression testing is required even when a very small change is made in the code, because this small modification can bring unexpected issues into the existing functionality.
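A minimal sketch of a repeatable structured regression test using Python's unittest (the function and defect number are hypothetical), showing how a previously fixed bug is kept under test so it cannot silently return:

import unittest

def apply_discount(price, percent):
    # Previously fixed bug: a 0% discount used to raise an error in this hypothetical module.
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    def test_existing_behaviour_still_works(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_bug_1234_zero_percent_discount(self):
        # Re-run on every build so the old defect cannot silently return.
        self.assertEqual(apply_discount(100.0, 0), 100.0)

if __name__ == "__main__":
    unittest.main()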

QUESTION 87 Which of the following BEST describes an evaluation of a software product using Common Criteria (CC) methodologies? A. Assessment of a target environment against defined criteria B. Assessment of a Target of Evaluation (TOE) against defined criteria C. Evaluation of a Security Target (ST) defined by a reference monitor D. Evaluation of a Protection Profile (PP) defined by a reference monitor

Correct Answer: B Explanation/Reference: The Common Criteria use specific terminology to describe activity associated with the framework. The Target of Evaluation (TOE) is the product or system that is being evaluated. The Security Target (ST) is the security properties associated with a TOE. The Protection Profile (PP) is a set of security requirements associated with a class of products, i.e., firewalls have PPs and operating systems have PPs, but these may differ. PPs help streamline the comparison of products within product classes. The output of the Common Criteria process is an Evaluation Assurance Level (EAL), a set of seven levels, from 1, the most basic, through 7, the most comprehensive. The higher the EAL value, the higher the degree of assurance that a TOE meets the claims. Higher EAL does not indicate greater security.

QUESTION 73 Which of the following is used to certify the assurance level of a software application? A. Open Web Application Security Project (OWASP) B. Common Criteria (CC) C. Trusted Computing Base (TCB) D. Capability Maturity Model Integration (CMMI)

Correct Answer: B Explanation/Reference: The Evaluation Assurance Level (EAL1 through EAL7) of an IT product or system is a numerical grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999. ... To achieve a particular EAL, the computer system must meet specific assurance requirements.

QUESTION 24 Which of the following BEST assures data integrity when crossing trust boundaries in an Internet-based distributed processing system? A. Encrypt the message using Advanced Encryption Standard (AES) 256 B. Use Transport Layer Security (TLS) to secure the communication C. Use a Web Application Firewall (WAF) to validate the input D. Use Web Services Security (WSS) in the application layer

Correct Answer: B Explanation/Reference: The TLS protocol provides communications security over the Internet. The protocol allows client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery.
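A short Python sketch (standard library only; the host name is illustrative) of opening a TLS-protected connection with certificate and hostname verification enabled:

import ssl
import http.client

# The default context verifies the server certificate chain and hostname.
context = ssl.create_default_context()

conn = http.client.HTTPSConnection("example.org", 443, context=context)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)
print("negotiated protocol:", conn.sock.version())   # e.g. 'TLSv1.3'
conn.close()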

QUESTION 102 A web application has server-side logging of anonymous user activity containing secure one-way hashes of each client Internet Protocol (IP) address. The hash is salted with a 256-byte cryptographic random number that changes daily and is not stored on disk. Which of the following is the GREATEST privacy concern? A. The IP address can be discovered from the hash using a brute force attack B. The hash can be used to correlate anonymous user actions C. Anonymous user actions should not be logged D. There is no privacy concern because the IP address cannot be revealed from the secure hash

Correct Answer: B Explanation/Reference: The salt, and therefore the hash, changes only daily, so within a single day the same IP address always produces the same hash value, which allows an observer to correlate all of an anonymous user's actions for that day.
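A toy Python sketch of the concern (the salt size and helper names are illustrative, not the 256-byte value from the question): because the salt changes only once per day, the same address hashes to the same token all day, so the actions can be linked together.

import hashlib
import secrets

daily_salt = secrets.token_bytes(32)   # regenerated once per day, never written to disk

def log_token(ip):
    return hashlib.sha256(daily_salt + ip.encode()).hexdigest()[:16]

# Two requests from the same address on the same day produce the same token,
# so every action by that "anonymous" visitor can be correlated.
assert log_token("203.0.113.7") == log_token("203.0.113.7")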

QUESTION 17 Which of the following is the PRIMARY goal of incident handling in software operations? A. Retrieve evidence B. Contain and repair damage C. Review and update the firewall rules D. Establish the communication protocol

Correct Answer: B Explanation/Reference: The primary goal of the Incident Management process is to restore normal service operation as quickly as possible and minimize the adverse impact on business operations, thus ensuring that the best possible levels of service quality and availability are maintained.

QUESTION 34 An organization wants to provide Internet web access to a legacy database application hosted on its internal network. Which of the following would be the BEST approach to implement the portal? A. Implement a web portal on the internal network with the legacy application B. Implement a web portal with a web server on the Demilitarized Zone (DMZ) network that is proxied to the legacy application C. Implement a web portal on the DMZ network that is connected to a staging server on the internal network D. Implement a web portal on the DMZ network and move the legacy application to the DMZ network

Correct Answer: B Explanation/Reference: The web portal should be on the DMZ and the database application on the internal network. Proxying the connection provides isolation and security.

QUESTION 6 What is the BEST step to take in order to minimize risk when introducing a critical vendor security patch into the production environment? A. Ask the developer to test the patch in the development environment first B. Gain management support for installation directly into production since the patch is critical C. Follow the change management process for critical patches D. Contact the system administrator and request immediate installation into production

Correct Answer: C Explanation/Reference: Test all patches before deploying them. Patches should be tested in a dedicated test environment, not the development environment, before being deployed to the production environment, and the deployment should follow the change management process.

QUESTION 46 Which of the following is required when implementing the Application Programming Interface (API) for a trusted application at the Trusted Computing Base (TCB) boundary? A. Perform bounds checking on all return arguments B. Verify caller access permission on all input arguments C. Caller validates all input arguments D. Implement fault handlers for all fatal faults caused by API violations

Correct Answer: B Explanation/Reference: Trusted Computing Base The term trusted computing base (TCB) is used to describe the combination of hardware and software components that are employed to ensure security. The Orange Book, the U.S. government's Trusted Computer System Evaluation Criteria (TCSEC), provides a formal definition for the TCB: the totality of protection mechanisms within a computer system, including hardware, firmware, and software, the combination of which is responsible for enforcing a computer security policy. The trusted computing base should not be confused with trustworthy computing or trusted computing; the trusted computing base is an idea that predates both of these other terms and goes to the core of how a computer functions. The kernel and reference monitor functions are part of the TCB, as these elements are the instantiation of the security policy at the device level. Functions that operate above the TCB level—applications, for instance—are not part of the TCB and can become compromised without affecting the TCB itself.

QUESTION 59 Which of the following controls will BEST prevent a system from accepting invalid data? A. Blacklisting input validation B. Whitelisting input validation C. Client-side input validation D. Open text input validation

Correct Answer: B Explanation/Reference: Whitelisting is stronger than blacklisting. Server-side input validation is stronger than client-side input validation.
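A minimal Python sketch of whitelisting input validation performed on the server side (the pattern and field are illustrative): only input that matches the explicitly allowed form is accepted, and everything else is rejected.

import re

# Whitelist: accept only what is explicitly known to be good and reject everything else.
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def is_valid_username(value):
    return bool(USERNAME_PATTERN.fullmatch(value))

print(is_valid_username("alice_01"))          # True
print(is_valid_username("alice'; DROP--"))    # False: not in the allowed character set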

QUESTION 12 What is the correct sequence of steps in the development of secure software? A. Define security model, Define security requirements, Analyze functional requirements B. Analyze functional requirements, Define security requirements, Define security model C. Define security requirements, Analyze functional requirements, Define security model D. Analyze functional requirements, Choose the development process, Define security model

Correct Answer: B Explanation/Reference: You have to know what you are securing before you can secure it: the functional requirements are analyzed first, the security requirements are then derived from them, and the security model is defined last.

QUESTION 3 Which of the following mitigates the risk of cryptographic keys being stolen by cold-boot attacks? A. Use symmetric keys instead of asymmetric keys B. Overwrite the keys in memory once they are no longer needed C. Use passwords to derive the cryptographic keys D. Combine cryptographic keys with random salt values

Correct Answer: B Explanation/Reference: After a computer is powered off, the data in RAM disappears rapidly, but it can remain in RAM for up to several minutes after shutdown. An attacker with access to the computer before the data disappears completely could recover important data from your session. This can be achieved using a technique called a cold boot attack. To prevent this attack, the data in RAM is overwritten by random data when shutting down. This erases all traces from your session on that computer.
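As a rough illustration only: Python cannot strictly guarantee memory wiping (garbage collection and immutable objects may leave copies), but the idea of overwriting key material as soon as it is no longer needed can be sketched with a mutable buffer; stronger guarantees come from compiled languages with primitives such as explicit_bzero.

import secrets

key = bytearray(secrets.token_bytes(32))   # keep the key in a mutable buffer

try:
    pass  # ... use `key` for encryption or decryption here ...
finally:
    # Overwrite the buffer as soon as the key is no longer needed so it does not
    # linger in RAM. (Python cannot fully guarantee this; compiled languages can
    # use primitives such as explicit_bzero for a stronger guarantee.)
    for i in range(len(key)):
        key[i] = 0

print(key == bytearray(32))   # True: the buffer now holds only zeros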

QUESTION 78 A programmer who is implementing the principle of the least common mechanism while developing a module uses which of the following? A. Procedural methodologies and reusability techniques B. Global variables instead of local ones to implement the code C. Local variables instead of global ones to implement the code D. Only shared resources, such as files and variables

Correct Answer: C Explanation/Reference: The concept of least common mechanism refers to a design method designed to prevent the inadvertent sharing of information. Having multiple processes share common mechanisms leads to a potential information pathway between users or processes. Having a mechanism that services a wide range of users or processes places a more significant burden on the design of that process to keep all pathways separate. When presented with a choice between a single process that operates on a range of supervisory and subordinate-level objects and/or a specialized process tailored to each, choosing the separate process is the better choice. The concepts of least common mechanism and leveraging existing components can place a designer at a conflicting crossroad. One concept advocates reuse and the other separation. The choice is a case of determining the correct balance associated with the risk from each.
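A toy Python contrast (hypothetical functions) of why the principle favors local variables over a shared global mechanism: the shared buffer leaks one caller's data to the next, while the local buffer keeps each call isolated.

# Shared (global) mechanism: every caller reuses the same buffer, so data can leak
# from one request into the next.
shared_buffer = []

def process_shared(items):
    shared_buffer.extend(items)     # leftovers from earlier callers are still here
    return list(shared_buffer)

# Separate (local) mechanism: each call gets its own buffer, so nothing is shared.
def process_local(items):
    local_buffer = list(items)
    return local_buffer

print(process_shared(["alice-secret"]))
print(process_shared(["bob-data"]))     # Bob also sees Alice's data: ['alice-secret', 'bob-data']
print(process_local(["carol-data"]))    # Carol sees only her own data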

QUESTION 54 Build managers should maintain complete symbol files and environment variable lists for application binaries to support which of the following? A. Pre-release white-box testing and unit testing B. Pre-release incident tracking and static analysis C. Post-release version control and penetration testing D. Post-release crash analysis and vulnerability tracking

Correct Answer: C Explanation/Reference: A debug symbol is a special kind of symbol that attaches additional information to the symbol table of an object file, such as a shared library or an executable. This information allows a symbolic debugger to gain access to information from the source code of the binary, such as the names of identifiers, including variables and routines. The symbolic information may be compiled together with the module's binary file, or distributed in a separate file, or simply discarded during the compilation and/or linking. This information can be helpful while trying to investigate and fix a crashing application or any other fault. In computer software engineering, revision control is any kind of practice that tracks and provides control over changes to source code. Software developers sometimes use revision control software to maintain documentation and configuration files as well as source code. As teams design, develop, and deploy software, it is common for multiple versions of the same software to be deployed in different sites and for the software's developers to be working simultaneously on updates. Bugs or features of the software are often only present in certain versions (because of the fixing of some problems and the introduction of others as the program develops). Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and run different versions of the software to determine in which version(s) the problem occurs. It may also be necessary to develop two versions of the software concurrently: for instance, where one version has bugs fixed, but no new features (branch), while the other version is where new features are worked on (trunk). At the simplest level, developers could simply retain multiple copies of the different versions of the program, and label them appropriately. This simple approach has been used on many large software projects. While this method can work, it is inefficient as many near-identical copies of the program have to be maintained. This requires a lot of self-discipline on the part of developers, and often leads to mistakes. Since the code base is the same, it also requires granting read-write-execute permission to a set of developers, and this adds the pressure of someone managing permissions so that the code base is not compromised, which adds more complexity. Consequently, systems to automate some or all of the revision control process have been developed. This ensures that the majority of management of version control steps is hidden behind the scenes. Moreover, in software development, legal and business practice and other environments, it has become increasingly common for a single document or snippet of code to be edited by a team, the members of which may be geographically dispersed and may pursue different and even contrary interests. Sophisticated revision control that tracks and accounts for ownership of changes to documents and code may be extremely helpful or even indispensable in such situations. Revision control may also track changes to configuration files, such as those typically stored in /etc or /usr/local/etc on Unix systems. This gives system administrators another way to easily track changes made and a way to roll back to earlier versions should the need arise.

QUESTION 71 What is the PRIMARY use of a proxy plug-in by a skilled tester browsing a web application? A. To ensure anonymity of the tester B. To catalog all web site content using an automated spider C. To modify requests to and responses from the web application D. To improve performance through the caching of web application traffic

Correct Answer: C Explanation/Reference: A proxy plug-in (an intercepting proxy) allows a skilled tester to view and modify requests to and responses from the web application in transit. The same pre- and post-processing idea appears in Angular's HTTP interceptors: preprocessing happens before requests are executed and can be used to change request configurations, while postprocessing happens once responses have been received and can transform them. Global error handling, authentication, loading animations, and many more cross-cutting concerns can be implemented with HTTP interceptors.

QUESTION 57 Which of the following is the BEST example of a deliverable associated with an application security milestone? A. Interconnection Security Agreement (ISA) B. Integrated Baseline Review (IBR) C. Requirements Traceability Matrix (RTM) D. Key Performance Indicator (KPI)

Correct Answer: C Explanation/Reference: The Requirements Traceability Matrix (RTM) is a document that links requirements throughout the validation process. The purpose of the Requirements Traceability Matrix is to ensure that all requirements defined for a system are tested in the test protocols. Security Requirements Traceability Matrix (SRTM) is a grid that supplies documentation and a straightforward presentation of the required elements for security of a system. Integrated Baseline Review (IBR) An Integrated Baseline Review (IBR) is a joint assessment conducted by the government Program Manager (PM) and the contractor to establish a mutual understanding of the Performance Measurement Baseline (PMB).

QUESTION 38 A principle of software security is to configure all defaults to be which of the following? A. Redundant B. Impervious to change C. Fail-safe D. Role-based

Correct Answer: C Explanation/Reference: The eight software security design principles are: 1. Principle of Least Privilege A subject should be given only those privileges that it needs in order to complete its task. The function of a subject should control the assignment of rights, not the identity of the subject. This means that if your boss demands root access to a UNIX system that you administer, she should not be given that privilege unless she has a task that absolutely requires such a level of access. If possible, the elevated rights of an individual should be removed as soon as those rights are no longer required, e.g., the sudo and su programs set uid only when needed. 2. Principle of Fail-Safe Defaults Unless a subject is given explicit access to an object, it should be denied access to that object. This principle restricts how privileges are initialized when a subject or object is created. Basically, this principle is similar to the "Default Deny" principle that we talked about in the 6 dumbest ideas in computer security. Whenever access, privilege, or some other security-related attribute is not granted, that attribute should be denied by default (a minimal sketch of this default-deny behavior appears after this list of principles). 3. Principle of Economy of Mechanism Security mechanisms should be as simple as possible. This principle simplifies the design and implementation of security mechanisms. If the design and implementation are simple, fewer possibilities exist for errors and the checking and testing process is less complex. Interfaces between security modules are a suspect area and should be as simple as possible. 4. Principle of Complete Mediation All accesses to objects should be checked to ensure that they are allowed. This principle restricts the caching of information, which often leads to simpler implementations of mechanisms. Every time someone tries to access an object, the system should authenticate the privileges associated with that subject. What happens in most systems is that those privileges are cached away for later use: the subject's privileges are authenticated once at the initial access, and for subsequent accesses the system assumes that the same privileges are in force for that subject and object. This may or may not be the case. The operating system should mediate each and every access to an object, e.g., DNS information is cached; what if it is poisoned? 5. Principle of Open Design The security of a mechanism should not depend on the secrecy of its design or implementation. This principle suggests that complexity does not add security, and it is a rejection of "security through obscurity". The principle applies not only to cryptographic systems but also to other computer security related systems, e.g., DVD players and Content Scrambling System (CSS) protection. 6. Principle of Separation of Privilege A system should not grant permission based on a single condition. This principle is restrictive because it limits access to system entities. The principle is similar to the separation of duty principle that we talked about in the integrity security policy unit. Thus, before privilege is granted, two or more checks should be performed; e.g., to su (change) to root, two conditions must be met: 1. the user must know the root password, and 2. the user must be in the right group (wheel). 7. Principle of Least Common Mechanism Mechanisms used to access resources should not be shared. This principle is also restrictive because it limits the sharing of resources. Sharing resources provides a channel along which information can be transmitted.
Hence, sharing should be minimized as much as possible. If the operating system provides support for virtual machines, the operating system will enforce this privilege automatically to some degree. 8. Principle of Psychological Acceptability Security mechanisms should not make the resource more difficult to access than if the security mechanism were not present. Do you believe this? This principle recognizes the human element in computer security. If security-related software or systems are too complicated to configure, maintain, or operate, the user will not employ the requisite security mechanisms. For example, if a password is rejected during a password change process, the password changing program should state why it was rejected rather than giving a cryptic error message. At the same time, programs should not impart unnecessary information that may lead to a compromise in security. In practice, the principle of psychological acceptability is interpreted to mean that the security mechanism may add some extra burden, but that burden must be both minimal and reasonable. e.g. When you enter a wrong password, the system should only tell you that the user id or password was wrong. It should not tell you that only the password was wrong as this gives the attacker information.
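Returning to the fail-safe defaults principle referenced in the correct answer, here is the minimal default-deny sketch promised above (Python, with a hypothetical rule table): anything not explicitly granted is denied.

# Fail-safe defaults: access is denied unless a rule explicitly grants it.
ACCESS_RULES = {
    ("alice", "report.pdf"): "read",
}

def is_allowed(user, resource, action):
    granted = ACCESS_RULES.get((user, resource))
    return granted == action          # anything not explicitly granted falls through to deny

print(is_allowed("alice", "report.pdf", "read"))   # True: explicitly granted
print(is_allowed("alice", "report.pdf", "write"))  # False: not granted, denied by default
print(is_allowed("bob", "report.pdf", "read"))     # False: unknown subject, denied by default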

QUESTION 55 When an application is loaded for execution on a workstation, which of the following provides the MOST assurance that the application is unaltered by an attack? A. The application file is distributed with a Cyclic Redundancy Check (CRC) by the manufacturer, and the CRC is validated at runtime B. The application file is signed with the manufacturer's private key, and the signature is validated during installation C. The application file is signed with the manufacturer's private key, and the signature is validated at runtime D. The application file is distributed with a CRC by the manufacturer, and the CRC is validated during installation

Correct Answer: C Explanation/Reference: Authorization based on signatures can be performed: When code is installed by the provisioning system For several previous releases and in the new P2 provisioning framework, Eclipse has the ability to check signatures as bundles are provisioned into the system. As the provisioning system encounters bundles, it automatically performs authentication of the code signer and will prompt if a signer is not trusted according to the system configuration. The end user will be presented with a list of untrusted signers, and choosing to trust will allow the bundles to be installed into the platform. When code is loaded by the runtime Since 3.4, the Equinox runtime has had the ability to check the signature of code as it is loaded. The benefit to this feature beyond checking signatures during provisioning is the ability to dynamically remove trust and disable code should an exploit be exposed in deployed code. In order to enable signature-based authorization at load time, the following VM argument must be passed: -Dosgi.signedcontent.support=authority See the runtime options page for more information about the osgi.signedcontent.support runtime variable.
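A hedged Python sketch of validating a publisher's signature over an application file before it is loaded; this assumes the third-party cryptography package and RSA PKCS#1 v1.5 signatures, and the file names are illustrative.

# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_before_load(app_path, sig_path, pubkey_path):
    """Refuse to load the application file unless its signature checks out."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(app_path, "rb") as f:
        app_bytes = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, app_bytes, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# if not verify_before_load("app.bin", "app.sig", "publisher_pub.pem"):
#     raise SystemExit("refusing to run: signature check failed")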

QUESTION 47 The metric of security defects per Thousand Lines of Code (KLOC) is used in the development phase of the Software Development Life Cycle (SDLC) to do which of the following? A. Certify a code block as complete B. Choose the best programming language for the project C. Provide feedback on the quality of the programmer's output D. Identify the design defects in the code

Correct Answer: C Explanation/Reference: Bugs per lines of code The book "Code Complete" by Steve McConnell has a brief section about error expectations. He basically says that the range of possibilities can be as follows: (a) Industry Average: "about 15 - 50 errors per 1000 lines of delivered code." He further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques. (b) Microsoft Applications: "about 10 - 20 defects per 1000 lines of code during in-house testing, and 0.5 defect per KLOC (KLOC means 1,000 lines of code) in released product (Moore 1992)." He attributes this to a combination of code-reading techniques and independent testing (discussed further in another chapter of his book). (c) "Harlan Mills pioneered 'cleanroom development', a technique that has been able to achieve rates as low as 3 defects per 1000 lines of code during in-house testing and 0.1 defect per 1000 lines of code in released product (Cobb and Mills 1990). A few projects - for example, the space-shuttle software - have achieved a level of 0 defects in 500,000 lines of code using a system of formal development methods, peer reviews, and statistical testing."
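For reference, the defects-per-KLOC metric itself is a simple ratio; a tiny illustrative calculation in Python (the numbers are made up):

def defects_per_kloc(defect_count, lines_of_code):
    """Defect density: defects per thousand lines of code."""
    return defect_count / (lines_of_code / 1000)

# Example: 45 security defects found in a 30,000-line module -> 1.5 defects/KLOC.
print(defects_per_kloc(45, 30_000))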

QUESTION 72 What is the MOST important security requirement when dealing with the security of test data? A. Data type validation B. Data classification C. Data handling procedure D. Data backup and restoration

Correct Answer: C Explanation/Reference: Comparing enterprise data anonymization techniques. There comes a time when data needs to be shared -- whether to evaluate a matter for research purposes, to test the functionality of a new application, or for an infinite number of other business purposes. To protect sensitivity or confidentiality of shared data, it often needs to be sanitized before it can be distributed and analyzed. There are a number of data anonymization techniques that can be used, including data encryption, substitution, shuffling, number and date variance, and nulling out specific fields or data sets. A popular and effective method for sanitizing data is called data anonymization. Also known as data masking, data cleansing, data obfuscation or data scrambling, data anonymization is the process of replacing the contents of identifiable fields (such as IP addresses, usernames, Social Security numbers and zip codes) in a database so records cannot be associated with a specific individual, project or company. Unlike the concept of confidentiality, which often means the subjects' identities are known but will be protected by the person evaluating the data, in anonymization, the evaluator does not know the subjects' identities. Thus, the anonymization process allows for the dissemination of detailed data, which permits usage by various entities while providing some level of privacy for sensitive information. Data anonymization techniques

QUESTION 80 One organization is sharing confidential data with a partner. Which of the following can be used to restrict the partner from sharing the data with a third party? A. Shared key encryption B. Data Loss Prevention (DLP) C. Digital Rights Management (DRM) D. Public Key Infrastructure (PKI) encryption

Correct Answer: C Explanation/Reference: Digital rights management (DRM) is a systematic approach to copyright protection for digital media. The purpose of DRM is to prevent unauthorized redistribution of digital media and restrict the ways consumers can copy content they've purchased.

QUESTION 10 Which of the following contributes MOST to vulnerabilities in an application interface that is developed to integrate a new external system with an existing internal system? A. Resource contention between the internal and external systems B. Black-box and white-box testing of the application interface C. Uncertain security characteristics of the external system D. Malicious logic included in the application interface

Correct Answer: C Explanation/Reference: How secure is the external system? The security logic of the application interface is more likely to be weak than purposely malicious.

QUESTION 19 When working to integrate a newly developed application into an organization's Business Continuity Plan (BCP), a product owner MUST first consider which of the following elements of the application? A. Critical recovery time period B. Impact to the organization's Disaster Recovery Plan (DRP) C. Relative organizational importance D. High availability (HA) requirements

Correct Answer: C Explanation/Reference: There are four primary purposes of the business impact analysis: Obtain an understanding of the organization's most critical objectives, the priority of each, and the timeframe for resumption of these following an unscheduled interruption. Inform a management decision on Maximum Tolerable Outage (MTO) for each function. Provide the resource information from which an appropriate recovery strategy can be determined/ recommended. Outline dependencies that exist both internally and externally to achieve critical objectives.

QUESTION 67 Which of the following is the PRIMARY purpose of using risk exposure statements? A. Eliminate risks B. Transfer risks C. Mitigate risks D. Accept risks

Correct Answer: C Explanation/Reference: Risk exposure can also be called risk priority. The priority of a risk helps to determine the amount of resources and time that should be dedicated to managing and monitoring the risk. Very Low, Low, Medium, High, and Very High priority is assessed by using probability and impact scores, and the potential timing of a risk event may also be considered when determining risk management actions. When considering loss probability, businesses usually divide risk into two categories: pure risk and speculative risk. Pure risks are categories of risk that are beyond anyone's control, such as natural disasters or untimely death. Speculative risks can be taken on voluntarily. Types of speculative risk include financial investments or any activities that will result in either a profit or a loss for the business. Speculative risks carry an uncertain outcome. Potential losses incurred by speculative risks could stem from business liability issues, property loss, property damage, strained customer relations and increased overhead expenses. To calculate risk exposure, variables are determined to calculate the probability of the risk occurring. These are then multiplied by the total potential loss of the risk. To determine the variables, organizations must know the total loss in dollars that might occur, as well as a percentage depicting the probability of the risk occurring. The objective of the risk exposure calculation is to determine the overall level of risk that the organization can tolerate for the given situation, based on the benefits and costs involved. The level of risk an organization is prepared to accept is called its risk appetite.
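The calculation described above can be sketched in a couple of lines of Python (the probability and loss figures are illustrative):

def risk_exposure(probability, potential_loss):
    """Risk exposure = probability of occurrence x potential loss (in dollars)."""
    return probability * potential_loss

# Example: a 20% chance of a breach costing $500,000 gives an exposure of $100,000,
# which can then be weighed against the cost of mitigating the risk.
print(risk_exposure(0.20, 500_000))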

QUESTION 32 An individual performing a post-implementation assessment for an Internet-facing software development finds that an organization is relying only on operational controls at the network, Operating System (OS), web server, and database levels. Which of the following is a critical shortcoming of this approach? A. The organization did not anticipate all possible vulnerabilities, and an attacker only needs to find one in order to initiate an attack B. The application's developers made an incorrect assumption that software-based controls are less susceptible to misconfiguration and misapplication C. The application's security is dependent on the robustness of external controls that surround it D. The application's developers made incorrect assumptions about capabilities and security states of the software execution environment

Correct Answer: C Explanation/Reference: Security features are elements of a program specifically designed to provide some aspect of security. Adding encryption, authentication, or other security features can improve the usability of software and thus are commonly sought-after elements to a program. This is not what secure development is about. Secure software development is about ensuring that all elements of a software package are constructed in a fashion where they operate securely. Another way of looking at this is the idea that secure software does what it is designed to do and only what it is designed to do. Adding security features may make software more marketable from a features perspective, but this is not developing secure software. If the software is not developed to be secure, then even the security features cannot be relied upon to function as desired.

QUESTION 93 Which of the following is the MOST important role of a security professional involved in the software acquisition process? A. To support the validation of a secure design B. To implement the preset security settings C. To specify the security requirements D. To help design due diligence questionnaires

Correct Answer: C Explanation/Reference: Software Assurance (SwA) is the level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software throughout the life cycle. Steps organizations can take now to support software security assurance. Tips from white paper on "7 Practical Steps to Delivering More Secure Software": 1. Quickly evaluate current state of software security and create a plan for dealing with it throughout the life cycle. 2. Specify the risks and threats to the software so they can be eliminated before they are deployed. 3. Review the code for security vulnerabilities introduced during development. 4. Test and verify the code for vulnerabilities. 5. Build a gate to prevent applications with vulnerabilities from going into production. 6. Measure the success of the security plan so that the process can be continually improved. 7. Educate stakeholders about security so they can implement the security plan. Any development organization can implement this security plan immediately and begin to receive a return on their efforts within a minimal period of time. The key is to start now. To complement the software strategy, there are several other areas of good security practices to observe and implement if they are not already part of the organizational security approach: 1. Implement software configurations such as the U.S. Government Configuration Baseline (formerly the Federal Desktop Core Configuration), strong authentication, and strict, documented internal policies and procedures. 2. Ask vendors to provide guarantees of software security as required by HR 6523. 3. Insert and enforce software assurance requirements in contracts. 4. Review IT security policies to ensure that all users of organizational networks and data comply with the strictest security policies possible with respect to the mission. 5. Determine how much risk the organization can afford and who is accountable for that risk. Constructing a new building in parts of California without accounting for earthquakes is unacceptable.

QUESTION 90 Source code escrow agents help to recover from which of the following adverse events? A. Malware destroys customer data B. Licensee switches supplier C. Licensor goes out of business D. Malware modifies customer data

Correct Answer: C Explanation/Reference: Source code escrow is the deposit of the source code of software with a third party escrow agent. Escrow is typically requested by a party licensing software (the licensee), to ensure maintenance of the software instead of abandonment or orphaning.

QUESTION 68 Which of the following BEST meets an organization's requirement for an e-commerce site with 99.9% availability? A. Support for help desk operatives B. Offsite backups C. System redundancy D. Up-to-date documentation

Correct Answer: C Explanation/Reference: System redundancy yields the best availability. A 99.9% ("three nines") target allows only about 8.8 hours of downtime per year, and redundant components remove single points of failure so the site keeps serving requests when an individual component fails. Offsite backups and documentation help recovery after a failure but do nothing to keep the site up while the failure is happening.
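A minimal sketch of the availability arithmetic, assuming independent component failures:

    HOURS_PER_YEAR = 365 * 24  # 8,760 hours

    def downtime_hours(availability):
        # Yearly downtime budget for a given availability target.
        return HOURS_PER_YEAR * (1 - availability)

    def parallel_availability(component_availability, copies):
        # Availability of redundant copies in parallel: the system is down
        # only when every copy is down at the same time.
        return 1 - (1 - component_availability) ** copies

    print(downtime_hours(0.999))            # ~8.76 hours/year for three nines
    print(parallel_availability(0.99, 2))   # 0.9999 -- two 99% nodes exceed the 99.9% target

The parallel formula is the standard independence assumption; correlated failures (shared power, shared software defects) reduce the benefit in practice.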

QUESTION 86 A web application is found to be vulnerable to a Structured Query Language (SQL) injection attack, even though the application developers coded the application to use only stored procedures for its database access. Which additional step should the developer take to mitigate this attack? A. Rewrite the application in a managed language B. Apply Hypertext Markup Language (HTML) encoding to the page output C. Require Transport Layer Security (TLS) connections to the database D. Deny privileges on the database views and tables

Correct Answer: C Explanation/Reference: Requiring TLS protects the connection between the application and the database. You can use SSL/TLS from your application to encrypt the connection to a DB instance running MySQL, MariaDB, Amazon Aurora, SQL Server, Oracle, or PostgreSQL. Each DB engine has its own process for implementing SSL/TLS, so consult the documentation for the engine in use.
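A minimal sketch of requiring an encrypted database connection from application code, assuming a PostgreSQL database and the psycopg2 driver; the host, credentials, CA bundle path, and the get_orders stored function are placeholders for illustration.

    import psycopg2

    # sslmode="verify-full" both encrypts the connection and verifies the
    # server certificate against the given CA bundle.
    conn = psycopg2.connect(
        host="db.example.internal",
        dbname="shop",
        user="app_user",
        password="change-me",
        sslmode="verify-full",
        sslrootcert="/etc/ssl/certs/db-ca.pem",
    )
    with conn.cursor() as cur:
        # Stored-procedure style access with a bound parameter: user input is
        # passed as a parameter, never concatenated into the SQL string.
        cur.execute("SELECT * FROM get_orders(%s)", ("customer-42",))
        rows = cur.fetchall()

TLS protects credentials and query traffic in transit between the application tier and the database server.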

QUESTION 14 Which cryptographic mechanisms are MOST effective for a code signing mechanism? A. Asymmetric keys and elliptic curve cryptography B. Symmetric keys and ActiveX control integration C. Asymmetric keys and third-party certificate validation D. Symmetric keys and a cryptographically strong hashing algorithm

Correct Answer: C Explanation/Reference: Code signing relies on asymmetric cryptography: the publisher signs a hash of the code with its private key, and consumers verify the signature with the corresponding public key. Binding that public key to the publisher through a certificate issued by a widely trusted third-party Certificate Authority (CA) lets recipients who have no prior relationship with the publisher validate both the signature and the signer's identity, which is why asymmetric keys combined with third-party certificate validation are the most effective mechanism.
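A minimal sketch of the sign and verify steps, using the Python cryptography library; in a real code-signing deployment the public key would be delivered inside an X.509 certificate chained to a trusted CA rather than taken directly from the signer as it is here.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    code = b"...compiled artifact bytes..."   # placeholder for the signed artifact

    # Publisher side: sign a SHA-256 digest of the code with the private key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
    signature = private_key.sign(code, padding.PKCS1v15(), hashes.SHA256())

    # Consumer side: verify with the public key; verify() raises
    # InvalidSignature if the code or signature has been altered.
    private_key.public_key().verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid")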

QUESTION 29 Which of the following BEST prevents cryptographic flaws in a web application? A. Using an automated scanning tool to verify cryptographic storage B. Examining system configuration files C. Conducting a manual code review of the application D. Examining cryptographic algorithms

Correct Answer: C Explanation/Reference: The goal of verification is to confirm that the application properly encrypts sensitive information in storage. Automated approaches: vulnerability scanning tools cannot verify cryptographic storage at all; code scanning tools can detect use of known cryptographic APIs, but cannot detect whether they are being used properly or whether the encryption is performed in an external component. Manual approaches: like scanning, testing cannot verify cryptographic storage. Code review is the best way to verify that an application encrypts sensitive data and has properly implemented the mechanism and key management. This may involve examining the configuration of external systems in some cases.
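To make the review concrete, the hedged sketch below contrasts a pattern a reviewer would flag (a hard-coded key) with one they would generally accept (an authenticated cipher with a fresh random nonce and externally supplied key material). The environment-variable name is an assumption for illustration.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Pattern a review flags: key material embedded in source code.
    HARDCODED_KEY = b"0123456789abcdef0123456789abcdef"   # do not do this

    # Pattern a review generally accepts: key supplied by a key-management
    # process (faked here via an environment variable) and AES-GCM with a
    # fresh random nonce for every encryption.
    key = bytes.fromhex(os.environ["APP_DATA_KEY_HEX"])   # assumed variable name
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)
    ciphertext = aesgcm.encrypt(nonce, b"4111 1111 1111 1111", None)

A code scanner can spot that AESGCM is being called in both variants, but only a reviewer can confirm where the key actually comes from and how nonces and key rotation are handled.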

QUESTION 20 Exploiting an authorization vulnerability to gain the privileges of a peer user who has equal or lesser privileges within an application is BEST described as A. vertical privilege escalation B. session hijacking C. horizontal privilege escalation D. account hijacking

Correct Answer: C Explanation/Reference: Vertical privilege escalation occurs when an attacker moves from a lower-privileged account to a higher-privileged one, for example from a guest account to an administrative account. Horizontal privilege escalation occurs when an attacker uses one account to access the resources or functions of another account at the same (or a lesser) privilege level, such as a peer user of the same application. Because the scenario describes gaining the privileges of a peer user with equal or lesser privileges, it is horizontal privilege escalation.
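A minimal sketch of how horizontal escalation commonly appears in web code: an endpoint that trusts a client-supplied account identifier instead of the authenticated session. The route names, data, and session mechanism are assumptions for illustration.

    from flask import Flask, abort, jsonify, session

    app = Flask(__name__)
    app.secret_key = "change-me"                          # placeholder
    ORDERS = {"alice": ["order-1"], "bob": ["order-2"]}

    @app.route("/orders/<username>")
    def orders_vulnerable(username):
        # Vulnerable: any authenticated user can read a peer's orders just by
        # changing the username in the URL -- horizontal privilege escalation.
        return jsonify(ORDERS.get(username, []))

    @app.route("/my/orders")
    def orders_fixed():
        # Fixed: the account is taken from the authenticated session, never
        # from client-controlled input.
        user = session.get("user")
        if user is None:
            abort(401)
        return jsonify(ORDERS.get(user, []))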

QUESTION 21 System calls and the values of their arguments can be determined at run-time using which of the following? A. Compiler B. Fuzzer C. Preprocessor D. Debugger

Correct Answer: D Explanation/Reference: A debugger can halt a running program at the point of a system call and display the argument values passed to it, which is why it is the tool that reveals system calls and their arguments at run time. A debugger or debugging tool is a computer program that is used to test and debug other programs (the "target" program). The code to be examined might alternatively be running on an instruction set simulator (ISS), a technique that allows great power in its ability to halt when specific conditions are encountered, but which will typically be somewhat slower than executing the code directly on the appropriate (or the same) processor. Some debuggers offer two modes of operation, full or partial simulation, to limit this impact. A "trap" occurs when the program cannot normally continue because of a programming bug or invalid data. For example, the program might have tried to use an instruction not available on the current version of the CPU or attempted to access unavailable or protected memory. When the program "traps" or reaches a preset condition, the debugger typically shows the location in the original code if it is a source-level debugger or symbolic debugger, commonly now seen in integrated development environments. If it is a low-level debugger or a machine-language debugger it shows the line in the disassembly (unless it also has online access to the original source code and can display the appropriate section of code from the assembly or compilation).
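A minimal sketch of the general idea, halting execution at run time to inspect argument values, using Python's built-in pdb. Tracing native system calls specifically would normally be done with a native debugger (for example gdb's catch syscall command) or a tracer such as strace; the function and path below are placeholders.

    import pdb

    def write_audit_record(path, payload):
        # Dropping into the debugger halts execution here, so the argument
        # values passed at run time (path, payload) can be inspected
        # interactively before the file is written.
        pdb.set_trace()
        with open(path, "w") as handle:
            handle.write(payload)

    write_audit_record("/tmp/audit.log", "user=alice action=login")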

QUESTION 64 Which of the following types of analysis is used to document the services and functions that have been accidentally left out, deliberately eliminated, or still need to be developed? A. Vulnerability B. Cost-benefit C. Test coverage D. Gap

Correct Answer: D Explanation/Reference: A gap analysis is a method of assessing the difference between the current performance of a business's information systems or software applications and the required performance, to determine whether business requirements are being met and, if not, what steps should be taken to ensure they are met successfully. Gap refers to the space between "where we are" (the present state) and "where we want to be" (the target state). A gap analysis may also be referred to as a needs analysis, needs assessment or need-gap analysis.

QUESTION 25 Where is the MOST secure place to store a secret key needed by an application? A. In a Certificate Authority (CA) B. In the source code C. In a separate file on the server D. In a hardware security module

Correct Answer: D Explanation/Reference: A hardware security module (HSM) is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server. HSMs may possess controls that provide tamper evidence such as logging and alerting and tamper resistance such as deleting keys upon tamper detection. Each module contains one or more secure cryptoprocessor chips to prevent tampering and bus probing. Many HSM systems have means to securely back up the keys they handle either in a wrapped form via the computer's operating system or externally using a smartcard or some other security token. Because HSMs are often part of a mission-critical infrastructure such as a public key infrastructure or online banking application, HSMs can typically be clustered for high availability. Some HSMs feature dual power supplies and field replaceable components such as cooling fans to conform to the high-availability requirements of data center environments and to enable business continuity. A few of the HSMs available in the market have the ability to execute specially developed modules within the HSM's secure enclosure. Such an ability is useful, for example, in cases where special algorithms or business logic has to be executed in a secured and controlled environment. The modules can be developed in native C language, in .NET, Java, or other programming languages. While providing the benefit of securing application-specific code, these execution engines protect the status of an HSM's FIPS or Common Criteria validation.
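A minimal sketch of how an application typically uses an HSM through the PKCS#11 interface, here via the python-pkcs11 package; the module path, token label, and PIN are assumptions for illustration. The design point is that the key is generated and kept inside the token, and the application only ever holds a handle to it.

    import pkcs11

    # Assumed PKCS#11 module path (SoftHSM is a common software stand-in),
    # token label, and PIN.
    lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")
    token = lib.get_token(token_label="DEMO")

    with token.open(user_pin="1234") as session:
        # The AES key is generated inside the token; the raw key bytes never
        # leave the device. The application works with a key handle only.
        key = session.generate_key(pkcs11.KeyType.AES, 256)
        iv = session.generate_random(128)
        ciphertext = key.encrypt(b"secret application data", mechanism_param=iv)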

QUESTION 49 An organization has many web services that are exposed internally within its perimeter but is now preparing to expose those services outside to its business partners. Which of the following is the BEST security defense-in-depth mechanism to consider before exposing services to external entities? A. Implement Network Access Control (NAC) B. Hire a network engineer to monitor web services traffic C. Implement Trusted Platform Module (TPM) internally on clients D. Introduce a service intermediary to inspect message level traffic and perform input validation

Correct Answer: D Explanation/Reference: A service-level intermediary provides defense in depth by inspecting message-level traffic and validating input before it reaches the internal web services. A cloud access security broker (CASB) is one example of such an intermediary: according to Gartner, a CASB is an on-premises or cloud-based security policy enforcement point that is placed between cloud service consumers and cloud service providers to combine and interject enterprise security policies as cloud-based resources are accessed. Organizations are increasingly turning to CASBs to address cloud service risks, enforce security policies, and comply with regulations, even when cloud services are beyond their perimeter and out of their direct control. If you intend to use a CASB to increase your confidence about your organization's cloud service usage, consider taking a granular approach to policy enforcement and data protection. In other words, consider using a scalpel rather than a sledgehammer for your cloud security.
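A minimal sketch of the intermediary idea, assuming a small Flask relay placed in front of an internal service; the route, backend URL, field name, and validation rule are placeholders for illustration.

    import re
    import requests
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    BACKEND = "http://orders.internal:8080/api/orders"   # assumed internal service
    ORDER_ID = re.compile(r"^[A-Z0-9-]{1,32}$")

    @app.route("/partner/orders", methods=["POST"])
    def relay_order():
        message = request.get_json(silent=True)
        # Message-level inspection: reject anything that is not well-formed
        # JSON with the expected, validated fields before it reaches the
        # internal service.
        if not message or not ORDER_ID.match(str(message.get("order_id", ""))):
            abort(400)
        upstream = requests.post(BACKEND, json={"order_id": message["order_id"]}, timeout=5)
        return jsonify(upstream.json()), upstream.status_code

A commercial XML/API gateway or CASB does the same job with richer schema validation, authentication, and logging; the sketch only shows where the inspection sits in the message path.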

QUESTION 28 An application unintentionally leaking information about its configuration or internal workings is MOST likely a result of which of the following? A. Insecure direct object reference B. Broken session management C. Insecure communications D. Improper error handling

Correct Answer: D Explanation/Reference: Improper error and exception management leads to information leakage such as banner grabbing. Banners are the welcome screens that divulge software version numbers and other system information on network hosts. This banner information can give attackers a leg up because it may identify the operating system, the version number, and the specific service packs in use. Banners can be grabbed with good old telnet or with tools such as Nmap and SuperScan. Information leakage and improper error handling happen when web applications do not limit the amount of information they return to their users. A classic example of improper error handling is when an application does not sanitize SQL error messages that are returned to the user: upon receiving a SQL error message, an attacker immediately knows where to probe for injection flaws. Although preventing error messages from reaching users will not remove the underlying vulnerabilities, it makes it more difficult for an attacker to accomplish his goal, and it is an industry best practice.
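A minimal sketch of proper error handling in a web endpoint: log the full details server-side and return only a generic message to the client. The route and the data-access helper are hypothetical placeholders.

    import logging
    from flask import Flask, jsonify

    app = Flask(__name__)
    log = logging.getLogger("app")

    def load_profile(user_id):
        # Hypothetical data-access helper standing in for a real database
        # query; here it always fails to demonstrate the handling below.
        raise RuntimeError(f"ERROR: relation \"user_profiles\" does not exist (user_id={user_id})")

    @app.route("/profile/<user_id>")
    def profile(user_id):
        try:
            return jsonify(load_profile(user_id))
        except Exception:
            # Full details go to the server log; the client sees only a
            # generic message, so raw error text (SQL fragments, stack
            # traces, version banners) is never exposed.
            log.exception("profile lookup failed")
            return jsonify({"error": "An internal error occurred."}), 500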

