CySA+ Chapter 14: Secure Software Development


Software Development must meet 3 types of requirements. What are they?

1. Functional Requirements, 2. Non-Functional Requirements, and 3. Security Requirements

Name 10 methods of secure coding

1. Input validation, 2. Parameter validation, 3. Static code analysis, 4. Code Reviews, 5. Regression testing, 6. Security testing, 7. Web app vulnerability scanning, 8. Interception proxies, 9. Fuzz testing, and 10. Stress testing.

Who are the top 3 best known advocates for secure coding?

1. The SEI, 2. OWASP, and 3. The SANS Institute

What are the security requirements of software?

A security requirement defines the behaviors and characteristics a system must possess in order to achieve and maintain an acceptable level of security by itself, and in its interactions with other systems. Accordingly, this class includes both functional and nonfunctional aspects of the finished product.

How can interception proxies be used in secure coding?

An interception proxy is a software tool that is inserted between two communicating endpoints for the purpose of examining, modifying, or logging the messages between them. Typically, an interception proxy sits in the same network (or even on the same host) as one of the endpoints, which is usually the client. In the context of security testing, these proxies are most commonly used to examine the security of web apps and mobile apps, because they allow the security tester to inspect every message between the client and the server. Here is a list of security characteristics that can be tested using an interception proxy:
1) Input validation: Although input validation can be tested through the standard user interface, it is usually easier to do formal (or scripted) testing in one environment, which makes the use of a proxy more efficient.
2) Parameter validation: By modifying values that should not be available to the user (for example, hidden form fields or cookies) or that were validated on the client side, the security team can verify that server-side validation is taking place (see the sketch after this list).
3) Plaintext credentials: Although it is relatively easy to ensure that web apps use HTTPS for sending credentials, this is much harder with mobile apps unless you can intercept the traffic. This could also be done with a packet sniffer.
4) Session tokens: Sessions are normally tracked through the use of a token, which is a value assigned by the server and often called a session ID. If this value is not truly random, threat actors could guess valid session IDs and use them to impersonate legitimate users. Proxies facilitate statistical analysis of session tokens to ensure they are sufficiently random.
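
While the chapter demonstrates these checks with an interception proxy, the same parameter-tampering idea can be scripted directly. Below is a minimal Python sketch, assuming a hypothetical checkout endpoint and field names, that replays a form submission with a modified hidden field and cookie to confirm that the server, not just client-side JavaScript, rejects the tampered values.

```python
# Minimal sketch of a proxy-style tampering test: replay a form submission
# with a modified hidden field and cookie to confirm that the server, not just
# the client-side JavaScript, rejects the out-of-range values. The URL, field
# names, and expected status codes are hypothetical placeholders.
import requests

TARGET = "https://app.example.com/checkout"   # hypothetical endpoint

def test_server_side_validation():
    session = requests.Session()
    # Values a benign browser would never send because client-side checks
    # and hidden fields normally constrain them.
    tampered_form = {
        "item_id": "1001",
        "price": "-0.01",          # hidden field altered to a negative price
        "quantity": "999999",      # exceeds the client-enforced maximum
    }
    tampered_cookies = {"role": "admin"}  # cookie the user should not control

    resp = session.post(TARGET, data=tampered_form, cookies=tampered_cookies)

    # A robust application validates on the server and refuses the request;
    # HTTP 400/403 (or a re-rendered form with errors) is the expected outcome.
    assert resp.status_code in (400, 403), (
        f"Server accepted tampered input (HTTP {resp.status_code}); "
        "server-side validation may be missing."
    )

if __name__ == "__main__":
    test_server_side_validation()
    print("Server-side validation appears to be enforced.")
```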

What is the SEI's CMMI?

Another of SEI's contributions is the Capability Maturity Model Integration (CMMI), which we discussed in Chapter 11. Although CMMI is aimed at process improvement in general, there is a specific model called CMMI for Development (CMMI-DEV) that applies to the development of services and products such as software. Moreover, a guide called "Security by Design" specifies four process areas for CMMI-DEV that allow organizations to improve and appraise their capabilities to develop products with adequate levels of security. This guide is intended to help organizations build security into their products early in the lifecycle and avoid the common trap of trying to bolt security onto products at later stages of development. The four process areas are:
a) Organizational Preparedness for Secure Development: Focuses on the development and maintenance of the organizational capabilities required for secure development, such as developing the workforce and acquiring the necessary tools.
b) Security Management in Projects: Expands the organization's project management processes to include the definition, planning, and integration of security-related practices.
c) Security Requirements and Technical Solution: Establishes security requirements for the products as well as practices for secure architecting, design, and implementation.
d) Security Verification and Validation: Builds upon the practices of verification and validation to ensure that the security requirements are met and that the product is adequately resistant to malicious attacks in its intended environment.

What is Stress testing?

Another type of testing that also attempts to break software systems does so by creating conditions the system would not reasonably be expected to encounter during normal operation. "Stress testing" places extreme demands, well beyond the planning thresholds of the software, in order to determine how robust it is. The focus here is on attempting to compromise the availability of the system by creating a DoS condition. The most common type of stress testing attempts to give the system too much of something (e.g., simultaneous connections or data). During development, the team builds the software so that it handles a certain volume of activity or data. This volume may be specified as a nonfunctional requirement, or it may be arbitrarily determined by the team based on their experience. Typically, this value is determined by measuring or predicting the maximum load the system is likely to be presented with. To stress-test the system with regard to this value, the team simply exceeds it under different conditions and sees what happens. The most common way to conduct these tests is to use scripts that generate thousands of simulated connections or to upload exceptionally high volumes of data (either many large files or fewer huge ones); a minimal load-generation sketch follows this answer.

Not all stress tests are about overwhelming the software; it is also possible to underwhelm it. This type of stress testing provides the system with too little of something (e.g., network bandwidth, CPU cycles, or memory). The idea here is to see how the system deals with a threat called "resource starvation," in which an attacker intentionally causes the system to consume resources until none are left. A robust system would gracefully degrade its capabilities during an event like this but wouldn't fail altogether. Insufficient-resource tests are also useful for determining the absolute minimum configuration necessary for nominal system performance. A resource starvation attack attempts to compromise the availability of information systems by depleting the resources required for them to operate:
a) Network bandwidth: This variety is perhaps the best known because of DDoS attacks that drown a target with billions of packets per second.
b) Memory: Depleting a system's memory is easy if the system has memory leaks, which are memory allocations that are never reclaimed by the system. This is also possible if you can cause the program to spawn an endless number of recursive procedure calls or if there is no limit to how many items you can add to an online shopping cart.
c) CPU: CPU starvation attacks are normally harder to pull off because you need to exploit a flaw in the system that ties up the CPU for an extended period. For example, certain asymmetric cryptography procedures, such as key pair generation, are CPU intensive. If an attacker can cause the system to generate a large number of key pairs, it would eventually be unable to perform any other function.
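
To make the "too much of something" idea concrete, here is a minimal Python load-generation sketch. The target URL, worker count, and timeout are hypothetical values you would tune to exceed the system's planned capacity, and a real stress test would also monitor server-side resource usage.

```python
# Minimal sketch of a "too much of something" stress test: open many
# simultaneous HTTP connections against a test target and record how many
# requests fail or time out. Target, thread count, and timeout are
# hypothetical values chosen to exceed the system's planned load.
import concurrent.futures
import requests

TARGET = "https://staging.example.com/health"  # hypothetical test endpoint
WORKERS = 200              # simulated simultaneous clients
REQUESTS_PER_WORKER = 50

def hammer(_worker_id):
    ok, failed = 0, 0
    for _ in range(REQUESTS_PER_WORKER):
        try:
            r = requests.get(TARGET, timeout=5)
            if r.status_code == 200:
                ok += 1
            else:
                failed += 1
        except requests.RequestException:
            failed += 1
    return ok, failed

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(hammer, range(WORKERS)))
    total_ok = sum(r[0] for r in results)
    total_failed = sum(r[1] for r in results)
    print(f"Succeeded: {total_ok}, failed/timed out: {total_failed}")
```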

What is Security Testing?

As the project transitions from development to implementation, the IT operations and security teams typically perform additional security tests to ensure the confidentiality, integrity, and availability (CIA) not only of the new software but of the larger ecosystem once the new program is introduced. If an organization is using DevSecOps, some or most of these security tests can be performed as the software is being developed, because security personnel are part of that phase as well. Otherwise, the development team hands the software to the security team for testing, and these individuals will almost certainly find flaws, starting a back-and-forth cycle that can delay final implementation.

Describe the O&M phase of the SDLC

By most estimates, operation and maintenance (O&M) of software systems represents around 75 percent of the Total Cost of Ownership (TCO). Somewhere between 20 and 35 percent of O&M costs are related to correcting vulnerabilities and other flaws that were not discovered during development. If you multiply these two figures together, you can see that organizations typically spend between 15 and 26 percent of a software system's TCO fixing defects. This is the main driver for spending extra time on the design, secure development, and testing of the system before it goes into O&M. By this phase, the IT operations team has ownership of the software and is trying to keep it running in support of the business, while the software developers have usually moved on to the next project and see requests for fixes as distractions from their main efforts. This should highlight, once again, the need for secure software development before the software ever touches a production network.

Input validation can be implemented through client-side validation. What is that?

Client-side validation is often implemented in JavaScript and embedded within the code for the page containing the form. The advantage of this approach is that errors are caught at the point of entry, and the form is not submitted to the server until all values are validated. The disadvantage is that, as you will see later when we discuss interception proxies, client-side validation is easily negated using commonly available and easy-to-use tools. And users can disable JavaScript anyway! The preferred approach is to do client-side validation to enhance the user experience of benign users, but to double-check everything on the server side to ensure protection against malicious actors.

What is "User Acceptance Training" used for?

Every software system is built to satisfy the needs of a set of users. Accordingly, the system is not deemed acceptable (or finished) until the users or their representatives declare that all the features have been implemented in ways that are acceptable to them. Depending on the development methodology used, user acceptance testing could happen before the end of the development phase or before the end of the implementation phase. Many organizations today use agile development methodologies that stress user involvement during the development process. This means that user acceptance testing may not be a formal event but rather a continuous engagement.

What are functional requirements?

Functional Requirements define the function of a system in terms of inputs, processing, and outputs. For example, a software system may receive telemetry data from a temperature sensor, compare it to other data from that sensor, and display a graph showing how the values have changed for the day. This requirement is not encumbered with any specific constraints or limitations, which is the role of nonfunctional requirements. Functional = What the software performs.

What is Fuzzing?

Fuzzing is a technique used to discover flaws and vulnerabilities in software by sending large amounts of malformed, unexpected, or random data to the target program in order to trigger failures. Attackers could exploit these errors and flaws to inject their own code into the system and compromise its security and stability. Fuzzing tools are commonly successful at identifying buffer overflows, DoS vulnerabilities, injection weaknesses, validation flaws, and other conditions that can cause software to freeze, crash, or throw unexpected errors. The image on page 204 shows a popular fuzzer called American Fuzzy Lop (AFL) crashing a targeted application. Fuzzers don't always generate random inputs from scratch; purely random generation is known to be an inefficient way to fuzz systems. Instead, they often start with an input that is close to normal and then make many small changes to see which seem more effective at exposing a flaw. Eventually, an input will cause an interesting condition in the target, at which point the security team will need a tool that can determine where the flaw is and how it could be exploited (if at all). This observation and analysis tool is often bundled with the fuzzer, because one is fairly useless without the other.
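
For illustration, here is a minimal mutation-based fuzzing sketch in Python. The parse_record() target is a hypothetical stand-in for code under test; real fuzzers such as AFL add coverage feedback and far more sophisticated mutation strategies.

```python
# Minimal sketch of mutation-based fuzzing: start from a well-formed seed
# input, flip random bytes, and feed each mutant to the target parser while
# watching for failures (unhandled exceptions here stand in for real crashes).
import random

def parse_record(data: bytes) -> None:
    """Hypothetical target: parses a length-prefixed record (stand-in for real code)."""
    if len(data) < 4:
        raise ValueError("record too short")
    length = int.from_bytes(data[:4], "big")
    payload = data[4:4 + length]
    payload.decode("utf-8")  # may raise on malformed bytes

def mutate(seed: bytes, flips: int = 3) -> bytes:
    buf = bytearray(seed)
    for _ in range(flips):
        i = random.randrange(len(buf))
        buf[i] ^= random.randrange(1, 256)  # flip random bits in one byte
    return bytes(buf)

if __name__ == "__main__":
    seed = (11).to_bytes(4, "big") + b"hello world"
    crashes = []
    for trial in range(10_000):
        mutant = mutate(seed)
        try:
            parse_record(mutant)
        except Exception as exc:  # a real harness would distinguish crash types
            crashes.append((mutant, repr(exc)))
    print(f"{len(crashes)} inputs triggered unexpected errors")
```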

What are the DevOps and the DevSecOps?

Historically, the quality assurance team and the software development team would work together, but in isolation from the IT operations team, who would ultimately have to deal with the end product. This led to conflict because the IT operations team would argue about the security of the software, while the software development and quality assurance teams would argue about its functionality. It was a tug of war. A good way to resolve this friction is to have both developers and operations staff (hence the name DevOps) work together throughout the software development process. "DevOps" is the practice of incorporating development, IT, and quality assurance (QA) staff into software development projects in order to align their incentives and enable frequent, efficient, and reliable releases of software products. More recently, the cybersecurity team has also been included in this multifunctional team, leading to the increasing use of the term "DevSecOps."

What is input validation?

If there is one universal rule to developing secure software, it is this: don't EVER trust any input entered by a user. This is not just an issue of protecting our systems against malicious attackers; it is equally applicable to innocent user errors. The best approach to validating inputs is to perform "context-sensitive whitelisting." In other words, consider what is SUPPOSED to be happening within the software system at specific points in which the input is elicited from the user, and then allow only the values that are APPROPRIATE. For example, if you are getting a credit card number from a user, you would only allow 16 consecutive numeric characters. Anything else would be disallowed.
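
As a sketch of the credit card example above, the following Python snippet allowlists exactly 16 digits and rejects everything else; the added Luhn checksum is an extra sanity check, not something the text requires.

```python
# Minimal sketch of context-sensitive allowlisting for the credit-card example
# in the text: accept exactly 16 consecutive digits and reject everything else.
import re

CARD_PATTERN = re.compile(r"^\d{16}$")  # allowlist: exactly 16 digits

def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum used by payment card numbers (extra sanity check)."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def validate_card_input(raw: str) -> str:
    candidate = raw.strip()
    if not CARD_PATTERN.fullmatch(candidate):
        raise ValueError("Card number must be exactly 16 digits.")
    if not luhn_ok(candidate):
        raise ValueError("Card number failed checksum validation.")
    return candidate

if __name__ == "__main__":
    print(validate_card_input("4111111111111111"))      # accepted
    # validate_card_input("4111-1111-1111-1111")        # rejected: not 16 digits
    # validate_card_input("<script>alert(1)</script>")  # rejected: not digits
```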

What is static code analysis?

Input and parameter validation are two practices that can be verified by having someone examine the source code looking for vulnerable procedures. "Static code analysis" is a technique meant to identify software defects or security policy violations and is carried out by examining the code without executing the program (hence the term static). The term static analysis is generally reserved for automated tools that assist analysts and developers, whereas manual inspection by humans is generally referred to as "code review." Because it is an automated process, it allows developers and security staff to quickly scan their source code for programming flaws and vulnerabilities. The image on page 301 shows an example of a tool called Lapse+, which was developed by OWASP to find vulnerabilities in Java applications. The tool is highlighting an instance wherein user input is directly used, without sufficient validation, to build a SQL query against a database; in this particular case, the query verifies the username and password. This insecure code block would allow a threat actor to conduct a SQL injection attack against the system, likely gaining access by providing a string such as "foo' OR 1=1 --" if the database were on a MySQL server. Automated static analysis like that performed by Lapse+ provides a scalable method of security code review and ensures that secure coding policies are being followed. There are numerous manifestations of static analysis tools, ranging from tools that simply consider the behavior of single statements to tools that analyze the entire source code at once. However, you should keep in mind that static code analysis cannot usually reveal logic errors or vulnerabilities (that is, behaviors that are only evident at runtime), and therefore it should be used in conjunction with manual code review to ensure a more thorough evaluation.
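
To illustrate the defect class such tools flag, here is a hedged Python equivalent of the vulnerable pattern (the book's example uses Java and Lapse+): user input concatenated directly into a SQL statement. The schema is hypothetical, and a Python-oriented static analyzer such as Bandit would similarly report the string-built query.

```python
# Hedged sketch of the defect class a static analyzer flags: user input
# concatenated directly into a SQL statement. The users table and its columns
# are hypothetical. See the parameterized-query card later in this set for
# the corrected version.
import sqlite3

def check_login_vulnerable(conn: sqlite3.Connection, username: str, password: str) -> bool:
    # DANGEROUS: the input "foo' OR 1=1 --" turns this into a query that
    # matches every row, bypassing authentication entirely.
    query = (
        "SELECT id FROM users WHERE username = '" + username + "' "
        "AND password = '" + password + "'"
    )
    return conn.execute(query).fetchone() is not None

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'secret')")
    # Attacker supplies the injection string from the text; no valid password needed.
    print(check_login_vulnerable(conn, "foo' OR 1=1 --", "anything"))  # True
```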

What are non-functional requirements?

Nonfunctional Requirements define a characteristic, constraint, or limitation of the system. Nonfunctional requirements are the main input to architectural designs for software systems. An example of a nonfunctional requirement, following the previous temperature scenario, would be that the system must be sensitive to temperature differences of one tenth of a degree Fahrenheit and greater. Nonfunctional requirements are sometimes called "Quality Requirements." Non-Functional = How the software works

Describe the development phase of the SDLC

Once all the requirements have been identified, the development team starts developing or building the software system. The first step in this phase is to design an architecture that will address the nonfunctional requirements. Recall that these are the ones that describe the characteristics of the system. On top of this architecture, the detailed code modules that address the features and functionality of the system are designed so that they satisfy the functional requirements. After the architecture and features are designed, software engineers start writing, integrating, and testing code. At the end of the development phase, the system has passed all unit, integration, and system tests and is ready to be rolled out onto a production network.

What are Code Reviews?

One of the best practices for quality assurance and secure coding is the code review, which is a systematic examination of the instructions that comprise a piece of software, performed by someone other than the author of that code. This approach is a hallmark of a mature software development process. In fact, in many organizations, developers are not allowed to push out their software modules until someone else has signed off on them after doing a code review. Think of this as proofreading an important document before you send it to an important person. If you try to proofread it yourself, you will probably not catch all the embarrassing typos and grammatical errors as easily as someone else would. Code reviews go way beyond checking for typos, though that is certainly one element of them. It all starts with a set of coding standards developed by the organization that wrote the software. This could be an internal team, an outsourced developer, or a commercial vendor. Obviously, code reviews of off-the-shelf commercial software are extremely rare unless the software is open source or you happen to be a major government agency. Still, each development shop will have a style guide or documented coding standards that cover everything from how to indent the code to when and how to use existing code libraries (DLLs). Therefore, a preliminary step to the code review is to ensure the author followed the team's standards. In addition to helping the maintainability of the software, this step gives the code reviewer a preview of the magnitude of the work ahead; a sloppy coder will probably have a lot of other, harder-to-find defects in their code, and each of those defects is a potential security vulnerability.

What is the principle of "quality" in the SDLC?

Perhaps the most important principle is that of quality. "Quality" can be defined as fitness for purpose. In other words, how good something is at whatever it is meant to do. We don't have to worry about quality software crashing, corrupting our data under unforeseen circumstances, or being easy for someone to subvert. Sadly, many developers still think of functionality first (or only) when thinking about quality. When we look at things holistically, we should see that quality is the most important concept in developing secure software.

What is Regression Testing?

Software is almost never written securely on the first attempt. Organizations with mature development processes will take steps like the ones we've discussed in this chapter to detect and fix software flaws and vulnerabilities before the system is put into production. Invariably, errors will be found, leading to fixes. The catch is that fixing a vulnerability may very well inadvertently break some other function of the system or even create a new set of vulnerabilities. "Regression testing" is the formal process by which code that has been modified is tested to ensure no features and security characteristics were compromised by the modifications. Obviously, regression testing is only as effective as the standardized suite of tests that were developed for it. If the tests provide insufficient coverage, regression testing may not reveal new flaws that may have been introduced during the corrective process.
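
As a small illustration, here is what one entry in a standardized regression suite might look like using Python's built-in unittest module; normalize_username() is a hypothetical stand-in for application code that received a security fix.

```python
# Minimal sketch of a regression suite entry: after a vulnerability fix to
# normalize_username(), the same tests are re-run to confirm existing
# behavior (lowercasing, trimming) still works and the new security property
# (length cap) holds. The function is a hypothetical stand-in.
import unittest

MAX_LEN = 64  # cap added as part of a security fix

def normalize_username(raw: str) -> str:
    cleaned = raw.strip().lower()
    if len(cleaned) > MAX_LEN:
        raise ValueError("username too long")
    return cleaned

class RegressionTests(unittest.TestCase):
    def test_existing_behavior_still_works(self):
        # Pre-existing feature tests: must keep passing after the fix.
        self.assertEqual(normalize_username("  Alice "), "alice")
        self.assertEqual(normalize_username("BOB"), "bob")

    def test_security_fix_holds(self):
        # New test added with the fix, kept in the suite from now on.
        with self.assertRaises(ValueError):
            normalize_username("a" * 1000)

if __name__ == "__main__":
    unittest.main()
```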

You have updated the server software and want to actively test it for new flaws. Which of these methods is the LEAST suitable for this requirement? Fuzzing, Static code analysis, stress testing, or web app vulnerability scanning?

Static code analysis, because it cannot usually reveal logic errors or vulnerabilities, that is, behaviors that are only evident at runtime.

What is the SDLC?

The Software Development Life Cycle (SDLC) consists of Planning, Defining, Designing, Building, Testing, and Deployment. You do not need to memorize these phases, but there are some additional things to know.

Which phase of the SDLC often highlights friction between developers and business units due to integration and performance issues?

The implementation phase

Describe the Implementation phase of the SDLC

The implementation phase is usually when friction between the development and operations teams starts to become a real problem unless these two groups have been integrated beforehand. The challenges in this transitional phase include ensuring that the software will run properly on the target hardware systems, that it will integrate properly with other systems (e.g., AD), that it won't adversely affect the performance of any other system on the network, and that it does not compromise the security of the overall information systems. If the organization used DevOps or DevSecOps from the beginning, most of the thorny issues will have been identified and addressed by this stage, which means implementation becomes simply a matter of provisioning and final checks.

What is Web App vulnerability scanning? How does it relate to secure coding?

This is a specific form of the vulnerability scanning we discussed in Part II of this book (Chapters 5 and 6). Unlike the tests we discussed when addressing secure coding practices, web app vulnerability scans are normally external tests conducted from the perspective of a malicious user. Like other vulnerability scanners, these will only identify vulnerabilities for which they have a plug-in. Some of the most common checks are listed here:
1) Outdated server components (e.g., those for which patches are available)
2) Misconfigured servers
3) Secure authentication of users
4) Secure session management (e.g., random session tokens)
5) Information leaks (e.g., revealing too much information about the server)
6) XSS vulnerabilities
7) Improper use of HTTPS (for example, allowing SSL)
Many commercial and open source web application vulnerability scanners are available. In Chapter 15 of this book, we address one of the most popular tools, called Nikto. Most of these scanners allow you to develop customized tests for your specific environment, so if you have some unique policies or security requirements that must be satisfied, it is worthwhile to learn how to write plug-ins or tests for your preferred scanner. A minimal example of such a custom check appears below.
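
Here is a minimal Python sketch of a custom external check that flags two of the issues listed above (an information-leaking Server header and weak session-cookie flags) against a hypothetical staging host; real scanners such as Nikto implement hundreds of such checks as plug-ins.

```python
# Minimal sketch of a custom external check of the kind you might add to a
# scanner. The target URL is a hypothetical staging host.
import requests

TARGET = "https://staging.example.com/"   # hypothetical target

def run_checks(url):
    findings = []
    resp = requests.get(url, timeout=10)

    # Information leak: a Server header that reveals a version number.
    server = resp.headers.get("Server", "")
    if any(ch.isdigit() for ch in server):
        findings.append(f"Information leak: Server header reveals version '{server}'")

    # Session management: cookies set without the Secure flag.
    for cookie in resp.cookies:
        if not cookie.secure:
            findings.append(f"Session management: cookie '{cookie.name}' missing Secure flag")

    # HTTPS usage: missing HSTS header weakens transport security.
    if "Strict-Transport-Security" not in resp.headers:
        findings.append("HTTPS usage: HSTS header not set")

    return findings

if __name__ == "__main__":
    for finding in run_checks(TARGET):
        print(finding)
```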

What is the process of secure coding?

This is all about reducing the number of vulnerabilities in a software product to a degree that can be mitigated by controls in the operational environment. In other words, secure code seldom has to be sent back to the programmers for fixing, because any flaws can be compensated for by operational controls. The truth is that there is no such thing as perfectly secure code, except for some exceptionally small and expensive systems. For most of us, however, this is about reducing, not eliminating, the flaws.

What is parameter validation?

This is where the values being received by the application are validated to be within defined limits before they are processed. The main difference between input validation and parameter validation is whether the application was expecting the value to come from a user input or from some other part of the software system as a parameter. A "parameter" is a special kind of variable in a programming language that is used to pass information between functions or procedures; the actual information passed is called an "argument." Attacks in this area deal with manipulating values that the system assumes are beyond the user's ability to change, mainly because the interface provides no mechanism to do so. An illustrative example is the use of cookies in web applications. In an effort to provide a rich end-user experience, web application designers have to employ mechanisms to keep track of the thousands of different web browsers that could be connected at any given time. The HTTP protocol by itself does not facilitate managing the state of a user's connection; the browser just connects to a server, gets whatever objects are requested in the HTML code, and then disconnects. Instead, we employ the technique of passing a cookie to the client to help the server remember things about the state of a connection. A cookie isn't a program but rather just data that is exchanged between the client and the server, stored by the client, and used to track the state of the interaction between them. Because accessing and modifying a cookie is usually beyond the reach of most users, some web developers don't consider this scenario when designing their systems. However, malicious actors can take advantage of cookies for attacks such as session hijacking.
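
One common defense against this kind of tampering is to make cookie values tamper-evident. The following minimal Python sketch signs a cookie with an HMAC and verifies it on every request; the secret key and cookie format are hypothetical.

```python
# Minimal sketch of one defense against cookie/parameter tampering: sign the
# cookie value server-side with an HMAC and verify it on every request, so a
# modified value is detected even though the client stores it.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-real-secret"  # hypothetical key from secure config

def make_cookie(user_id: str) -> str:
    sig = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def read_cookie(cookie: str) -> str:
    try:
        user_id, sig = cookie.rsplit(".", 1)
    except ValueError:
        raise ValueError("malformed cookie")
    expected = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("cookie has been tampered with")
    return user_id

if __name__ == "__main__":
    cookie = make_cookie("user42")
    print(read_cookie(cookie))                      # "user42"
    tampered = cookie.replace("user42", "admin", 1)
    try:
        read_cookie(tampered)
    except ValueError as exc:
        print(exc)                                  # tampering detected
```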

What key process is often used to determine the usability and suitability of newly developed software before implementation across an organization?

User acceptance testing.

You want to prevent user input from being interpreted as actual commands. What secure coding method should you use?

Parameterized queries, which are a proven practice for cleanly differentiating between code and user-provided input.
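
A minimal sketch using Python's built-in sqlite3 module shows the idea; the users table is a hypothetical example schema, and the same placeholder mechanism exists in virtually every database driver.

```python
# Minimal sketch of a parameterized query: the "?" placeholders ensure the
# driver treats user input strictly as data, so the injection string from the
# static analysis example no longer matches every row. Plaintext passwords
# are shown only to mirror the book's example, not as a recommended design.
import sqlite3

def check_login(conn: sqlite3.Connection, username: str, password: str) -> bool:
    row = conn.execute(
        "SELECT id FROM users WHERE username = ? AND password = ?",
        (username, password),   # passed as bound parameters, never concatenated
    ).fetchone()
    return row is not None

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'secret')")
    print(check_login(conn, "alice", "secret"))          # True
    print(check_login(conn, "foo' OR 1=1 --", "x"))      # False: treated as data
```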

To reduce the amount of data that must be examined and interpreted by a web application, what method can be used to catch errors before submission?

Client-side validation, because these are checks performed on data in the user's browser or application before the data is sent to the server. This is practiced alongside server-side validation to improve security and reduce the load on the server.

Who is the CIS?

The Center for Internet Security (CIS) is a nonprofit organization with the ambitious goal of enhancing the cybersecurity of private and public organizations around the world. It centers itself around collaboration, as is evident in the purpose of its four divisions:
i. The Integrated Intelligence Center receives threat intelligence reports from public and private organizations and shares these with every other subscriber.
ii. The Multi-State Information Sharing and Analysis Center (MS-ISAC) performs a similar function, but it is focused on state, local, tribal, and territorial (SLTT) government partners and provides some vulnerability mitigation and incident response.
iii. The Trusted Purchasing Alliance leverages the combined purchasing power of its SLTT and nonprofit partners to obtain deals that individual organizations would not be able to negotiate.
iv. The Security Benchmark Division produces two product streams:
a) System Design Recommendations: The CIS uses focus groups and consensus building among its hundreds of members and partners to identify best practices for secure system designs. Of particular interest to cybersecurity analysts are the 20 CIS Controls, which are generally agreed to significantly improve an organization's security posture. Besides identifying these controls, the CIS offers specific guidance on how to implement them in any organization's information systems.
b) Benchmarks: Whereas the 20 CIS Controls provide global system design recommendations, the CIS Benchmarks are detailed guides to securing a specific platform (e.g., Microsoft Windows 10 Enterprise or Ubuntu Linux 16.04). These benchmarks are equivalent to the Security Technical Implementation Guides (STIGs) published by government entities such as the Defense Information Systems Agency (DISA) to protect federal information systems. In addition to the detailed benchmarks, the CIS also provides pre-hardened images of some open source platforms.

Who is the SEI and what have they contributed to secure coding?

The Software Engineering Institute (SEI) at Carnegie Mellon University is a federally funded research and development center that has focused on secure software engineering for over three decades. The DoD has funded SEI since 1984, in large part due to the realization that software systems, particularly military ones, should be built securely. Among the products developed by SEI is a top ten list of secure coding practices, briefly listed here:
a) Validate all inputs
b) Don't ignore compiler warnings
c) Architect for security
d) Avoid unnecessary complexity
e) Deny by default (see the sketch after this list)
f) Use least privilege
g) Don't share data you don't have to
h) Defend in depth
i) Strive for quality
j) Use secure coding standards
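
As a small illustration of items e) and f), here is a hedged Python sketch of a deny-by-default authorization check; the role and permission names are hypothetical.

```python
# Minimal sketch of "deny by default" and "least privilege": an authorization
# check that returns False unless a permission is explicitly granted.
# Role and permission names are hypothetical.
ROLE_PERMISSIONS = {
    "viewer":  {"read_report"},
    "analyst": {"read_report", "run_scan"},
    "admin":   {"read_report", "run_scan", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    # Unknown roles and unknown actions fall through to a denial; nothing is
    # permitted unless it appears in the explicit allowlist above.
    return action in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    print(is_allowed("viewer", "read_report"))    # True: explicitly granted
    print(is_allowed("viewer", "manage_users"))   # False: denied by default
    print(is_allowed("guest", "read_report"))     # False: unknown role denied
```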

How does OWASP contribute to secure coding?

The Open Web Application Security Project (OWASP) is an organization that deals specifically with web security issues. Along with a long list of tools, articles, and resources that developers can follow to create secure software, it also has individual member meetings (chapters) throughout the world. The group provides development guidelines, testing procedures, and code review steps, but it is probably best known for the top ten web application security risk list that it maintains. The top risks identified by this group for web-based software, as of the writing of this book, are as follows:
1) Injection
2) Broken authentication and session management
3) XSS
4) Insecure direct object references
5) Security misconfigurations
6) Sensitive data exposure
7) Missing function-level access controls
8) CSRF attacks
9) Using components with known vulnerabilities
10) Unvalidated redirects and forwards

How does the SANS Institute contribute to secure coding?

The SANS Institute is one of the most respected organizations in the field of information security and cybersecurity training. One of its focus areas is application security (AppSec), and it regularly makes resources available on its web page. SANS has extended its popular "Securing the Human" program, which focuses on user security awareness, to include a specialized thread aimed at developers. Within it, SANS lists a large number of recommendations and best practices, many of which were introduced in this chapter. Here are additional ones of which you should be aware:
1) Display generic error messages: Overly specific error messages are not helpful to legitimate users but can be a treasure trove of information for attackers trying to compromise the system. Error data should be stored securely in the logs so it is available to the people who will actually do something good with it: the O&M team (a minimal sketch follows this list).
2) Implement account lockouts: Although many organizations do this as a matter of habit on their domain controllers, a remarkable number of distributed applications will allow attackers to brute-force passwords unhindered. As a corollary, the mechanism by which users reset their passwords should be secure and, ideally, use two factors.
3) Limit sensitive data use: It is often easier to provide too much information and let the users decide what they need than to carefully analyze every item of sensitive information before allowing it to show up in the system. This is particularly problematic when caching or form auto-completion is permitted in web browsers.
4) Use HTTPS everywhere: Web apps and mobile apps should be using HTTPS.
5) Use parameterized SQL queries: We have already seen how easy and dangerous SQL injection attacks can be if we let user input strings be inserted directly into a SQL query. A very simple solution is to use parameterized queries, in which user inputs are prevented from being interpreted as SQL commands.
6) Automate application deployment: Even secure software can be rendered vulnerable if it is improperly configured. The challenge is that complex systems have dozens (if not hundreds) of configuration parameters. Using automation (e.g., scripts) can ensure that every deployment is done to exactly the same (hopefully secure) standard.
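
As a small illustration of the first recommendation, here is a hedged Python sketch that logs full error detail for the O&M team while showing the user only a generic message; the logger configuration and lookup_account() helper are hypothetical.

```python
# Minimal sketch of "display generic error messages": the detailed exception
# goes to a secure log for the O&M team, while the user only ever sees a
# generic message. Logger setup and lookup_account() are hypothetical.
import logging

logging.basicConfig(filename="app-security.log", level=logging.ERROR)
log = logging.getLogger("accounts")

def lookup_account(account_id: str) -> dict:
    # Hypothetical failure that would leak internal details if shown verbatim.
    raise ConnectionError("db01.internal:5432 refused connection (timeout 3s)")

def handle_account_request(account_id: str) -> str:
    try:
        account = lookup_account(account_id)
        return f"Welcome back, {account['name']}"
    except Exception:
        # Full detail (stack trace, internal hostnames) stays in the log...
        log.exception("Account lookup failed for id=%s", account_id)
        # ...while the user sees nothing an attacker can mine for information.
        return "Something went wrong. Please try again later."

if __name__ == "__main__":
    print(handle_account_request("42"))
```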

