CySA+ Chapter 2: Analyzing the Results of Reconnaissance


Anomaly analysis is a type of Correlation analysis. What is it?

"Anomaly Analysis" attempts to answer the question "is this normal?" Obviously, we must first establish what normal means before we can answer that question. Anomaly-based monitoring uses a database of unacceptable traffic patterns identified by analyzing traffic flows. Anomaly-based systems are dynamic and create a performance baseline of acceptable traffic flows during their implementation process. The process by which we learn the normal state or flows of a system is called "baselining." It is helpful to create a baseline for a collection of systems rather than an individual system alone. Anomaly analysis focuses on measuring the deviations from this baseline and determining whether that deviation is statistically significant. This last part is important because everything changes constantly. The purpose of anomaly analysis is to determine whether the change could be reasonably expected to be there in a normal situation, or whether it is worth investigating. An example of an anomaly would be a sudden increase in network traffic at a user's workstation. Without a baseline, we would not be able to determine whether the traffic spike is normal. Suppose we baselined that particular system, and this amount of traffic is significantly higher than any other data point. The event could be classified as an outlier and deemed anomalous. But, if we took a step back and looked at the baselines for clusters of workstations, we could find out that the event is consistent with workstations being used by a specific type of user (say, in the media team). Once in a while, one of them sends a large burst (say, to upload a finished film clip), but most of the time, they are fairly quiet.

Availability analysis is a type of Correlation analysis. What is it?

"Availability Analysis:" Sometimes, our focus rests on protecting the confidentiality and integrity of our systems at the expense of preventing threats to their availability. Availability analysis is focused on determining the likelihood that our systems will be available to authorized users in a variety of scenarios. Perhaps the most common of these is the mitigation of DDoS attacks, which is in part accomplished by acquiring the services of an anti-DDoS company such as Akamai. These services can be expensive, so an availability analysis could help make the business case for them by determining at which point the local controls would not be able to keep up with a DDoS attack and how much money the org. would lose per unit of time that it was unavailable to its customers. Another application of availability analysis is determining the consequences of the loss of a given asset of set of assets. For example, what would be the effects on the business processes of the loss of a web server, or a database server storing the accounting data or the CEO's computer? Obviously, you cannot realistically analyze every asset in your system, but there are key resources whose unavailability could cripple an org. Performing an availability analysis over those can shed light on how to mitigate the risk of their loss.

Behavioral analysis is a type of Correlation analysis. What is it?

"Behavioral Analysis" also attempts to find anomalous behavior. In fact, both anomalous and behavior analysis terms are used interchangeably. However, behavior analysis looks at multiple correlated data points to define anomalous behavior. Behavior-based systems works by creating a baseline of normal traffic and then comparing that baseline to real-time traffic to detect anomalies. Behavior-based monitoring detects changes in normal operating data sequences and identifies abnormal sequences. When behavior-based systems are installed, they have no performance baseline or acceptable traffic pattern defined. Initially, these systems will report all traffic as a threat. Over time, however, they LEARN which traffic is allowed, or acceptable behavior, and which is not with the assistance of an administrator. For example, it may be normal behavior for a user to upload large files to an Amazon cloud platform during business hours, but it is abnormal for the user to upload large files to a Google cloud platform after hours. So data points relating to size, destination, and time are used together. In a strict interpretation of anomaly analysis, in which data points could be taken in isolation, you might have received two alerts: one for destination and one for time, which you would then have to correlate manually. Note: you should not see questions on the CySA+ exam asking you to differentiate between anomaly and behavior analysis. You could, however, see questions in which you must recall that they are both examples of data correlation and analytics (as opposed to point-in-time data analysis). You should also remember that they both leverage baselines.

What is Bro?

"Bro," AKA Bro-IDS, is both signature and anomaly-based. Instead of only looking for individual packets and deciding whether or not they match a rule, it creates events that are inherently neither good nor bad; they simply say that something happened. An advantage in this approach is that Bro will track sessions to ensure they are behaving as expected, and it keeps track of their state. All this data is retained, which can help with forensic investigations. These events are then compared to policies to see what actions, if any, are warranted. It is here that Bro's power really shines. These policies can do anything, from sending an e-mail or text message to updating internal metrics to disabling a user account. Another powerful feature of Bro is the ability to extract complete executables from network streams and send them to another system for malware analysis.

What is "Correlation Analysis?"

"Correlation Analysis" looks at multiple collections of data in an attempt to find patterns that may point to an event of interest. The tools you use to capture the data oftentimes include at least some basic tools to analyze it. If nothing else, most of them offer filters that allow you to focus on items of interest. These features, however, will typically be helpful only in a pinch when you don't have access to anything else. For your real analysis work, you will need a comprehensive tool with which to simultaneously look at all the data at your disposal.

Heuristics analysis is a type of correlation analysis. What is it?

"Heuristic Analysis" uses known best practices and characteristics in order to identify and fix issues within the network. Heuristic means "rule of thumb" thus best practices. It uses artificial intelligence and is the latest category of IDS and IPS. Heuristics is an approach based on experience rather than theory. There are problems in computing that are known to be provably unsolvable, and yet we are able to use heuristics to get results that are close enough to work for us. This is commonly seen in malware detection. We know it's impossible to detect malware with 100 percent accuracy, but we also know that the majority of malware samples exhibit certain characteristics or behaviors. These, then, become our heuristics for malware detection. Next Generation Firewalls (NGFs) are devices that, in addition the usual firewalling features, include capabilities such as malware detection. This detection is usually accomplished in NGFs through heuristic analysis of the inbound data. The first step is to take a suspicious payload and open it in an a specially instrumented virtual machine (VM) within or under the control of the NGF. The execution of the payload is then observed, looking for telltale malware actions such as replicating itself, adding user accounts, and scanning resources. Obviously, certain malware families might not attempt any of these actions, which is what makes this approach heuristic: it is practical but not guaranteed.

What is Nagios Core?

"Nagios Core," is one the most popular open source resource-monitoring tools. It allows you to track the status of your network devices, including workstations, servers, switches, routers, and indeed anything that can run a Nagios agent or send data to a plug-in. For each of those devices, it can monitor specific metrics , such as processor or disk utilization. When things go wrong, Nagios will log the event and then either send a notification via e-mail or text to take a specific action by running an event handler that can correct the problem. Nagios is remarkably easy to set up initially, but it's also scalable enough to handle complex environments and procedures.

What is Point-in-Time Analysis?

"Point-in-Time Analysis" looks at data pertaining to a specific point or window in time. It is perhaps most familiar to incident responders. This first kind of analysis tends to examine one item, whether it be an alert, a packet, or a system event, looking for interesting information. The amount of data we could analyze on a typical network is immense. For this reason, point-in-time analysis is most helpful when we start off with a clue. For example, if an IDS has generated an alert on a specific session, we may do point-in-time analysis on the packets comprising that session or any system events that were recorded around that time. Either way, we'll be looking at individual items at a point in time in order to discover broader goals or objectives for a threat actor.

Protocol Analysis is a type of Point-in-Time analysis. What is it?

"Protocol Analysis" deals with the way in which the packets conform to the protocol they are supposed to be implementing. For instance, the ICMP allows echo request and echo reply packets to have a payload as along as the total packet length is no greater than the network's MTU. This feature was intended to support diagnostic messages, though, in practice, this is almost never seen. What we do see, however, are threat actors exploiting this protocol to establish ICMP tunnels in which two hosts create a clandestine communications channel using echo requests and replies. Conducting an analysis of ICMP would reveal these channels. Another application of protocol analysis is in determining the security, or, conversely, vulnerabilities of a given protocol. Suppose you purchase or develop an application for deployment in your organization's systems. How would you know the risks it would introduce unless you had a clear understanding of how its protocols were expressed on your network? Performing protocol analyses can be as simple as sniffing network traffic to ensure that all traffic is encrypted, or as complex as mathematical models and simulations to quantify the probabilities of unintended efforts.

What is Snort?

"Snort" is probably the best-known NIDS in the open source community. However, it is more than a NIDS because it can operate as a packet analyzer or as a NIPS. It has a an abundance of rules to choose from. Snort rules have two parts: the header and options. The "header" specifies the action Snort will take (for example, alert or drop) as well as the specific protocol, IP address, port numbers, and directionality (for example, directional or bidirectional). The real power of the rules is in the "options." In this section of the rule, one can specify where exactly to look for signs of trouble as well as what message to display to the user or record in the logs. The following rule shows how to detect a backdoor in the network: Alert tcp $EXTERNAL_NET any -> $HOME_NET 7597 (msg:"MALWARE-BACKDOOR QAZ WORM Client Login access"; content:"qazwsx.hsq";) In this case, we are looking for inbound TCP packets destined for port 7597 containing the text "qazwsx.hsq." If these are found, Snort will raise an alert that says "MALWARE-BACKDOOR QAZ Worm Client Login access."

What is Suricata?

"Suricata" is a more powerful version of Snort. It can use Snort signatures, but can also do a lot more. It is "multithreaded" while Snort is not. Suricata can take advantage of hardware acceleration (that is, using the graphics accelerator to process packets). Like Bro, it can also extract files from the packet flows for retention or analysis. Like both Bro and Snort, Suricata can be used as an IPS.

Traffic analysis is a type of Point-in-Time analysis. What is it?

"Traffic Analysis" is another way to detect anomalous behaviors on your network by examining where the traffic is originating and terminating. If you are monitoring the communications of your nodes in real time and you suddenly see an odd end point, this could be an indicator of a compromise. Admittedly, you would end up with many false positives every time someone decided to visit a new web site. An approach to mitigating these false alarms is to use automation (for example, scripts) to compare the anomalous end points with the IP addresses of known or suspected hosts. Here are some things to look for in a traffic analysis: 1. Look for unknown end points when monitoring in real time 2. Use scripts to compare anomalous endpoints with the IP address of known or suspected hosts. 3. Monitor the volume of traffic in a given portion of your system. Because a large increase in traffic to and from a given host (top talker/top listener) could indicate a compromise 4. You could use a tool called "Etherape," which graphically depicts all known end points, both internal and external to your org., and shows circles around them whose size is proportional to the volume of traffic coming from them at any point in time. A host performing a port scan, for example, would show up as a very large circle (attacker). But then again, so would a server that is streaming high-definition video.

Trend analysis is a part of correlation analysis. What is it and can you name the 3 types of trend analytics?

"Trend Analysis" is the study of patterns over time and why they change. There are a number of applications for this technique. Most commonly, trend analysis is applied to security by tracking evolving patterns of adversaries' behaviors. Every year, a number of well-known security firms will publish their trend analyses and make projections for the next year based on the patterns discovered. This approach would, for example, prompt you to prioritize DDoS mitigations if these attacks are trending up and/or in the direction of your specific sector. "Internal Trends" can reveal emerging risk areas. For example, there may be a trend in your organization to store increasing amounts of data in cloud resources such as Dropbox. Although this may make perfect sense from a business perspective, it could entail new or increased risk exposure for confidentiality, availability, forensic investigations, or even regulatory compliance. By noting this trend, you will be better equipped to decide the point at which the risk warrants a policy change or the acquisition of a managed solution. "Temporal Trends" show patterns related to time. There are plenty of examples of organizations being breached late on Friday night in hopes that the incident will not be detected until 3 days later. Paradoxically, because fewer users will be on the network over the weekend, this should better alert defenders to detect the attack since the background traffic would presumably be lower. Another temporal trend could be an uptick in events in the days leading up to the release of a quarterly statement, or an increase in phishing attempts around tax season. These trends can help us better prepare our technical and human assets for likely threats to come. "Spatial Trends" exist in specific regions. Though we tend to think of cyberspace as being almost independent of the physical world, in truth, every device exists in a very specific place (or a series of places for mobile ones). It is common practice, for instance, to give staff members a "burner" laptop when they travel to certain countries. This device is not allowed to connect to the corporate network, has a limited set of files, and is digitally wiped immediately upon the user's return. This practice is the result of observing a trend of sophisticated compromises of devices traveling to those countries. Another example would be the increasing connection of devices to free Wi-Fi networks at local coffee shops, which could lead to focused security awareness training and the mandated use of VPN connections

What is ntopng?

"ntopng" is arguably one of the most popular open source NetFlow analyzers is "ntopng." This tool is able to act as both a NetFlow collector (which receives and aggregates data from multiple devices) and an analysis console. It also monitors a variety of other network parameters and data not strictly associated with NetFlow. The tools has a web-based interface, is extensible, and runs on most flavors of Linux as well as Mac OS and Windows.

Name 5 valuable data sources mentioned in chapter 2.

1. Firewall Logs, 2. IDS/IPS logs and alerts, 3. Packet Captures, 4. System logs, and 5. Nmap scan results

What are the 4 problems we are presented with during a packet analysis?

1. Full packet captures yield a lot of results, 2. Full packet captures take up a lot of storage, 3. Full packet captures have legal implications, and 4. Encrypted packets cannot be inspected directly.

There are 14 fields in an IP header. Name them

1. Version, 2. Internet Header Length (IHL), 3. Type of Service (ToS), 4. Differentiated Services Code Point (DSCP), which redefines the original ToS field, 5. Total Length, 6. Identification, 7. Flags, 8. Fragment Offset, 9. Time-to-Live (TTL), 10. Protocol, 11. Header Checksum, 12. Source IP address, 13. Destination IP address, and 14. IP Options. This last field is optional and includes subfields such as padding.
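To make the layout concrete, here is a hedged sketch that unpacks the fixed 20-byte portion of an IPv4 header with Python's struct module (any IP Options would follow these 20 bytes).

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte portion of an IPv4 header."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,              # header length in 32-bit words
        "tos_dscp": tos,                    # DSCP redefined the old ToS byte
        "total_length": total_len,
        "identification": ident,
        "flags": flags_frag >> 13,          # top 3 bits of the 16-bit field
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
        "protocol": proto,                  # 1=ICMP, 6=TCP, 17=UDP
        "header_checksum": checksum,
        "src_ip": ".".join(str(b) for b in src),
        "dst_ip": ".".join(str(b) for b in dst),
    }
```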

Packet analysis is a type of Point-in-Time Analysis. What is it?

A "Packet Analysis" can give us a lot of information from packet capture data. In fact, if given enough, one can re-create a very precise timeline of events around any network security incident. The ideal case is one in which there are strategically placed sensors throughout the network doing full packet captures. The resulting data files contain a wealth of information, but can consume enormous amounts of storage space. This can be a challenge, particularly for security teams with limited resources. Another challenge can be finding the data in such a sea of packet captures.

Which kind of packet capture technique is preferred for a resource-limited environment?

A header capture

What's one way mentioned in chapter 2 that we can side-step the encryption problem involved with packet capture analyses?

A way to address this issue is to use HTTPS, or "SSL proxies," which are proxy servers that terminate TLS or SSL connections, effectively acting like a trusted MitM that allows the organization to examine or capture the contents of the otherwise encrypted session. If an organization controls the configuration of all clients on its network, it is not difficult to add the proxy's certificate authority (CA) to its browsers' trusted roots, so that users will not notice anything odd when they connect to an encrypted site through a decrypting proxy.

Why might it be useful to analyze individual system logs in addition to network traffic?

Because activity on these endpoints is usually much easier to observe than their network traffic. Think about it: on a large network, analyzing all that traffic in Wireshark can be hard.

What is the name of the Windows utility that allows you to easily monitor various programs, security, and system activity logs on your computer?

Event Viewer.

We can implement a packet capture by doing a "header capture" or a "full capture." Why wouldn't you want to do a full packet capture?

Full packet captures are obviously better than header captures; however, they require very large data stores, introduce legal issues, and raise concerns regarding the privacy of the captured data.

Log analysis is a type of correlation analysis. What is it?

Log analysis is usually a centralized method of collecting and analyzing logs. Broadly speaking, these tools fall into 3 categories:
1. "Security Information and Event Management (SIEM)" systems collect data from a variety of sensors, perform pattern matching and correlation of events, generate alerts, and provide dashboards that allow analysts to see the state of the network. One of the best-known commercial solutions is "Splunk," while on the open source side, the "Elasticsearch-Logstash-Kibana (ELK)" stack is very popular.
2. "Big Data Analytics Solutions" are designed to deal with massive data sets that are typically beyond the range of SIEMs. The term "big data" refers to data sets that are so big in terms of volume (that is, the number of records), velocity (the rate at which new records are added), and variety (the number of different data formats) that traditional databases cannot handle them. Big data platforms are normally used to complement SIEMs, not to replace them.
3. "Locally Developed Analytics Solutions" are typically scripts developed in-house by security analysts. PowerShell and Python are popular languages in which to develop these tools, which are typically built to perform very specific functions in addition to or in lieu of a SIEM (a sketch follows this list).
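The sketch referenced in item 3: a small locally developed script that counts failed SSH logins per source IP. The log path and message format assume a typical Linux auth log.

```python
import re
from collections import Counter

# Matches the standard sshd failure message, e.g.
# "Failed password for root from 203.0.113.7 port 41234 ssh2"
pattern = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

failed = Counter()
with open("/var/log/auth.log") as log:  # path varies by distribution
    for line in log:
        match = pattern.search(line)
        if match:
            failed[match.group(1)] += 1

for ip, count in failed.most_common(5):
    print(f"{ip}: {count} failed logins")
```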

A packet analysis can be difficult, especially with a full packet capture, but most packet capturing tools give us the ability to do two things to assist us. What are they?

Most packet analysis tools will allow you to filter data in two ways:
1. Filter the capture in real time.
2. Filter the display after the capture is complete.
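Both approaches can be sketched with the third-party scapy library (assumed installed; live capture usually requires root privileges). The filter expressions below are illustrative.

```python
from scapy.all import sniff, TCP

# 1. Filter the capture in real time: a BPF capture filter discards
#    non-matching packets before they are ever recorded
pkts = sniff(filter="tcp port 443", count=100)

# 2. Filter the display after the capture: everything was recorded,
#    and we narrow the view afterwards
to_port_443 = [p for p in pkts if p.haslayer(TCP) and p[TCP].dport == 443]
print(f"{len(to_port_443)} of {len(pkts)} packets were destined for port 443")
```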

Explain how NetFlow works

NetFlow is based on the idea of flows from one specific place to another. When a packet arrives at a NetFlow-enabled network device and does not belong to any known flow, the device will create a new flow for it and start tracking any other related packets. After a preset amount of time elapses with no more packets in a flow, that flow is considered finished. Each of these flows is cached in a "flow cache." A single entry in a flow cache normally contains information such as destination and source addresses, destination and source ports, the interface on the device that observed the flow, and the total number of bytes of that flow. The NetFlow-enabled device will aggregate statistics about the flow in the flow cache, such as duration, number of packets, and number of bytes, and then export the record. NetFlow collectors (typically a server doing traffic analysis) will then receive the data, clean it up a bit if necessary, and store it. The final component of the system is the analysis console, which allows analysts to examine the data and turn it into actionable information. Notice that the flow data is only available for analysis AFTER the flow has ended. This means that this type of analysis is better suited for forensic investigations than for real-time mitigation of attacks. Furthermore, NetFlow captures aggregate statistics and not detailed information about the packets. This type of analysis is helpful in the early stages of an investigation to point analysts toward the specific packets that should be analyzed in detail (assuming the organization is also doing packet captures). A popular analysis console for this data is "LiveAction."
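A toy flow cache makes the grouping concrete. This is deliberately simplified: a real NetFlow device also tracks timers and exports each record once its flow expires. All values are invented.

```python
from collections import defaultdict

# Flow cache keyed by the fields NetFlow uses to group packets into a flow
flow_cache = defaultdict(lambda: {"packets": 0, "bytes": 0})

def record_packet(iface, src_ip, dst_ip, src_port, dst_port, tos, length):
    """Aggregate a packet into its flow's cache entry."""
    key = (iface, src_ip, dst_ip, src_port, dst_port, tos)
    flow_cache[key]["packets"] += 1
    flow_cache[key]["bytes"] += length

record_packet("eth0", "10.0.0.5", "8.8.8.8", 51514, 53, 0, 74)
record_packet("eth0", "10.0.0.5", "8.8.8.8", 51514, 53, 0, 90)
print(dict(flow_cache))  # one flow: 2 packets, 164 bytes
```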

NetFlow analysis is a type of Point-in-Time analysis. What is it?

NetFlow is a system developed by Cisco as a packet-switching technology in the late 1990s. Although it didn't serve that role for long, it was repurposed to provide statistics on network traffic, which is why it's important for analysts today. It works by grouping packets into "flows" that share the following characteristics: 1. Arrival interface at the network device (for example, switch or router), 2. Source and destination IP addresses, 3. Source and destination port numbers (or the value of zero if not TCP or UDP), and 4. IP type of service.

Nmap can do port scanning, which identifies new services and misconfiguration changes of devices on your network. But, what's one other important thing it can do for your security?

One of the most important defensive measures you can take is to maintain an accurate inventory of all the hardware and software on your network. Nmap can help with this, but various open source and commercial solutions are also available for IT asset management.
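One hedged way to fold Nmap into inventory upkeep is to script a ping sweep and parse the XML output produced by -oX. The subnet below is hypothetical, and nmap must be installed and on the PATH.

```python
import subprocess
import xml.etree.ElementTree as ET

# -sn: host discovery only (no port scan); -oX: write results as XML
subprocess.run(["nmap", "-sn", "-oX", "hosts.xml", "192.168.1.0/24"], check=True)

tree = ET.parse("hosts.xml")
for host in tree.findall(".//host"):
    addr = host.find("address").get("addr")
    state = host.find("status").get("state")
    print(addr, state)
```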

You know what a firewall does and how it works, but here are a couple things you should know for the exam.

Pay attention to both inbound and outbound traffic. Be aware you can configure the amount of information a firewall logs, but most already provide ample logs by default.

What is ELK and how does it work?

ELK is an open source SIEM system. Elasticsearch-Logstash-Kibana (ELK) is not a tool as much as it is a system of tools. The name is an acronym for its three main component tools, all of which are open source. Elasticsearch is one of the most popular search engines. Its main purpose is to index data so that it can quickly search large data sets for specific attributes. Elasticsearch takes care of storing the data, but does not provide a high degree of durability compared to other data management systems. Logstash is a processing pipeline that ingests data from a variety of sources (such as firewalls and event logs), performs transformations on it (for example, removing PII from records), and then forwards it to a data store (or stash). Kibana is a visualization plug-in for Elasticsearch that allows you to develop custom visualizations and reports. It comes with predefined queries and visualizations for common security analytics, but also enables the creation of custom queries. Together, the ELK stack performs many of the same functions as Splunk, but the commercial solution has extra features.
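To show the Elasticsearch piece in action, this sketch queries its REST _search API with the third-party requests library; the index name and field are hypothetical, and the server address assumes a default local install.

```python
import requests

query = {
    "query": {"match": {"event_action": "logon-failed"}},  # hypothetical field/value
    "size": 10,
}
resp = requests.get("http://localhost:9200/security-logs/_search", json=query)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])  # each hit's original document
```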

What is Splunk and how does it work?

Splunk is a commercial SIEM solution. Splunk can ingest data from virtually any source. This data is then indexed and stored. One of the features of Splunk is that it allows you to ingest the raw data straight into the "Indexer" by using a "Universal Forwarder," or you can do the preprocessing and indexing near the source using a "Heavy Forwarder" and then send the Indexer a smaller amount of data that has already been processed. The second approach is helpful to reduce the amount of data that needs to travel from the individual sources to the indexers. When you consider that large corporate environments can generate hundreds of gigabytes of data each day, Heavy Forwarders make a lot of sense. Once at the Indexers, the data is stored redundantly to improve availability and survivability. Apart from Forwarders and Indexers, the other main component of Splunk is the "Search Head," a web-based front end that allows users to search and view the data.

What is syslog?

Syslog is a messaging protocol developed at the University of California, Berkeley, to standardize system event reporting. The syslog server gathers syslog data over UDP port 514 (or TCP port 514 when message delivery must be guaranteed).
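Python's standard library can send events to such a server; here is a minimal sketch, assuming a reachable syslog server (the hostname is hypothetical).

```python
import logging
import logging.handlers

# SysLogHandler defaults to UDP (SOCK_DGRAM) when given a (host, port) tuple
handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("Disk utilization above 80 percent")  # mapped to syslog severity 4 (Warning)
```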

Where is the default location for the UNIX and Linux application logs located?

The default location for UNIX and Linux OS and application logs is the "/var/log" directory.

What is the local syslog process in UNIX and Linux environments called?

The local syslog process in UNIX and Linux environments, called "syslogd," collects messages generated by the device and stores them locally on the file system. This includes embedded systems found in routers, switches, and firewalls, which use variants and derivatives of the UNIX system. There is, however, no preinstalled syslog agent in the Windows environment.

Here is some useful info to know about a wireless analysis

The most important step in a security analysis of your WLAN is knowing your devices: know your WAPs and your wireless clients. How do you know when something is wrong? Create a baseline, so you know what "normal" is supposed to look like. Have a known-good list of WAPs and client devices, including each WAP's protocol, channel, location, and MAC/IP address. Finding rogue APs can be hard, since attackers can change the MAC address of an AP, so it is wise to use WPA2 Enterprise and IEEE 802.1X. Absent authentication, you will have a difficult time identifying all but the most naïve intruders connected to your WLAN.

Look at the captured packet. What is the total size, in bytes, of the packet's payload?

The total size of the packet is 120 bytes. The header is 20 bytes. 120 - 20 = 100 bytes. Therefore, the payload is 100 bytes.

There are two types of analysis. What are they?

The two types of analysis are Point-in-Time Analysis and Correlation Analysis.
Point-in-Time Analysis includes the following: 1. Packet analysis, 2. Protocol analysis, 3. Traffic analysis, 4. NetFlow analysis, 5. Wireless analysis.
Correlation Analysis includes the following: 1. Log analysis, 2. Anomaly analysis, 3. Behavioral analysis, 4. Heuristics analysis, 5. Trend analysis, 6. Availability analysis.

There are two types of system logs you should know for the exam. What are they?

The two types of system logs are the "Windows Event Log" and "Syslog."

Wireless analysis is a type of Point-in-Time analysis. What is it?

To conduct a WLAN analysis, you must first capture data. The wireless NIC must be placed from "Managed" mode into "Monitor" mode to see all available WLANs and their characteristics without connecting to any. This is the mode we need in order to perform a WLAN audit. Fortunately, WLAN analyzers, such as "Kismet," take care of these details and allow us to simply run the application and see what is out there.

What is the Windows Event Log?

Windows has an event logging system, known simply as the "Windows Event Log." You can access the Windows Event logs by opening "Event Viewer" (for example, by running eventvwr from a command prompt). Each log entry is called an "event" and is assigned a unique identifier, a timestamp, and a description. Writing to these log files is done through the Windows Event Log service rather than directly by applications. Windows defines 3 possible sources of logs in the Windows Event Log:
1. "System log:" The System log can only be written to by the OS itself. An example might be a network connection that was made.
2. "Application log:" The Application log may be written to by ordinary applications. An example is an application unexpectedly crashing.
3. "Security log:" The Security log can only be written to by a special Windows service, known as the Local Security Authority Subsystem Service, visible in Process Explorer as lsass.exe. This service is responsible for enforcing security policies, such as access control and user authentication. An example is a user failing to properly authenticate. The two kinds of security log entries you should be on the lookout for are failed logons and user account changes.
Note: These 3 sources are all referred to as "audit logs." Syslog is not a native Windows application, even in Windows Server 2012. You'll have to download and install a third-party syslog agent for Windows operating systems.

In the Windows Event Log, if you wanted to specify exactly which types of events you'd like to get more detail on, what could you do?

You can specify exactly which types of events you'd like to get more detail on by using the "Filter Current Log" option in the side panel.

What are some of the syslog severity codes?

The syslog severity codes, as defined by RFC 5424, are: 0 Emergency, 1 Alert, 2 Critical, 3 Error, 4 Warning, 5 Notice, 6 Informational, and 7 Debug.

