Bonus 1 - CCSP/CCSK - Certified Cloud Security Professional


Traditional networking model

A layered approach with physical switches at the top layer and logical separation at the hypervisor level.

Remote Desktop Protocol (RDP)

A protocol that allows for separate channels for carrying presentation data, serial device communication, licensing information, and highly encrypted data (keyboard, mouse activity).

Cloud Provider

A service provider who offers customers storage or software solutions available via a public network, usually the Internet.

Content Delivery Network (CDN)

A service where data is replicated across the global Internet.

Federal Information Security Management Act (FISMA) of 2002

Requires that federal agencies implement an information security program that covers the agency's operations. Also requires that government agencies include the activities of contractors in their security management programs.

Datacenter Tiers

Tier 4 (best): 99.995% availability
Tier 3: 99.982%
Tier 2: 99.741%
Tier 1 (worst): 99.671%
Emergency egress redundancy can be found across all tiers. Health and human safety is the #1 priority.
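Availability percentages map directly to permitted downtime. A minimal sketch, using the commonly cited Uptime Institute tier figures for illustration:

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of permitted downtime per year for a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# Commonly cited Uptime Institute tier availabilities (illustrative).
for tier, pct in [("Tier 4", 99.995), ("Tier 3", 99.982),
                  ("Tier 2", 99.741), ("Tier 1", 99.671)]:
    print(f"{tier}: ~{downtime_minutes_per_year(pct):.0f} min/yr")
```

Tier 4's 99.995% works out to roughly 26 minutes of downtime per year, versus roughly 29 hours for Tier 1.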

security incident

An event with a negative outcome affecting the confidentiality, integrity, or availability of an organization's data or systems.

Personal Data

Any information relating to an identified or identifiable natural person data subject; an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural, or social identity.

Restatement (Second) Conflict Law

Basis used for determining which laws are most appropriate in a situation where conflicting laws exist.

ISO 27017

Cloud-specific security controls; extends the guidance of ISO/IEC 27002 for cloud services.

Docker

A computer program that performs operating-system-level virtualization, also known as containerization. Containers are designed to be stateless (the server does not retain session information or status about each communicating partner across multiple requests) and ephemeral (lasting for a very short time). Users can leverage data volumes inside the Docker container or use an NFS backend for persistent storage.

Honeypot

Consists of a computer, data, or a network site that appears to be part of a network, but is actually isolated and monitored, and which seems to contain information or a resource of value to attackers.

Business requirements

Drive security controls.

European Union Agency for Network and Information Security (ENISA)

EU agency created to advance the functioning of the internal market. ENISA is a centre of excellence for the European Member States and European institutions in network and information security, giving advice and recommendations and acting as a switchboard for information on good practices. Its top eight security risks, ranked by likelihood and impact (out of 35 identified): loss of governance, vendor lock-in, isolation failure, compliance risks, management interface compromise, data protection, insecure or incomplete data deletion, and malicious insider.

Virtualization Technologies

Enable cloud computing to become a real and scalable service offering due to the savings, sharing, and allocation of resources across multiple tenants and environments. Virtualization risks include but are not limited to the following:
■■ Guest breakout: This occurs when there is a breakout of a guest OS so that it can access the hypervisor or other guests. This is presumably facilitated by a hypervisor flaw.
■■ Snapshot and image security: The portability of images and snapshots makes people forget that images and snapshots can contain sensitive information and need protecting.
■■ Sprawl: This occurs when you lose control of the amount of content on your image store.

Private Cloud Project

Enables an organization's IT infrastructure to become more capable of quickly adapting to continually evolving business needs and requirements.

BCP / DR Kit

Flashlight, documentation equipment, asset inventory...

Database as a Service

In essence, a managed database service.

Layer - Physical Resource

Includes all the physical resources used to provide cloud services, most notably, the hardware and the facility.

Gap analysis

Its primary purpose is to start the benchmarking process.

Quality of Service (QoS)

Refers to the capability of a network to provide better service to selected network traffic over various technologies, including Frame Relay, Asynchronous Transfer Mode (ATM), Ethernet and 802.1 networks, SONET, and IP-routed networks that may use any or all of these underlying technologies

Policies

Should be maintained, reviewed, and enforced.

Authorization

The granting of right of access to a user, program, or process.

BYOD

Tips: DLP, local encryption, multi-factor authentication.

Secure Design and Development

Train > Define > Design > Develop > Test
Training: Three different roles will require two new categories of training. Development, operations, and security should all receive additional training on cloud security fundamentals (which are not provider specific), as well as appropriate technical security training on any specific cloud providers and platforms used on their projects. There is typically greater developer and operations involvement in directly architecting and managing the cloud infrastructure, so baseline security training that's specific to the tools they will use is essential.
Define: The cloud user determines the approved architectures or features/tools for the provider, security standards, and other requirements. This might be tightly coupled to compliance requirements, listing, for example, what kind of data is allowed onto which cloud services (including individual services within a larger provider). At this step the deployment processes should also be defined, although that is sometimes finalized later in a project. Security standards should include the initial entitlements for who is allowed to manage which services in the cloud provider, which is often independent of the actual application architecture. It should also include pre-approved tools, technologies, configurations, and even design patterns.
Design: During the application design process, especially when PaaS is involved, the focus for security in cloud is on architecture, the cloud provider's baseline capabilities, cloud provider features, and automating and managing security for deployment and operations. We find that there are often significant security benefits to integrating security into the application architecture since there are opportunities to leverage the provider's own security capabilities. For example, inserting a serverless load balancer or message queue could completely block certain network attack paths. This is also where you perform threat modeling, which must also be cloud and provider/platform specific.
Develop: Developers may need a development environment with administrative access to the cloud management plane so that they can configure networks, services, and other settings. This should never be a production environment or hold production data. Developers will also likely use a CI/CD pipeline, which must be secured—especially the code repository.
Test: Security testing should be integrated into the deployment process and pipeline. Testing tends to span this and the Secure Deployment phase, but leans towards security unit tests, security functional tests, Static Application Security Testing (SAST), and Dynamic Application Security Testing (DAST). Due to the overlap, we cover the cloud considerations in more depth in the next section. Organizations should also rely more on automated testing in cloud. Infrastructure is more often in scope for application testing due to "infrastructure as code," where the infrastructure itself is defined and implemented through templates and automation. As part of security testing, consider requiring flagging features for security-sensitive capabilities that may require deeper security review, such as authentication and encryption code.

Physical access

Unlikely; vendors prefer not to reveal the layout and site controls to outsiders.

OWASP Top 10

■■ "A1—Injection: Injection flaws, such as SQL, OS, and LDAP injection occur when untrusted data is sent to an interpreter as part of a command or query. The attacker's hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization.
■■ "A2—Broken Authentication and Session Management: Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users' identities.
■■ "A3—Cross-Site Scripting (XSS): XSS flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation or escaping. XSS allows attackers to execute scripts in the victim's browser, which can hijack user sessions, deface websites, or redirect the user to malicious sites.
■■ "A4—Insecure Direct Object References: A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, or database key. Without an access control check or other protection, attackers can manipulate these references to access unauthorized data.
■■ "A5—Security Misconfiguration: Good security requires having a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. Secure settings should be defined, implemented, and maintained, as defaults are often insecure. Additionally, software should be kept up to date.
■■ "A6—Sensitive Data Exposure: Many web applications do not properly protect sensitive data, such as credit cards, tax IDs, and authentication credentials. Attackers may steal or modify such weakly protected data to conduct credit card fraud, identity theft, or other crimes. Sensitive data deserves extra protection such as encryption at rest or in transit, as well as special precautions when exchanged with the browser.
■■ "A7—Missing Function Level Access Control: Most web applications verify function-level access rights before making that functionality visible in the UI. However, applications need to perform the same access control checks on the server when each function is accessed. If requests are not verified, attackers will be able to forge requests in order to access functionality without proper authorization.
■■ "A8—Cross-Site Request Forgery (CSRF): A CSRF attack forces a logged-on victim's browser to send a forged HTTP request, including the victim's session cookie and any other automatically included authentication information, to a vulnerable web application. This allows the attacker to force the victim's browser to generate requests the vulnerable application thinks are legitimate requests from the victim.
■■ "A9—Using Components with Known Vulnerabilities: Components, such as libraries, frameworks, and other software modules, almost always run with full privileges. If a vulnerable component is exploited, such an attack can facilitate serious data loss or server takeover. Applications using components with known vulnerabilities may undermine application defences and enable a range of possible attacks and impacts.
■■ "A10—Unvalidated Redirects and Forwards: Web applications frequently redirect and forward users to other pages and websites, and use untrusted data to determine the destination pages. Without proper validation, attackers can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages."
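The injection flaw in A1 is easiest to see with a small example. A minimal sketch using Python's built-in sqlite3 module; the table name, columns, and hostile input are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

hostile = "nobody' OR '1'='1"  # attacker-supplied "username"

# Vulnerable: untrusted data concatenated directly into the query (A1).
leaked = conn.execute(
    "SELECT role FROM users WHERE name = '" + hostile + "'"
).fetchall()
print(leaked)  # [('admin',)] -- the injected OR clause matched every row

# Safe: a parameterized query binds the input as data, never as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (hostile,)
).fetchall()
print(safe)  # [] -- no user is literally named "nobody' OR '1'='1"
```

The safe variant works because the database driver treats the bound value purely as a string to compare against, never as query syntax.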

SAST

A set of technologies that analyze app source code, byte code, and binaries for coding and design problems that would indicate a vulnerability.

Cloud - Privacy & Security Concerns (rare call out here)

Principal-Agent Problem. The principal-agent problem occurs when the incentives of the agent (i.e., the cloud provider) are not aligned with the interests of the principal (i.e., the organization). Because it can be difficult to determine the level of effort a cloud provider is exerting towards security and privacy administration and remediation, the concern is that the organization might not recognize if the service level is dropping or has dropped below the extent required. One confounding issue is that increased security efforts are not guaranteed to result in noticeable improvements (e.g., fewer incidents), in part because of the growing amounts of malware and new types of attacks.
Attenuation of Expertise. Outsourced computing services can, over time, diminish the level of technical knowledge and expertise of the organization, since management and staff no longer need to deal regularly with technical issues at a detailed level. As new advancements and improvements are made to the cloud computing environment, the knowledge and expertise gained directly benefit the cloud provider, not the organization. Unless precautions are taken, an organization can lose its ability to keep up to date with technology advances and related security and privacy considerations, which in turn can affect its ability to plan and oversee new information technology projects effectively and to maintain accountability over existing cloud-based systems.

Record

A data structure or collection of information that must be retained by an organization for legal, regulatory or business reasons.

Cloud Database

A database accessible to clients from the cloud and delivered to users on demand via the Internet.

Database Activity Monitoring (DAM)

A database security technology for monitoring and analyzing database activity that operates independently of the database management system (DBMS) and does not rely on any form of native (DBMS-resident) auditing or native logs such as trace or transaction logs.

Hardware Security Module (HSM)

A device that can safely store and manage encryption keys. This can be used in servers, data transmission, protecting log files, etc.

Personal Cloud Storage

A form of cloud storage that applies to storing an individual's data in the cloud and providing the individual with access to the data from anywhere.

Security Information and Event Management (SIEM)

A centralized collection and monitoring of security and event logs from different systems. SIEM allows for the correlation of different events and early detection of attacks. Should include: centralization of logs, trend analysis, and dashboarding.
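As a toy illustration of the correlation idea — flagging repeated failures across centrally collected logs — here is a minimal sketch; the event format and alert threshold are assumptions, not from any particular SIEM product:

```python
from collections import Counter

# Centralized log stream: (source IP, event type) pairs from different systems.
events = [
    ("10.0.0.5", "login_failed"),
    ("10.0.0.5", "login_failed"),
    ("10.0.0.9", "login_ok"),
    ("10.0.0.5", "login_failed"),
]

THRESHOLD = 3  # assumed alert threshold for repeated failures

# Correlate: count failures per source and alert when the threshold is hit.
failures = Counter(src for src, kind in events if kind == "login_failed")
alerts = [src for src, count in failures.items() if count >= THRESHOLD]
print(alerts)  # ['10.0.0.5']
```

A real SIEM adds time windows, normalization of heterogeneous log formats, and many correlation rules, but the core pattern is the same: aggregate events from many sources, then match patterns across them.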

Erasure coding (EC)

A method of data protection in which data is broken into fragments, expanded and encoded with redundant data pieces and stored across a set of different locations or storage media.
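A minimal single-parity sketch of the idea in Python. Real erasure codes such as Reed-Solomon tolerate multiple simultaneous losses; this XOR version tolerates exactly one, but shows how a redundant fragment lets you rebuild a lost one:

```python
from functools import reduce

def xor_parity(fragments: list[bytes]) -> bytes:
    """Redundant fragment: byte-wise XOR of equal-length data fragments."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments)

def rebuild_missing(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover the one lost fragment from the survivors plus the parity."""
    return xor_parity(surviving + [parity])

fragments = [b"frag", b"ment", b"s_ok"]   # data broken into equal fragments
parity = xor_parity(fragments)            # stored on a separate location

# Lose fragments[1]; rebuild it from the rest.
restored = rebuild_missing([fragments[0], fragments[2]], parity)
print(restored)  # b'ment'
```

Because XOR is its own inverse, XOR-ing the survivors with the parity cancels them out and leaves the missing fragment.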

Infrastructure as a Service (IaaS)

A model that provides a complete infrastructure (e.g., servers, internetworking devices) and allows companies to install software on provisioned servers and control the configurations of all devices. Key features:
- Scale (ultimate control to ensure the necessary resources are available)
- Converged network and IT capacity pool
- Self-service and on-demand capacity

Cloud OS

A phrase frequently used in place of Platform as a Service (PaaS) to denote an association to cloud computing.

Patch management

A patch management process should address the following items:
■ Vulnerability detection and evaluation by the vendor
■ Subscription mechanism to vendor patch notifications
■ Severity assessment of the patch by the receiving enterprise using that software
■ Applicability assessment of the patch on target systems
■ Opening of tracking records in case of patch applicability
■ Customer notification of applicable patches, if required
■ Change management
■ Successful patch application verification
■ Issue and risk management in case of unexpected troubles or conflicting actions
■ Closure of tracking records with all auditable artifacts
Key issues:
■ There's a lack of service standardization. For enterprises transitioning to the cloud, lack of standardization is the main issue. For example, a patch management solution tailored to one customer often cannot be used or easily adopted by another customer.
■ Patch management is not simply using a patch tool to apply patches to endpoint systems, but rather a collaboration of multiple management tools and teams, such as change management and patch advisory tools.
■ In a large enterprise environment, patch tools need to be able to interact with a large number of managed entities in a scalable way and handle the heterogeneity that is unavoidable in such environments.
■ To avoid problems associated with automatically applying patches to endpoints, thorough testing of patches beforehand is absolutely mandatory.
Note:
■ When a customer's VMs span multiple time zones, patches need to be scheduled carefully so the correct behavior is implemented.
■ For some patches, the correct behavior is to apply the patches at the same local time of each VM, such as applying MS98-021 from Microsoft to all Windows machines at 11:00 p.m. of their respective local time.
■ For other patches, the correct behavior is to apply at the same absolute time to avoid a mixed-mode problem where multiple versions of software are concurrently running, resulting in data corruption.
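The two scheduling behaviors in the note can be sketched with Python's zoneinfo module. The fleet names, time zones, and the 11:00 p.m. window below are illustrative assumptions:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical fleet: VM name -> IANA time zone.
fleet = {"vm-nyc": "America/New_York", "vm-tokyo": "Asia/Tokyo"}

# Same local time: each VM patches at 23:00 in its own zone, so the
# absolute (UTC) patch moments differ per VM.
local_windows = {
    vm: datetime(2024, 3, 1, 23, 0, tzinfo=ZoneInfo(tz))
    for vm, tz in fleet.items()
}

# Same absolute time: a single UTC instant for every VM, avoiding a
# mixed-mode window where old and new software versions run concurrently.
absolute_window = datetime(2024, 3, 1, 23, 0, tzinfo=timezone.utc)

for vm, when in sorted(local_windows.items()):
    print(vm, "patches at", when.astimezone(timezone.utc), "UTC")
```

Converting the per-VM local windows to UTC makes it obvious they are hours apart, which is exactly the mixed-mode exposure the absolute-time strategy avoids.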

Restatement (second) conflict of laws

A restatement is a collation of developments in the common law (that is, judge made law, not legislation) that inform judges and the legal world of updates in the area. Conflict of laws relates to a difference between the laws. In the United States, the existence of many states with legal rules often at variance makes the subject of conflict of laws especially urgent. The restatement (second) conflict of laws is the basis for deciding which laws are most appropriate when there are conflicting laws in the different states. The conflicting legal rules may come from U.S. federal law, the laws of U.S. states, or the laws of other countries.

Unified interface

A unified interface (management interface and APIs) for infrastructure and application services (when using PaaS) provides a more comprehensive view and better management compared to the traditional disparate systems and devices (load balancers, servers, network devices, firewalls, ACLs, etc.), which are often managed by different groups. This creates opportunities to reduce security failures due to lack of communication or full-stack visibility.

Quantitative analysis

ALE = SLE × ARO. SLE = AV × EF. EF is expressed as a percentage; ARO as a decimal (expected occurrences per year).
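Worked through in code; the asset value, exposure factor, and ARO below are illustrative assumptions:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV x EF; EF is the fraction of the asset lost per incident."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE x ARO; ARO is the expected number of incidents per year."""
    return sle * aro

# Example: a $200,000 asset losing 25% of its value per incident,
# with one incident expected every two years (ARO = 0.5).
sle = single_loss_expectancy(asset_value=200_000, exposure_factor=0.25)
ale = annualized_loss_expectancy(sle, aro=0.5)
print(sle, ale)  # 50000.0 25000.0
```

The ALE figure is what gets compared against the annual cost of a control to decide whether the control is worth buying.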

Cloud Computing Accounting Software

Accounting software that is hosted on remote servers.

Control

Acts as a mechanism to restrict a list of possible actions down to allowed or permitted actions.

ISO/IEC 27018

Addresses the privacy aspects of cloud computing for consumers; the first international set of privacy controls in the cloud.

ISO 28000

Addresses risks in a supply chain.

Process

An administrative control, so long as it has the word "process" in it.

Health Insurance Portability and Accountability Act of 1996 (HIPAA)

Adopted national standards for electronic healthcare transactions and national identifiers for providers, health plans, and employers. Protected Health Information (PHI) can be stored via cloud computing under HIPAA.

Edge Network

An edge server often serves as the connection between separate networks. A primary purpose of a CDN edge server is to store content as close as possible to a requesting client machine, thereby reducing latency and improving page load times. An edge server is a type of edge device that provides an entry point into a network. Other edge devices include routers and routing switches. Edge devices are often placed inside Internet exchange points (IXPs) to allow different networks to connect and share transit. If a network wants to connect to another network or the larger Internet, it must have some form of bridge in order for traffic to flow from one location to another. A hardware device that creates this bridge on the edge of a network is called an edge device.

Escrow Agreement

An escrow is a contractual arrangement in which a third party receives and disburses money or documents for the primary transacting parties, with the disbursement dependent on conditions agreed to by the transacting parties.

Patching

Automated: fast to deploy compared to manual patching. Manual: overseen by admins, so problems can be identified faster.

Hypervisor - Type 1

Bare metal (e.g., SoftLayer). Configure hypervisors to isolate virtual machines from each other.

ENISA, the Cloud Certification Schemes List (CCSL)

CBK...pdf...page 453

Cloud Service Provider (CSP)

Company that provides cloud-based platform, infrastructure, application or storage services to other organizations or individuals, usually for a fee.

Layer - Service

Defines the basic services provided by cloud providers.

STRIDE Threat Model

Derived from an acronym for the following six threat categories: Spoofing identity, Tampering with data, Repudiation, Information disclosure, Denial of service, Elevation of privilege.

ISO 31000:2009

Risk management standard: covers the design, implementation, and management of risk management frameworks.

Cross Training

Resiliency technique that can help reduce the possible loss of functional capabilities during contingency operations.

risk-management process

Has four components:
■ Framing risk (how organizations assess, respond to, and monitor risk)
■ Assessing risk
■ Responding to risk
■ Monitoring risk

Cloud Architect

He or she will determine when and how a private cloud meets the policies and needs of an organization's strategic goals and contractual requirements (from a technical perspective).

ISO IEC 27001:2013

Helps organizations to establish and maintain an ISMS. An ISMS is a set of interrelated elements that organizations use to manage and control information security risks and to protect and preserve the confidentiality, integrity, and availability of information.

Training Types

Initial, recurring and refresher.

Data Fragmentation

Involves splitting a data set into smaller fragments (or shards), and distributing them across a large number of machines.

NIST SP 800-53

Its primary goal and objective is to ensure that appropriate security requirements and security controls are applied to all U.S. Federal Government information and information management systems.

CSA STAR program

Level 3 (highest): Continuous monitoring
Level 2: External third-party attestation
Level 1 (lowest): Self-assessment

NIST 800-92

Guide to computer security log management.

Trademark

Logo, symbol, phrase, or color scheme.

Host Intrusion Detection Systems (HIDS)

Monitors the inbound and outbound packets from the device only and will alert the user or administrator if suspicious activity is detected.

Policies

Risks to the organization:
■ Reputational damage
■ Regulatory and legal consequences
■ Misuse and abuse of systems and resources
■ Financial loss
■ Irretrievable loss of data
Functional policies:
■ Information security policy
■ Information technology policy
■ Data classification policy
■ Acceptable usage policy
■ Network security policy
■ Internet use policy
■ Email use policy
■ Password policy
■ Virus and spam policy
■ Software security policy
■ Data backup policy
■ Disaster recovery (DR) policy
■ Remote access policy
■ Segregation of duties policy
■ Third-party access policy
■ Incident response and management policy
■ Human resources security policy
■ Employee background checks
■ Legal compliance guidelines

Node

Physical connection.

Patent

Protects a process; handled by the USPTO.

Data analytics modes

Real-time analytics, data mining, and agile business intelligence.

Electronic Discovery. (eDiscovery)

Refers to any process in which electronic data is sought, located, secured, and searched with the intent of using it as evidence in a civil or criminal legal case. Involves the identification, collection, processing, analysis, and production of Electronically Stored Information (ESI) in the discovery phase of litigation.

Accounting Group (AICPA)

Responsible for GAAP, aka Generally Accepted Accounting Principles.

Cloud Operator

Responsible for daily operational tasks and duties that focus on cloud maintenance and monitoring activities.

Cloud Storage Administrator

Responsible for mapping, segregation, bandwidth, and reliability of storage volumes.

Cloud Service Manager

Responsible for policy design, business agreements, pricing model, and some elements of the SLA.

SAS70

Retired.

Gap benefit analysis

■■ Resource pooling: Resource sharing is essential to the attainment of significant cost savings when adopting a cloud computing strategy. This is often coupled with pooled resources being used by different consumer groups at different times.
■■ Shift from CapEx to OpEx: The shift from capital expenditure (CapEx) to operational expenditure (OpEx) is seen as a key factor for many organizations as their requirement to make significant purchases of systems and resources is minimized. Given the constant evolution of technology and computing power, memory, capabilities, and functionality, many traditional systems purchased lose value almost instantly.
■■ Factor in time and efficiencies: Given that organizations rarely acquire used technology or servers, almost all purchases are of new and recently developed technology. But it's not just technology investment savings. Time and efficiencies achieved can be the greatest savings achieved when utilizing cloud computing.
■■ Include depreciation: When you purchase a new car, the value deteriorates the moment you drive the car off the showroom floor. The same applies for IT, only with newer and more desirable technologies and models being released every few months or years. This analogy clearly highlights why so many organizations are now opting to lease cloud services as opposed to constantly investing in technologies that become outdated in relatively short periods.
■■ Reduction in maintenance and configuration time: Remember all those days, weeks, months, and years spent maintaining, operating, patching, updating, supporting, engineering, rebuilding, and generally making sure everything needed was done to the systems and applications required by the business users? Given that the CSP now handles a large portion of those duties (if not all, depending on which cloud service you are using), the ability to free up, utilize, and reallocate resources to other technology or related tasks could prove invaluable.
■■ Shift in focus: Technology and business personnel being able to focus on the key elements of their role, instead of the daily firefighting and responding to issues and technology components, comes as a welcome change to those professionals serious about their functions.
■■ Utilities costs: Outside of the technology and operational elements, from a utilities cost perspective, massive savings can be achieved with the reduced requirement for power, cooling, support agreements, data center space, racks, cabinets, and so on. Large organizations that have migrated big portions of their data center components to cloud-based environments have reported tens of thousands to hundreds of thousands in direct savings from the utilities elements. Green IT is very much at the fore of many global organizations, and cloud computing plays toward that focus in a strong way.
■■ Software and licensing costs: Software and relevant licensing costs present a major cost saving as well, because you only pay for the licensing used versus the bulk or enterprise licensing levels of traditional non-cloud-based infrastructure models.
■■ Pay per usage: As outlined by the CapEx versus OpEx discussion earlier in this section, cloud computing gives businesses a new and clear benefit: pay per usage. In terms of traditional IT functions, when systems and infrastructure assets were acquired, they were seen as a "necessary or required spend" for the organization; however, with cloud computing, they can now be monitored, categorized, and billed to specified functions or departments based on usage. This is a significant win and driver for IT departments because it releases pressure to reduce spending and allows for billing of usage for relevant cost bases directly to those who use the services, as opposed to absorbing the costs as a business requirement. With departments and business units now able to track costs and usage, it is easy to work out the amount of money spent versus the amount saved compared with traditional computing.
■■ Other factors: What about new technologies, new or revised roles, legal costs, contract and SLA negotiations, additional governance requirements, training required, CSP interactions, and reporting? All these may impact and alter the price you see versus the price you pay, otherwise known as the total cost of ownership (TCO).

SSAE

SAS70 is now SSAE 16/SOC1

Contract Negotiation

SaaS: Can happen
PaaS: Low chance
IaaS: Rare

Information security management system (ISMS)

A set of policies and procedures for systematically managing an organization's sensitive data. The goal of an ISMS is to minimize risk and ensure business continuity by proactively limiting the impact of a security breach.

Copyright

Tangible expression of creative works.

Cloud Carrier

The intermediary who provides connectivity and transport of cloud services between cloud providers and cloud consumers.

General Data Protection Regulation (GDPR)

The new GDPR is directly binding on any corporation that processes the data of EU citizens, and will be adjudicated by the data supervisory authorities or the courts of the member states that have the closest relationship with the individuals or the entities on both sides of the dispute.
Cross-border Data Transfer Restrictions: The transfer of personal data outside the EU/EEA to a country that does not offer a similar range of protection of personal data and privacy rights is prohibited. To prove that it will be offering the "adequate level of protection" required, a company may use one of several methods, such as executing Standard Contractual Clauses (SCC), signing up to the EU-US Privacy Shield, obtaining certification of Binding Corporate Rules (BCRs), or complying with an approved industry Code of Conduct or approved certification mechanism. In rare cases, the transfer might be effected with the explicit, informed consent of the data subject, or if other exceptions apply.

Data Archiving

Tips to consider: Archive location, backup process, format of data.

BCP/DR

Tips: 1. Distributed, remote processing and storage of data. 2. Fast replication. 3. Regular backups offered by cloud providers.

IaaS Security

VM Attacks: Active or inactive VMs are prone to attacks, including the traditional ones that affect physical servers.
Virtual Network: Contains virtual switch software that controls the movement of traffic between the virtual network interface cards (NICs) of the installed VMs and the physical host. [See "network isolation" for more tips.]
Hypervisor Attacks: Attackers target the lower levels of the system; compromising the hypervisor potentially allows an attacker to reach the other VMs installed on the system.
VM-Based Rootkit: These rootkits act by inserting a malicious hypervisor on the fly or modifying the installed hypervisor to gain control over the host workload.
Virtual Switch Attacks: Vulnerable to a wide range of layer 2 attacks, including manipulation or modification of the virtual switch's configuration, VLANs and trust zones, and ARP tables. [A virtual switch requires more bandwidth than a physical switch, and a damaged cable could take down several VMs, so it is important to use network isolation.]
DoS Attacks: (External threats, or internal threats that may stem from a misconfiguration.) Allow a single VM to consume all available resources. Hypervisors prevent 100% utilization of resources; a properly configured system shows when resources are properly allocated, and if not, a reboot of the VM may be required.
Colocation: Multiple VMs residing on a single server and sharing the same resources increase the attack surface and the risk of VM-to-VM or VM-to-hypervisor compromise.
Multitenancy: Different users in the cloud share the same apps and physical hardware to run their VMs; this can sometimes lead to information leakage.
Loss of Control: Users are not aware of the location of their data and services, while CSPs host and run VMs without knowing their contents.
Network Topology: Cloud architecture is dynamic because existing workloads change over time with the creation and removal of VMs. In addition, the ability of VMs to migrate from one host to another leads to non-predefined network topologies.
Logical Network Separation: Isolation alongside the hypervisor remains a key and fundamental activity to reduce external sniffing, monitoring, and interception of communications within the relevant segments. VLANs, NATs, bridging, and segregation provide viable options to ensure the overall security posture remains strong, flexible, and constant, as opposed to other mitigation controls that may affect overall performance.
No Physical Endpoints: Due to server and network virtualization, the number of physical endpoints (such as switches, servers, and NICs) is reduced. These physical endpoints are traditionally used in defining, managing, and protecting IT assets.
Single Point of Access: Hosts have a limited number of access points (NICs) available to all VMs. This represents a critical security vulnerability: compromising these access points opens the door to compromising the VMs, the hypervisor, or the virtual switch.

Storage - IaaS

Volume or object. • Volume storage (block storage): Includes volumes/data stores attached to IaaS instances, usually a virtual hard drive; should provide redundancy. • Object storage: Example: Dropbox. Used for write-once, read-many data; not suitable for applications like databases. Independent of the virtual machine. • Because of varying laws and regulations, customers should always know where their physical data is stored and confirm that it is stored in compliance with their needs.

CIS/SANS Critical Security Controls

Actively manage (inventory, track, and correct) all software on the network so that only authorized software is installed and can execute, and so that unauthorized and unmanaged software is found and prevented from installation or execution.

security event

an event that has a negative outcome affecting the confidentiality, integrity, or availability of an organization's data

Request for proposal (RFP)

is a document that solicits proposals, often through a bidding process, issued by an agency or company interested in procuring a commodity, service, or valuable asset; it invites potential suppliers to submit business proposals.

microsegmentation

Microsegmentation is used to create isolated virtual network segments, which reduces the blast radius of an attack and limits the spread of a possible compromise.

Oversubscription

occurs when more users are connected to a system than can be fully supported at the same time. Networks and servers are almost always designed with some amount of oversubscription, on the assumption that users do not all need the service simultaneously. If they do, delays are certain and outages are possible. Oversubscription is permissible on general-purpose LANs, but you should not use an oversubscribed configuration for iSCSI. Best practice:
■■ Have a dedicated local area network (LAN) for iSCSI traffic.
■■ Do not share the storage network with other network traffic such as management, fault tolerance, or vMotion/Live Migration.
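The capacity math behind oversubscription can be sketched in a few lines; the port counts and link speeds below are made-up illustrations, not a recommendation:

```python
def oversubscription_ratio(subscribed_bandwidth_gbps: float,
                           uplink_capacity_gbps: float) -> float:
    """Ratio of total subscribed demand to actual uplink capacity.

    A ratio above 1.0 means the link is oversubscribed: if every
    subscriber transmits at full rate simultaneously, delays or
    outages are likely.
    """
    return subscribed_bandwidth_gbps / uplink_capacity_gbps

# Example: 48 access ports at 1 Gbps each, sharing a single 10 Gbps uplink
ratio = oversubscription_ratio(48 * 1.0, 10.0)
print(f"oversubscription ratio: {ratio:.1f}:1")
```

For a general-purpose LAN a ratio like this may be acceptable; for an iSCSI storage network the target is effectively 1:1 or better.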

Virtual Machine - Live Migration

the ability to transition a virtual machine between hypervisors on different host computers without halting the guest operating system. This, along with other features provided by virtual machine monitor environments to facilitate systems management, also increases software size and complexity and potentially adds other areas to target in an attack. During live VM migration, data travels in clear text, allowing a man-in-the-middle attack on a VM's hypervisor. To deal with this problem, both the VM storage data and the control messages of the live migration protocol need to be encrypted.

DATABASE SECURITY

• DAM (Database Activity Monitoring): Captures and records all SQL activity in real time or near real time; can prevent malicious commands from executing on a server
• FAM (File Activity Monitoring): Monitors and records all activity for a specific file repository and can generate alerts on policy violations
• DLP: Data Loss Prevention systems

PERIMETER SECURITY

• Deter • Detect • Delay • Deny

Datacenter Location

- Could be a challenge to getting redundant power and communications utility connections if located in the middle of a rural area that is served by only one ISP etc.

compute abstraction types

- Virtual Machines = instances in cloud computing, since they are created (or cloned) from a base image. The Virtual Machine Manager (hypervisor) abstracts an operating system from the underlying hardware. Modern hypervisors can tie into underlying hardware capabilities now commonly available on standard servers (and workstations) to reinforce isolation while supporting high-performance operations.
- Containers = code execution environments that run within an operating system (for now), sharing and leveraging the resources of that operating system [a container platform must always include the execution environment, orchestration & scheduling, and a repository].
- Platform-based workloads = workloads running on a shared platform that aren't virtual machines or containers, such as logic/procedures running on a shared database platform.
- Serverless computing = a situation where the cloud user doesn't manage any of the underlying hardware or virtual machines, and just accesses exposed functions.

Storage Encryption

Basic Storage-Level Encryption: Where storage-level encryption is utilized, the encryption engine is located at the storage management level, with the keys usually held by the CSP. The engine encrypts data written to the storage and decrypts it when exiting the storage (that is, for use).
Volume Storage Encryption: Requires that the encrypted data reside on volume storage. This is typically done through an encrypted container, which is mapped as a folder or volume. Two methods:
■■ Instance-based encryption: The encryption engine is located on the instance itself. Keys can be guarded locally but should be managed external to the instance.
■■ Proxy-based encryption: The encryption engine runs on a proxy instance or appliance.
Object Storage Encryption (Dropbox etc.; typically server-side, storage-level encryption):
■■ File-level encryption: Examples include IRM and DRM solutions, both of which can be effective when used in conjunction with file hosting and sharing services that typically rely on object storage. The encryption engine is commonly implemented at the client side and preserves the format of the original file.
■■ Application-level encryption: The encryption engine resides in the application that is utilizing the object storage. It can be integrated into the application component or into a proxy that is responsible for encrypting the data before it goes to the cloud. The proxy can be implemented on the customer gateway or as a service residing at the external provider.
Database Encryption: The following options should be understood:
■■ File-level encryption: Database servers typically reside on volume storage. For this deployment, you encrypt the volume or folder of the database, with the encryption engine and keys residing on the instances attached to the volume. External file system encryption protects from media theft, lost backups, and external attack, but does not protect against attacks with access to the application layer, the instance's OS, or the database itself.
■■ Transparent encryption: Many database management systems can encrypt the entire database or specific portions, such as tables. The encryption engine resides within the database, and it is transparent to the application. Keys usually reside within the instance, although processing and managing them may also be offloaded to an external Key Management Service (KMS). This encryption can provide effective protection from media theft, backup system intrusions, and certain database- and application-level attacks.
■■ Application-level encryption: The encryption engine resides at the application that is utilizing the database.
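To make the placement of the encryption engine concrete, here is a deliberately toy sketch of proxy-based (client-side) encryption in front of an object store. The SHA-256 keystream "cipher" is for illustration only and is not secure (a real system would use a vetted algorithm such as AES-GCM), and the store and key names are invented:

```python
import hashlib

class ToyEncryptionProxy:
    """Illustrative only: a client-side proxy whose encryption engine sits
    between the application and object storage, so the provider never sees
    plaintext. The 'cipher' is a toy SHA-256 keystream XOR, NOT real
    cryptography."""

    def __init__(self, key: bytes):
        self._key = key  # in practice, managed by an external KMS

    def _keystream(self, length: int) -> bytes:
        # Derive a deterministic byte stream from the key (toy construction).
        out, counter = b"", 0
        while len(out) < length:
            out += hashlib.sha256(self._key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return out[:length]

    def encrypt(self, plaintext: bytes) -> bytes:
        ks = self._keystream(len(plaintext))
        return bytes(p ^ k for p, k in zip(plaintext, ks))

    decrypt = encrypt  # XOR with the same keystream is its own inverse

store = {}  # stand-in for an object storage bucket
proxy = ToyEncryptionProxy(key=b"customer-held-key")
store["report.txt"] = proxy.encrypt(b"quarterly numbers")
assert store["report.txt"] != b"quarterly numbers"   # provider holds only ciphertext
assert proxy.decrypt(store["report.txt"]) == b"quarterly numbers"
```

The point of the sketch is where the engine lives, not the cipher: the object store only ever receives ciphertext, and the key stays on the customer side.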

Serverless Computing

The CSP is responsible for the levels below the serverless level; the customer is in charge of configuring the tool. Serverless computing is the extensive use of certain PaaS capabilities to such a degree that all or some of an application stack runs in a cloud provider's environment without any customer-managed operating systems, or even containers. "Serverless computing" is a bit of a misnomer since there is always a server running the workload someplace, but those servers and their configuration and security are completely hidden from the cloud user. The consumer only manages settings for the service, and not any of the underlying hardware and software stacks. Serverless includes services such as: • Object storage • Cloud load balancers • Cloud databases • Machine learning • Message queues • Notification services • Code execution environments (these are generally restricted containers where a consumer runs uploaded application code) • API gateways • Web servers. Key Issues: • Serverless places a much higher security burden on the cloud provider. Choosing your provider and understanding security SLAs and capabilities is absolutely critical. • Using serverless, the cloud user will not have access to commonly used monitoring and logging levels, such as server or network logs. Applications will need to integrate more logging, and cloud providers should provide the necessary logging to meet core security and compliance requirements. • Although the provider's services may be certified or attested for various compliance requirements, not every service will necessarily match every potential regulation. Providers need to keep compliance mappings up to date, and customers need to ensure they only use services within their compliance scope. • There will be high levels of access to the cloud provider's management plane, since that is the only way to integrate and use the serverless capabilities.
• Serverless can dramatically reduce the attack surface and attack pathways, and integrating serverless components may be an excellent way to break links in an attack chain, even if the entire application stack is not serverless. • Any vulnerability assessment or other security testing must comply with the provider's terms of service. Cloud users may no longer have the ability to directly test applications, or must test with a reduced scope, since the provider's infrastructure is now hosting everything and can't distinguish between legitimate tests and attacks. • Incident response may also be complicated and will definitely require changes in process and tooling to manage a serverless-based incident.
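As a concrete illustration of the division of responsibility, a minimal AWS Lambda-style function handler might look like the following. Everything below this function (servers, OS, runtime patching, scaling) is the provider's responsibility; the customer supplies and secures only the function and its configuration. The event shape here is an invented example:

```python
import json

def handler(event, context=None):
    """AWS Lambda-style entry point: the platform invokes this function
    with an event payload; the customer never manages the server it runs on.
    The 'name' field and response shape are illustrative."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally you can invoke it directly; in production the platform does.
resp = handler({"name": "CCSP"})
print(resp["body"])
```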

Hardening

Closed unused ports, delete services that aren't needed, update & patch system, password policy, remove default passwords, limit physical access, strictly control admin access.....

API Integration

Cloud providers and platforms will also often offer Software Development Kits (SDKs) and Command Line Interfaces (CLIs) to make integrating with their APIs easier.
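A hedged sketch of what an SDK layers on top of a raw API: the client class, base URL, and endpoint below are hypothetical, but they show the URL building, auth headers, and serialization an SDK normally handles on your behalf:

```python
import urllib.parse

class ExampleCloudClient:
    """Minimal sketch of an SDK client. 'api.example-cloud.com' and the
    '/instances' endpoint are invented for illustration."""

    def __init__(self, api_token, base_url="https://api.example-cloud.com/v1"):
        self.base_url = base_url
        self.headers = {
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        }

    def build_request(self, method, path, params=None):
        """Return (method, url, headers). A real SDK would send this with
        an HTTP library and add retries, pagination, and error mapping."""
        url = f"{self.base_url}/{path.lstrip('/')}"
        if params:
            url += "?" + urllib.parse.urlencode(params)
        return method, url, dict(self.headers)

client = ExampleCloudClient(api_token="token-123")
method, url, headers = client.build_request("GET", "/instances", {"state": "running"})
print(method, url)
```

A CLI is typically a thin wrapper over the same calls, mapping flags to request parameters.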

Composite Services

Cloud services themselves can be composed through nesting and layering with other cloud services. For example, a public SaaS provider could build its services upon those of a PaaS or IaaS cloud. The level of availability of the SaaS cloud would then depend on the availability of those services. If the percent availability of a support service drops, the overall availability suffers proportionally
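The proportional effect can be shown with simple arithmetic: the availabilities of nested services multiply, so the composite figure is always at or below the weakest layer. A small sketch, with illustrative numbers:

```python
def composite_availability(*service_availabilities):
    """Overall availability of a nested service chain: if a SaaS runs on
    a PaaS which runs on an IaaS, every layer must be up at once, so the
    individual availabilities multiply."""
    result = 1.0
    for a in service_availabilities:
        result *= a
    return result

# A SaaS at 99.9% built on a supporting service at 99.5%:
overall = composite_availability(0.999, 0.995)
print(f"{overall:.4%}")  # lower than either layer alone
```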

Data Lifecycle

Create > Store > Use > Share > Archive > Destroy The next step identifies the functions that can be performed with the data, by a given actor (person or system) and a particular location. Functions: There are three things we can do with a given datum: • Read. View/read the data, including creating, copying, file transfers, dissemination, and other exchanges of information. • Process. Perform a transaction on the data; update it; use it in a business processing transaction, etc. • Store. Hold the data (in a file, database, etc.).
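One way to picture the actor/location/function mapping is as a lookup table; the actors, locations, and policy entries below are invented purely for illustration:

```python
# Toy policy: which lifecycle functions (read / process / store) a given
# actor may perform at a given location.
POLICY = {
    ("analyst", "corporate-network"): {"read", "process"},
    ("backup-service", "cloud-region-eu"): {"store"},
    ("admin", "corporate-network"): {"read", "process", "store"},
}

def is_allowed(actor, location, function):
    """Return True if the (actor, location) pair is permitted the function."""
    return function in POLICY.get((actor, location), set())

assert is_allowed("analyst", "corporate-network", "read")
assert not is_allowed("analyst", "corporate-network", "store")
assert not is_allowed("analyst", "cloud-region-eu", "read")  # unknown location
```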

Federal Information Processing Standards (FIPS)

FIPS 199. This standard, entitled Standards for Security Categorization of Federal Information and Information Systems, provides a common framework and method for categorizing information and information systems to ensure that adequate levels of information security are provided, which are commensurate with the level of risk. The resulting security categorization feeds into other activities such as security control selection, privacy impact analysis, and critical infrastructure analysis. FIPS 200. This standard, entitled Minimum Security Requirements for Federal Information and Information Systems, directs agencies to meet the identified minimum security requirements for federal information and information systems by selecting the appropriate security controls and assurance requirements described in NIST Special Publication (SP) 800-53 Revision 3.

Gramm-Leach-Bliley Act (GLBA)

Federal law enacted in the United States to control the ways that financial institutions deal with the private information of individuals. -Security and Privacy matters. -Creation of formal info sec. program (requirement passed down).

SOAP [performs operations through a more standardized set of messaging patterns, only uses XML]

For instance, if you need more robust security, SOAP's support for WS-Security can come in handy. It offers some additional assurances for data privacy and integrity. It also provides support for identity verification through intermediaries rather than just point-to-point, as provided by SSL (which is supported by both SOAP and REST). Another advantage of SOAP is that it offers built-in retry logic to compensate for failed communications. REST, on the other hand, doesn't have a built-in messaging system. If a communication fails, the client has to deal with it by retrying. There's also no standard set of rules for REST. This means that both parties (the service and the consumer) need to understand both content and context. Other benefits of SOAP: its use of the standard HTTP protocol makes it easier to operate across firewalls and proxies without modifications to the SOAP protocol itself. But because it uses the complex XML format, it tends to be slower compared to middleware such as ICE and CORBA. Additionally, while it's rarely needed, some use cases require greater transactional reliability than what can be achieved with HTTP (which limits REST in this capacity). If you need ACID-compliant transactions, SOAP is the way to go. In some cases, designing SOAP services can actually be less complex compared to REST. For web services that support complex operations, requiring content and context to be maintained, designing a SOAP service requires less coding in the application layer for transactions, security, trust, and other elements. SOAP is highly extensible through other protocols and technologies. In addition to WS-Security, SOAP supports WS-Addressing, WS-Coordination, WS-ReliableMessaging, and a host of other web services standards, a full list of which you can find on W3C.
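Since REST leaves retry handling to the client (unlike SOAP's built-in reliability extensions), REST client code typically adds its own retry loop. A minimal sketch with exponential backoff, exercised against a simulated flaky call rather than a real endpoint:

```python
import time

def call_with_retry(operation, max_attempts=4, base_delay=0.05):
    """Call operation(), retrying on ConnectionError with exponential
    backoff. A stand-in for the retry logic a REST client must supply."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulated flaky endpoint that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "200 OK"

print(call_with_retry(flaky))
```

Real clients usually also honor HTTP semantics (e.g., only retrying idempotent methods) and add jitter to the backoff.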

Business Requirements

Gather info about: inventory, value of assets, and criticality.

Power - Datacenter

Generators: roughly 12 hours of fuel; note that the generator transfer switch should enable backup power before the UPS is drained. UPS: may use software for a graceful shutdown. Another feature is that a UPS can provide line conditioning, i.e., adjusting power so that it's optimized for devices. Liquid propane: does not spoil.

ITIL

Group of documents that are used in implementing a framework for IT service management.

NIST 800-37

Guide for implementing the Risk Management Framework (RMF)

USA - State Department/Commerce

Handle the Export Administration Regulations (EAR), which serve as a "control" for technology exports. The International Traffic in Arms Regulations (ITAR) is a "program" within these agencies.

Trusting Cloud Provider Requires

Ensure that service arrangements have sufficient means to allow visibility into the security and privacy controls and processes employed by the cloud provider, and their performance over time. Establish clear, exclusive ownership rights over data. Institute a risk management program that is flexible enough to adapt to the constantly evolving and shifting risk landscape for the lifecycle of the system. Continuously monitor the security state of the information system to support on-going risk management decisions.

Data fluidity

Enterprises are often required to prove that their security compliance is in accord with regulations, standards, and auditing practices, regardless of the location of the systems at which the data resides. Data is fluid in cloud computing and may reside in on-premises physical servers, on-premises VMs, or off-premises VMs running on cloud computing resources. This requires some rethinking on the part of auditors and practitioners alike.

Sarbanes Oxley Act (SOX)

Legislation enacted to protect shareholders and the general public from accounting errors and fraudulent practices in the enterprise. What led to SOX: bad BOD oversight, lack of independent controls, poor financial controls....

Online Backup

Leverages the Internet and cloud computing to create an attractive off-site storage solution with little hardware requirements for any business of any size.

Cloud Testing

Load and performance testing conducted on the applications and services provided via cloud computing — particularly the capability to access these services — in order to ensure optimal performance and scalability under a wide variety of conditions.

Cloud App (Cloud Application)

Short for cloud application, cloud app is the phrase used to describe a software application that is never installed on a local computer. Instead, it is accessed via the Internet.

Baseline

Should cover as many systems in the organization as possible. Note: Deviations from baseline should be documented. Note 2: Baselines should only change if numerous change requests come in.

OS Monitoring (Performance)

Should include: Disk space, Disk I/O and CPU usage...

micro-service architectures

Since cloud doesn't require the consumer to optimize the use of physical servers, a requirement that often results in deploying multiple application components and services on a single system, developers can instead deploy more, smaller virtual machines, each dedicated to a function or service. This reduces the attack surface of the individual virtual machines and supports more granular security controls.

Networking Model

Traditional Networking Model The traditional model is a layered approach with physical switches at the top layer and logical separation at the hypervisor level. This model allows for the use of traditional network security tools. There may be some limitation on the visibility of network segments between VMs. Converged Networking Model The converged model is optimized for cloud deployments and utilizes standard perimeter protection measures. The underlying storage and IP networks are converged to maximize the benefits for a cloud workload. This method facilitates the use of virtualized security appliances for network protection. You can think of a converged network model as being a super network, one that is capable of carrying a combination of data, voice, and video traffic across a single network that is optimized for performance.

Privacy Level Agreement (PLA).

The PLA, as defined by the CSA, does the following: ■■ Provides a clear and effective way to communicate the level of personal data protection offered by a service provider ■■ Works as a tool to assess the level of a service provider's compliance with data protection legislative requirements and leading practices ■■ Provides a way to offer contractual protection against possible financial damages due to lack of compliance

PII

Unlike many other parts of the world, the USA doesn't have a comprehensive federal privacy law protecting the rights of its citizens.

iSCSI Implementation Considerations

The following are security considerations when implementing iSCSI:
■■ Private network: iSCSI storage traffic is transmitted in an unencrypted format across the LAN. It is therefore considered a best practice to use iSCSI on trusted networks only and to isolate the traffic, either on its own separate physical switches or on a dedicated VLAN (IEEE 802.1Q); all iSCSI-array vendors agree this is good practice.
■■ Encryption: iSCSI supports several types of security. IP Security (IPSec) is used for security at the network or packet-processing layer of network communication. Internet Key Exchange (IKE) is an IPSec standard protocol used to ensure security for virtual private networks (VPNs).
■■ Authentication: Numerous authentication methods are supported with iSCSI:
■■ Kerberos: A network authentication protocol designed to provide strong authentication for client/server applications by using secret-key cryptography. The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection. After a client and server have used Kerberos to prove their identities, they can encrypt all their communications to ensure privacy and data integrity as they go about their business.
■■ Secure remote password (SRP): A secure password-based authentication and key-exchange protocol. SRP exchanges a cryptographically strong secret as a by-product of successful authentication, which enables the two parties to communicate securely.
■■ Simple public-key mechanism (SPKM1/2): Provides authentication, key establishment, data integrity, and data confidentiality in an online distributed application environment using a public-key infrastructure. SPKM can be used as a drop-in replacement by any application that uses security services through Generic Security Service Application Program Interface (GSSAPI) calls. The use of a public-key infrastructure allows digital signatures supporting nonrepudiation to be employed for message exchanges.
■■ Challenge handshake authentication protocol (CHAP): Used to periodically verify the identity of the peer using a three-way handshake. This is done upon initial link establishment and may be repeated any time after the link has been established. The steps involved in using CHAP:
1. After the link establishment phase is complete, the authenticator sends a challenge message to the peer.
2. The peer responds with a value calculated using a one-way hash function.
3. The authenticator checks the response against its own calculation of the expected hash value. If the values match, the authentication is acknowledged; otherwise the connection should be terminated.
4. At random intervals, the authenticator sends a new challenge to the peer and repeats steps 1 to 3.
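The CHAP exchange above can be sketched directly from RFC 1994, where the peer's response is an MD5 hash over the message identifier, the shared secret, and the challenge; the secret itself never crosses the wire:

```python
import hashlib
import hmac
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Per RFC 1994, the CHAP response value is MD5(id || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"shared-secret"  # provisioned on both peers out of band

# 1. Authenticator sends a random challenge (with an identifier).
identifier, challenge = 1, os.urandom(16)

# 2. Peer responds with the one-way hash over id + secret + challenge.
response = chap_response(identifier, secret, challenge)

# 3. Authenticator computes the same hash and compares; match = acknowledged.
expected = chap_response(identifier, secret, challenge)
assert hmac.compare_digest(response, expected)

# 4. At random intervals, the authenticator repeats with a fresh challenge.
```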

storage controllers configuration

The following should be considered when configuring storage controllers: 1. Turn off all unnecessary services, such as web interfaces and management services that will not be needed or used. 2. Validate that the controllers can meet the estimated traffic load based on vendor specifications and testing (1Gbps | 10Gbps | 16Gbps | 40Gbps). 3. Deploy a redundant failover configuration such as a NIC team. 4. Consider deploying a multipath solution. 5. Change default administrative passwords for configuration and management access to the controller.

NIST "Framework for Improving Critical Infrastructure Cybersecurity"

The framework is composed of three parts: ■■ Framework Core: Cybersecurity activities and outcomes divided into five functions: identify, protect, detect, respond, and recover ■■ Framework Profile: To help the company align activities with business requirements, risk tolerance, and resources ■■ Framework Implementation Tiers: To help organizations categorize where they are with their approach Building from those standards, guidelines, and practices, the framework provides a common taxonomy and mechanism for organizations

Orchestration

The goal of cloud orchestration is to automate the configuration, coordination, and management of software and software interactions. The process involves automating the workflows required for service delivery.

Storage Clusters

The use of two or more storage servers working together to increase performance, capacity, or reliability. Clustering distributes workloads to each server, manages the transfer of workloads between servers, and provides access to all files from any server regardless of the physical location of the file. Storage clusters should be designed to do the following: ■ Meet the required service levels as specified in the SLA ■ Provide for the ability to separate customer data in multi-tenant hosting environments ■ Securely store and protect data through the use of availability, integrity, and confidentiality (AIC) mechanisms, such as encryption, hashing, masking, and multi-pathing.

Multifactor authentication (MFA)

There are multiple options for MFA, including: • Hard tokens are physical devices that generate one-time passwords for human entry or need to be plugged into a reader. These are the best option when the highest level of security is required. • Soft tokens work similarly to hard tokens but are software applications that run on a phone or computer. Soft tokens are also an excellent option but could be compromised if the user's device is compromised, and this risk needs to be considered in any threat model. • Out-of-band passwords are text or other messages sent to a user's phone (usually) and are then entered like any other one-time password generated by a token. Although also a good option, any threat model must consider message interception, especially with SMS. • Biometrics are growing as an option, thanks to biometric readers now commonly available on mobile phones. For cloud services, the biometric itself is a local protection that doesn't send biometric information to the cloud provider; instead, an attribute derived from it can be sent to the provider. As such, the security and ownership of the local device need to be considered.
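A soft token's one-time passwords are typically generated with TOTP (RFC 6238), which is just HOTP (RFC 4226) keyed to the current 30-second time window. A compact sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """RFC 6238 TOTP, as generated by a soft-token app: the moving factor
    is the current time window, so codes expire automatically."""
    return hotp(secret, int(time.time()) // period)

# RFC 4226 test vector: the ASCII secret below at counter 0 yields 755224.
assert hotp(b"12345678901234567890", 0) == "755224"
print(totp(b"12345678901234567890"))
```

Both sides (token app and verifier) hold the shared secret, so the verifier can recompute the code for the current window and compare.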

Cloud jump kit

These are the tools needed to investigate in a remote location (as with cloud-based resources).

Hybrid cloud

This cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds). ■■ Retain ownership and oversight of critical tasks and processes related to technology. ■■ Reuse previous investments in technology within the organization. ■■ Control the most critical business components and systems. ■■ Act as a cost-effective means of fulfilling noncritical business functions (utilizing public cloud components). The CSP and the customer are not on the same level of trust from a network perspective, which is a risk. Bastion or transit network: can be used to segregate both networks and keep the hybrid connection safe; it stands in the middle.

Private cloud

This cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on- or off-premises. ■■ Increased control over data, underlying systems, and applications ■■ Ownership and retention of governance controls ■■ Assurance over data location and removal of multiple jurisdiction legal and compliance requirements Virtual Private Cloud: Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, ...

Community cloud

This cloud infrastructure is provisioned for exclusive use by a specific community of organizations with shared concerns (e.g., mission, security requirements, policy, and compliance considerations).

Public cloud

This cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider. ■■ Easy and inexpensive setup because the provider covers hardware, application, and bandwidth costs ■■ Streamlined and easy-to-provision resources ■■ Scalability to meet customer needs ■■ No wasted resources—pay as you consume

Tort law:

This is a body of rights, obligations, and remedies that sets out reliefs for persons suffering harm as a result of the wrongful acts of others. These laws set out that the individual liable for the costs and consequences of the wrongful act is the individual who committed the act as opposed to the individual who suffered the consequences. Tort actions are not dependent on an agreement between the parties to a lawsuit. Tort law serves four objectives: ■ It seeks to compensate victims for injuries suffered by the culpable action or inaction of others. ■ It seeks to shift the cost of such injuries to the person or persons who are legally responsible for inflicting them. ■ It seeks to discourage injurious, careless, and risky behavior in the future. ■ It seeks to vindicate legal rights and interests that have been compromised, diminished, or emasculated.

Converged networking model

Optimized for cloud deployments and utilizes standard perimeter protection measures. The underlying storage and IP networks are converged to maximize the benefits for a cloud workload.

Storage - PaaS

PaaS utilizes the following data storage types: • Structured: Highly organized, such that inclusion in a relational database is seamless and readily searchable • Unstructured: Information that doesn't reside in a traditional row-column database, such as text, multimedia content, and email

Logical design

Part of the design phase of the SDLC in which all functional features of the system chosen for development in analysis are described independently of any computer platform

Datacenter Design

Physical security plans should look at: perimeter, vehicle approach, and fire suppression.
Environmental design: temperature and humidity (HVAC), air management, cable management.
Aisle strategy:
Hot version = Hot <-> Cold <-> Hot
Cold version = Cold <-> Hot <-> Cold
Tiers:
■■ Tier I: Basic Data Center Site Infrastructure
■■ Tier II: Redundant Site Infrastructure Capacity Components
■■ Tier III: Concurrently Maintainable Site Infrastructure
■■ Tier IV: Fault-Tolerant Site Infrastructure
Standards:
■■ Building Industry Consulting Service International Inc. (BICSI): The ANSI/BICSI 002-2014 standard covers cabling design and installation. http://www.bicsi.org
■■ The International Data Center Authority (IDCA): The Infinity Paradigm covers data center location, facility structure, and infrastructure and applications. http://www.idc-a.org/
■■ The National Fire Protection Association (NFPA): NFPA 75 and 76 standards specify how hot or cold aisle containment is to be carried out, and NFPA standard 70 requires the implementation of an emergency power-off button to protect first responders in the data center in case of emergency. http://www.nfpa.org/
The physical location of a CSP should be evaluated in relation to:
• Regions with a high rate of natural disasters (floods, landslides, seismic activity, etc.)
• Regions of high crime or social/political unrest
• Frequency of inaccessibility

Incident Response

Preparation > Detection & Analysis > Containment, Eradication & Recovery > Post-Mortem (Post-Incident Activity).

Security governance

Security governance is the overall approach of management toward the organization's risk management processes. The primary issue to remember when governing cloud computing is that an organization can never outsource responsibility for governance, even when using external providers. This is always true, cloud or not, but is useful to keep in mind when navigating cloud computing's concepts of shared responsibility models. What helps: contracts, supplier assessments, and compliance reporting.

Logging

Security logging/monitoring is more complex in cloud computing: • IP addresses in logs won't necessarily reflect a particular workload since multiple virtual machines may share the same IP address over a period of time, and some workloads like containers and serverless may not have a recognizable IP address at all. Thus, you need to collect some other unique identifiers in the logs to be assured you know what the log entries actually refer to. These unique identifiers need to account for ephemeral systems, which may only be active for a short period of time. • Logs need to be offloaded and collected externally more quickly due to the higher velocity of change in the cloud. You can easily lose logs in an auto-scale group if they aren't collected before the cloud controller shuts down an unneeded instance. • Logging architectures need to account for cloud storage and networking costs. For example, sending all logs from instances in a public cloud to an on-premises Security Information and Event Management (SIEM) system may be cost prohibitive, due to the additional internal storage and extra Internet networking fees.
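
A minimal sketch (field names are hypothetical) of a structured log entry that records stable workload identifiers instead of relying on a reusable IP address:

```python
import datetime
import json

def make_log_entry(event: str, instance_id: str, request_id: str) -> str:
    """Emit one structured log line keyed on identifiers that survive
    IP reuse and ephemeral, auto-scaled instances."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,
        "instance_id": instance_id,  # unique per workload, unlike the source IP
        "request_id": request_id,    # correlates entries across short-lived systems
    }
    return json.dumps(entry)

line = make_log_entry("login_failure", "i-0abc123", "req-42")
parsed = json.loads(line)
```

Shipping such lines to an external collector as soon as they are written avoids losing them when an auto-scale group retires the instance.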

Serverless Computing

Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity. It is a form of utility computing. Serverless computing still requires servers, hence it's a misnomer. The name "serverless computing" is used because the server management and capacity planning decisions are completely hidden from the developer or operator. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned servers at all.
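
As a toy illustration (the function shape and field names are invented, not any specific provider's API), a serverless handler is stateless and receives everything it needs in the event:

```python
def handler(event: dict, context: dict) -> dict:
    """Stateless event handler in the serverless style: no server to manage,
    and the provider bills only for the time this function actually runs."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello {name}"}

response = handler({"name": "cloud"}, {})
```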

Sherwood Applied Business Security Architecture

Sherwood Applied Business Security Architecture (SABSA) includes the following components, which can be used separately or together:
■■ Business Requirements Engineering Framework
■■ Risk and Opportunity Management Framework
■■ Policy Architecture Framework
■■ Security Services-Oriented Architecture Framework
■■ Governance Framework
■■ Security Domain Framework
■■ Through-Life Security Service Management and Performance Management Framework

Web Security Gateway

Used to intercept live traffic before it gets into a company's premises. Cloud customers use this to proxy web traffic to a web security gateway offered as a cloud service to protect against malicious web traffic.

Remote Kill Switch

Useful to reduce risks to a cloud environment resulting from the loss or theft of a device used for remote access.

Data dispersion (sometimes also known as data fragmentation or bit splitting)

Uses parity bits (storage resiliency), data chunks and encrypted chunks of data. Data dispersion is similar to a RAID solution, but it is implemented differently. Storage blocks are replicated to multiple physical locations across the cloud. In a private cloud, you can set up and configure data dispersion yourself. Users of a public cloud do not have the capability to set up and configure data dispersion, although their data may benefit from the CSP using data dispersion.
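
A simplified sketch of the parity idea using single-parity XOR (real implementations use erasure coding across many locations, and often encrypt the chunks):

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(data: bytes, n: int):
    """Split data into n equal chunks (zero-padded) plus one XOR parity chunk,
    so any single lost chunk can be rebuilt from the others."""
    size = -(-len(data) // n)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(n)]
    parity = reduce(xor, chunks)
    return chunks, parity

def recover(chunks: list, parity: bytes, lost: int) -> bytes:
    """Rebuild the lost chunk by XOR-ing the parity with the survivors."""
    survivors = [c for i, c in enumerate(chunks) if i != lost]
    return reduce(xor, survivors, parity)

chunks, parity = disperse(b"cloud data dispersion", 3)
rebuilt = recover(chunks, parity, lost=1)
```

Storing each chunk in a different physical location means no single site holds readable data, and a single site failure is survivable.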

immutable VMs

VM running from a master image:
-Security testing baked in.
-No in-place patching: instances are replaced from a freshly patched master image rather than patched individually.
-Consistent image creation.

The doctrine of the proper law

When a conflict of laws occurs, this determines in which jurisdiction the dispute will be heard, based on contractual language professing an express selection or a clear intention through a choice-of-law clause. If there is not an express selection stipulated, implied selection may be used to infer the intention and meaning of the parties from the nature of the contract and the circumstances involved.

disaster recovery strategies

When failures occur (critical that all the areas below are evaluated before sign-off):
1. On-prem to cloud
2. Cloud to same cloud
3. Cloud to different cloud provider
Vocabulary check:
■■ The recovery point objective (RPO) helps determine how much information must be recovered and restored. Another way of looking at RPO is to ask yourself, "How much data can the company afford to lose?"
■■ The recovery time objective (RTO) is a time measure of how fast you need each system to be up and running in the event of a disaster or critical failure.
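
The RPO ties directly to backup frequency; a back-of-the-envelope check (a sketch, not a full BC/DR calculation):

```python
def meets_rpo(backup_interval_min: int, rpo_min: int) -> bool:
    """Worst case, failure strikes just before the next backup, so the maximum
    data loss equals the backup interval; it must not exceed the RPO."""
    return backup_interval_min <= rpo_min

ok = meets_rpo(15, 60)    # True: 15-minute backups satisfy a 1-hour RPO
bad = meets_rpo(240, 60)  # False: 4-hourly backups lose up to 4 hours of data
```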

Vendor lock-out

When the cloud provider goes out of business, is acquired by another interest, or ceases operation for any reason.

Cloud Controls Matrix (CCM)

Lists cloud security controls and maps them to multiple security and compliance standards. The CCM can also be used to document security responsibilities.

Ancillary Data.

While the focus of attention in cloud computing is mainly on protecting application data, cloud providers also hold significant details about the accounts of cloud consumers that could be compromised and used in subsequent attacks. Payment information is one example; other, more subtle types of information can also be involved.

EU General Data Protection Regulation (GDPR)

Proposed in 2012 and adopted in 2016, the GDPR introduced many significant changes for data processors and controllers. The following may be considered some of the more significant changes: the concept of consent, transfers abroad, the right to be forgotten, establishment of the role of the "Data Protection Officer," access requests, home state regulation, and increased sanctions.

Cloud

1. Broad network access. 2. Resource pooling (CSP shares them amongst clients) 3. Rapid elasticity 4. On-demand self service 5. Measured service (provider measures or monitors the provision of services for various reasons, including billing, effective use of resources, or overall predictive planning.). 6. Cost 7. Potential risk reduction

Software Development Lifecycle Process for a Cloud Environment

1. Planning and requirements analysis 2. Defining 3. Designing 4. Developing 5. Testing. Once complete, the build can be managed with secure operations tools such as: Puppet - a configuration management system that allows you to define the state of your IT infrastructure and then automatically enforces the correct state. Chef - automates how you build, deploy, and manage your infrastructure; the Chef server stores your recipes as well as other configuration data.

Metastructure

The protocols and mechanisms that provide the interface between the infrastructure layer and the other layers. The glue that ties the technologies together and enables management and configuration.

Tort Law

A body of rights, obligations, and remedies that sets out reliefs for persons suffering harm as a result of the wrongful acts of others.

Criminal Law

A body of rules and statutes that defines conduct that is prohibited by the government and is set out to protect the safety and well-being of the public.

Software Defined Networking (SDN) - more effective than VLANs in the cloud

A broad and developing concept addressing the management of the various network components. The objective is to provide a control plane to manage network traffic on a more abstract level than through direct management of network components. SDN can offer much higher flexibility and isolation, such as multiple segregated, overlapping IP ranges for a virtual network on top of the same physical network. Implemented properly, and unlike standard VLANs, SDNs provide effective security isolation boundaries.

Hybrid Cloud Storage

A combination of public cloud storage and private cloud storage where some critical data resides in the enterprise's private cloud while other data is stored and accessible from a public cloud storage provider.

Domain Name System (DNS)

A hierarchical, distributed database that contains mappings of DNS domain names to various types of data, such as Internet Protocol (IP) addresses. DNS allows you to use friendly names, such as www.isc2.org, to easily locate computers and other resources on a TCP/IP-based network.

Domain Name System Security Extensions (DNSSEC)

A suite of extensions that adds security to the Domain Name System (DNS) protocol by enabling DNS responses to be validated. Specifically, DNSSEC provides origin authority, data integrity, and authenticated denial of existence.

Data label

Date data was created, data owner, date of destruction, source, handling restrictions, jurisdiction, confidentiality level, distribution limitations, access restrictions ... the value of the data is not included!

entitlement matrix

Determines access controls; enforcement will vary based on cloud provider capabilities. During this process the customer should evaluate different security claims and decide which standards should be applied to the apps and services being hosted with the CSP (supported by app assessments).
• Develop an entitlement matrix for each cloud provider and project, with an emphasis on access to the metastructure and/or management plane. An entitlement maps identities to authorizations and any required attributes (e.g., user x is allowed access to resource y when z attributes have designated values). We commonly refer to a map of these entitlements as an entitlement matrix. Entitlements are often encoded as technical policies for distribution and enforcement.
• Cloud users are responsible for maintaining the identity provider and defining identities and attributes.
• These should be based on an authoritative source.
• Distributed organizations should consider using cloud-hosted directory servers when on-premises options either aren't available or do not meet requirements.
Here's a real-world cloud example. The cloud provider has an API for launching new virtual machines. That API has a corresponding authorization to allow launching new machines, with additional authorization options for what virtual network a user can launch the VM within. The cloud administrator creates an entitlement that says that users in the developer group can launch virtual machines in only their project network and only if they authenticated with MFA. The group and the use of MFA are attributes of the user's identity. That entitlement is written as a policy that is loaded into the cloud provider's system for enforcement.
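
The developer-group example can be sketched as a tiny policy check (the data shapes are hypothetical; real providers encode entitlements in their own policy languages):

```python
def allowed(identity: dict, action: str, resource: dict, entitlements: list) -> bool:
    """Permit an action only if some entitlement row matches the identity's
    attributes (group, MFA) and the target resource (network)."""
    return any(
        e["action"] == action
        and e["group"] in identity["groups"]
        and (identity["mfa"] or not e["require_mfa"])
        and resource["network"] == e["network"]
        for e in entitlements
    )

# One row of an entitlement matrix: developers may launch VMs in their
# project network, but only when authenticated with MFA.
matrix = [{"action": "launch_vm", "group": "developers",
           "require_mfa": True, "network": "project-net"}]
dev = {"groups": ["developers"], "mfa": True}
```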

events versus incidents

An event is defined as a change of state that has significance for the management of an IT service or other CI. The term can also be used to mean an alert or notification created by an IT service, CI, or monitoring tool. Events often require IT operations staff to take actions and lead to incidents being logged.
An incident is defined as an unplanned interruption to an IT service or a reduction in the quality of an IT service. You should have a detailed incident management plan that includes the following:
■ Definitions of an incident by service type or offering
■ Customer and provider roles and responsibilities for an incident
■ Incident management process from detection to resolution
■ Response requirements
■ Media coordination
■ Legal and regulatory requirements such as data breach notification
Problem management: The objective of problem management is to minimize the impact of problems on the organization by identifying the root cause of the problem at hand. Problem management plays an important role in the detection of problems and the provision of solutions (workarounds and known errors) and prevents their reoccurrence.
■ A problem is the unknown cause of one or more incidents, often identified as a result of multiple similar incidents.
■ A known error is an identified root cause of a problem.
■ A workaround is a temporary way of overcoming technical difficulties (that is, incidents or problems).

Common Criteria for Information Technology Security Evaluation

The goal of CC certification is to assure customers that the products they are buying have been evaluated and that a vendor-neutral third party has verified the vendor's claims.
■■ Protection profiles: Define a standard set of security requirements for a specific type of product, such as a firewall, IDS, or unified threat management (UTM). Basically a document specifying security evaluation criteria to substantiate vendors' claims for a given family of information system products (a term used in Common Criteria).
■■ Security target: A document specifying security evaluation criteria to substantiate the vendor's claims for the product's security properties (a term used in Common Criteria).
■■ Evaluation assurance levels (EALs): Define how thoroughly the product is tested. EALs are rated using a sliding scale from 1-7, with 1 being the lowest-level evaluation and 7 being the highest.
■■ EAL1: Functionally tested
■■ EAL2: Structurally tested
■■ EAL3: Methodically tested and checked
■■ EAL4: Methodically designed, tested, and reviewed
■■ EAL5: Semiformally designed and tested
■■ EAL6: Semiformally verified design and tested
■■ EAL7: Formally verified design and tested

Privacy Act

Governs the collection, maintenance, use, and dissemination of information about individuals that is maintained in systems of records by federal agencies and can be retrieved by a personal identifier (e.g., name).

Identity brokers

handle federating between identity providers and relying parties (which may not always be a cloud service). They can be located on the network edge or even in the cloud in order to enable web-SSO.

Hub and spoke

Internal identity providers/sources communicate with a central broker or repository that then serves as the identity provider for federation to cloud providers.

iso 27001:2013 Domains

Specifies the requirements for an internal information security management system (ISMS); ISO 27002 provides the accompanying code of practice.

Chaos Engineering

is often used to help build resilient cloud deployments. Since everything cloud is API-based, Chaos Engineering uses tools to selectively degrade portions of the cloud to continuously test business continuity.
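
A toy stand-in for tools like Netflix's Chaos Monkey (the terminate callback and names are invented for illustration; a real tool would call the provider's API):

```python
import random

def chaos_terminate(instances: list, fraction: float, terminate, seed=None) -> list:
    """Pick a random subset of instances and terminate each one, so the team
    can continuously verify that the deployment heals itself."""
    rng = random.Random(seed)
    count = max(1, int(len(instances) * fraction))
    victims = rng.sample(instances, count)
    for instance in victims:
        terminate(instance)  # stand-in for a cloud provider API call
    return victims

killed = []
victims = chaos_terminate([f"i-{n}" for n in range(10)], 0.2, killed.append, seed=1)
```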

The Open Group Architecture Framework (TOGAF)

Is one of many frameworks available to the cloud security professional for developing an enterprise architecture. The following principles should be adhered to at all times for enterprise architecture:
■■ Define protections that enable trust in the cloud.
■■ Develop cross-platform capabilities and patterns for proprietary and open source providers.
■■ Facilitate trusted and efficient access, administration, and resiliency to the customer or consumer.
■■ Provide direction to secure information that is protected by regulations.
■■ Facilitate proper and efficient identification, authentication, authorization, administration, and auditability.
■■ Centralize security policy, maintenance operation, and oversight functions.
■■ Make access to information both secure and easy to obtain.
■■ Delegate or federate access control where appropriate.
■■ Ensure ease of adoption and consumption, supporting the design of security patterns.
■■ Make the architecture elastic, flexible, and resilient, supporting multitenant, multilandlord platforms.
■■ Ensure the architecture addresses and supports multiple levels of protection, including network, OS, and application security needs.

Infrastructure as code (IaC)

is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
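
A minimal sketch of the idea: the desired state lives in a machine-readable definition, and tooling reconciles actual state against it (field names here are hypothetical, not any specific tool's schema):

```python
def drift(desired: dict, actual: dict) -> dict:
    """Report keys whose actual value differs from the declared definition,
    as (actual, desired) pairs -- the gap a provisioning tool would close."""
    return {k: (actual.get(k), v) for k, v in desired.items() if actual.get(k) != v}

desired = {"instance_type": "m5.large", "count": 3, "tags": {"env": "prod"}}
actual = {"instance_type": "m5.large", "count": 2, "tags": {"env": "prod"}}
gap = drift(desired, actual)
```

Because the definition is plain data, it can be version-controlled, reviewed, and reapplied like any other code.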

HYPERVISOR

• Allows multiple OS to share a single hardware host, with the appearance of each host having exclusive use of resources

Cloud Security Models

• Conceptual models or frameworks include visualizations and descriptions used to explain cloud security concepts and principles, such as the CSA logical model in this document.
• Controls models or frameworks categorize and detail specific cloud security controls or categories of controls, such as the CSA CCM.
• Reference architectures are templates for implementing cloud security, typically generalized (e.g., an IaaS security reference architecture). They can be very abstract, bordering on conceptual, or quite detailed, down to specific controls and functions.
• Design patterns are reusable solutions to particular problems. In security, an example is IaaS log management. As with reference architectures, they can be more or less abstract or specific, even down to common implementation patterns on particular cloud platforms.

STANDARDS-BASED APPROACHES

• Few standards exist exclusively for cloud computing
• ISO 27001 looks to certify that the ISMS can address relevant risks with elements that are appropriate based on those risks
• ISO 27002 is the framework for best practice
• SOC I, II, III (Service Organization Control) defines a comprehensive approach to auditing and assesses the provider's controls and their effectiveness
• NIST 800-53: Goal is to ensure that appropriate security requirements and security controls are applied to all US federal government information and information systems
• Common Criteria
• FIPS 140 addresses uses of encryption and cryptography
• PCI DSS, HIPAA, and other regulations

IAM Federation Model

• Free-form: internal identity providers/sources (often directory servers) connect directly to cloud providers.
• Hub and spoke: internal identity providers/sources communicate with a central broker or repository that then serves as the identity provider for federation to cloud providers.
Directly federating internal directory servers in the free-form model raises a few issues:
• The directory needs Internet access. This can be a problem, depending on existing topography, or it may violate security policies.
• It may require users to VPN back to the corporate network before accessing cloud services.
• Depending on the existing directory server, and especially if you have multiple directory servers in different organizational silos, federating to an external provider may be complex and technically difficult.

Provisioning and Configuration

Rapid provisioning: Automatically deploying cloud systems based on the requested service/resources/capabilities.
Resource changing: Adjusting configuration/resource assignment for repairs, upgrades, and joining new nodes into the cloud.
Monitoring and reporting: Discovering and monitoring virtual resources, monitoring cloud operations and events, and generating performance reports.
Metering: Providing a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts).
SLA management: Encompassing the SLA contract definition (basic schema with the QoS parameters), SLA monitoring, and SLA enforcement according to defined policies.

Information gathering

Refers to the process of identifying, collecting, documenting, structuring, and communicating information from various sources in order to enable educated and swift decision making to occur.

Australian Privacy Act 1988

Regulates the handling of personal information about individuals. This includes the collection, use, storage, and disclosure of personal information, and access to and correction of that information.

Key Storage

Remote Key Management Service (KMS) - keys managed by the customer on-premises. Client-Side Key Management - customer in full control of keys; encryption and decryption key processing is done on the customer's side.
■■ Internally managed: In this method, the keys are stored on the virtual machine or application component that is also acting as the encryption engine. This type of key management is typically used in storage-level encryption, internal database encryption, or backup application encryption. This approach can be helpful for mitigating the risks associated with lost media.
■■ Externally managed: In this method, keys are maintained separate from the encryption engine and data. They can be on the same cloud platform, internally within the organization, or on a different cloud. The actual storage can be a separate instance (hardened especially for this specific task) or a hardware security module (HSM). When implementing external key storage, consider how the key management system is integrated with the encryption engine and how the entire lifecycle of key creation through retirement is managed.
■■ Managed by a third party: This is when a trusted third party provides key escrow services. Key management providers use specifically developed secure infrastructure and integration services for key management. You must evaluate any third-party key storage services provider that may be contracted by the organization to ensure that the risks of allowing a third party to hold encryption keys are well understood and documented.

Service Organization Controls 1 (SOC 1)

Reports on controls at service organizations relevant to user entities' internal control over financial reporting. Type I = point in time. Type II = period of time, controls tested.

Service Organization Controls 2 (SOC 2)

Reports on controls at a service organization relevant to security, availability, processing integrity, confidentiality, and privacy. Type I = point in time. Type II = period of time, controls tested. Most beneficial to the customer but closely guarded; deals with CIA. Note that the vendor would prefer sharing audit and performance data logs over giving out a SOC 2.

Storage (Object Based)

Stores data as objects in a volume, with labels and metadata.

Storage (CDN)

Stores data in caches of copied content near locations of high demand.

Storage (Database)

Stores data in fields, in a relational motif.

Information rights management (IRM)

Subset of DRM. IRM common capabilities:
* Persistent protection
* Dynamic policy control
* Automatic expiration
* Continuous audit trail
* Support for existing authentication infrastructure
Information rights management (IRM) is a subset of digital rights management (DRM): technologies that protect sensitive information from unauthorized access. It is sometimes referred to as E-DRM or Enterprise Digital Rights Management. Functionality such as preventing screenshots, disallowing the copying of data from the secure document to an insecure environment, and guarding the information from programmatic attack are key elements of an effective IRM solution.
Challenges: IRM requires that all users with data access have matching encryption keys. This requirement means a strong identity infrastructure is a must when implementing IRM, and the identity infrastructure should expand to customers, partners, and any other organizations with which data is shared.

PaaS Security

System and Resource Isolation-PaaS tenants should not have shell access to the servers running their instances (even when virtualized). The rationale behind this is to limit the chance and likelihood of configuration or system changes affecting multiple tenants. Where possible, administration facilities should be restricted to siloed containers to reduce this risk. User Access Management- Includes intelligence, administration, authentication and authorization. User Level Permissions-Each instance of a service should have its own notion of user-level entitlements (permissions). If the instances share common policies, appropriate countermeasures and controls should be enabled by the cloud security professional to reduce authorization creep or the inheritance of permissions over time.

Software container

Systems always include three key components: • The execution environment (the container). • An orchestration and scheduling controller (which can be a collection of multiple tools). • A repository for the container images or code to execute. • Together, these are the place to run things, the things to run, and the management system to tie them together. -------------------------------------- Regardless of the technology platform, container security includes: • Assuring the security of the underlying physical infrastructure (compute, network, storage). This is no different than any other form of virtualization, but it now extends into the underlying operating system where the container's execution environment runs. • Assuring the security of the management plane, which in this case are the orchestrator and the scheduler. • Properly securing the image repository. The image repository should be in a secure location with appropriate access controls configured. This is both to prevent loss or unapproved modification of container images and definition files, as well as to forestall leaks of sensitive data through unapproved access to the files. Containers run so easily that it's also important that images are only able to deploy in the right security context. • Building security into the tasks/code running inside the container. It's still possible to run vulnerable software inside a container and, in some cases this could expose the shared operating system or data from other containers. For example, it is possible to configure some containers to allow not merely access to the container's data on the file system but also root file system access. Allowing too much network access is also a possibility. These are all specific to the particular container platform and thus require securely configuring both the container environment and the images/container configurations themselves.

Cloud Taxonomy

Taxonomy is the science of categorization, or classification, of things based on a predefined system (cloud computing) . Level 1: Role, which indicates a set of obligations and behaviors as conceptualized by the associated actors in the context of cloud computing. Level 2: Activity, which entails the general behaviors or tasks associated to a specific role. Level 3: Component, which refer to the specific processes, actions, or tasks that must be performed to meet the objective of a specific activity. Level 4: Sub-component, which present a modular part of a component.

Datacenter Levels

Temperature - 60-75 degrees Fahrenheit; anything higher than 85 can damage equipment. Humidity - 40-60%, or risk static discharge.

Service Organization Controls 3 (SOC 3)

Used for marketing, less detailed and vendors prefer to give this one out. Attestation by auditor aka seal of approval.

Static Application Security Testing (SAST)

A set of technologies designed to analyze application source code, byte code and binaries for coding and design conditions that are indicative of security vulnerabilities. Examples: Source code review, whitebox testing, and highly skilled $$$ outside consultants.

Consensus Assessments Initiative Questionnaire (CAIQ)

A standard template for cloud providers to document their security and compliance controls.

ABAC over RBAC for cloud

Cloud platforms tend to have greater support for the Attribute-Based Access Control (ABAC) model for IAM, which offers greater flexibility and security than the Role-Based Access Control (RBAC) model. RBAC is the traditional model for enforcing authorizations and relies on what is often a single attribute (a defined role). ABAC allows more granular and context-aware decisions by incorporating multiple attributes, such as role, location, authentication method, and more.
• ABAC is the preferred model for cloud-based access management.
• When using federation, the cloud user is responsible for mapping attributes, including roles and groups, to the cloud provider and ensuring that these are properly communicated during authentication.
• Implement appropriate role-based access controls and strong authentication for all container and repository management.
Real-world example (attributes leading to backend policies in the cloud): the group and the use of MFA are attributes of the user's identity. That entitlement is written as a policy that is loaded into the cloud provider's system for enforcement.
ABAC is an advanced implementation of rule-BAC:
- uses policies that include multiple attributes for rules
- used by many software-defined networking applications (CloudGenix)

keyboard, video, mouse (KVM) switch

Access to hosts should be done via a secure keyboard, video, mouse (KVM) switch; for added security, access to KVM devices should require a checkout process. A secure KVM prevents data loss from the server to the connected computer. It also prevents insecure emanations. Two-factor authentication should be considered for remote console access. All access should be logged and routine audits conducted. A secure KVM meets the following design criteria:
■ Isolated data channels: Located in each KVM port, these make it impossible for data to be transferred between connected computers through the KVM.
■ Tamper-warning labels on each side of the KVM: These provide clear visual evidence if the enclosure has been compromised.
■ Housing intrusion detection: This causes the KVM to become inoperable and the LEDs to flash repeatedly if the housing has been opened.
■ Fixed firmware: It cannot be reprogrammed, preventing attempts to alter the logic of the KVM.
■ Tamper-proof circuit board: It's soldered to prevent component removal or alteration.
■ Safe buffer design: It does not incorporate a memory buffer, and the keyboard buffer is automatically cleared after data transmission, preventing transfer of keystrokes or other data when switching between computers.
■ Selective universal serial bus (USB) access: It only recognizes human interface device USB devices (such as keyboards and mice) to prevent inadvertent and insecure data transfer.
■ Push-button control: It requires physical access to the KVM when switching between connected computers.

Web Application Firewall (WAF)

An appliance, server plugin, or filter that applies a set of rules to an HTTP conversation. Generally, these rules cover common attacks such as cross-site scripting (XSS) and SQL injection.

Data Anonymization

Anonymization is the process of removing the indirect identifiers to prevent data analysis tools or other intelligent mechanisms from collating or pulling data from multiple sources to identify individual or sensitive information. The process of anonymization is similar to masking and includes identifying the relevant information to anonymize and choosing a relevant method for obscuring the data.
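
One common obscuring method is replacing indirect identifiers with salted one-way hashes (a sketch only; real anonymization must also weigh re-identification risk across combined fields):

```python
import hashlib

def anonymize(record: dict, indirect: set, salt: str = "per-dataset-salt") -> dict:
    """Replace indirect identifiers with salted hashes so rows can still be
    joined within the dataset but not easily linked back to a person."""
    return {
        k: hashlib.sha256((salt + str(v)).encode()).hexdigest()[:12] if k in indirect else v
        for k, v in record.items()
    }

masked = anonymize({"zip": "94103", "age": 42, "score": 7}, {"zip", "age"})
```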

iSCSI

Authentication methods supported with iSCSI:
1. Kerberos
2. Secure Remote Password (SRP)
3. Simple Public-Key Mechanism (SPKM1/2)
4. Challenge Handshake Authentication Protocol (CHAP)
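
CHAP (RFC 1994) is a challenge-response scheme: the target sends a random challenge and the initiator proves knowledge of the shared secret without ever sending it. A sketch of the response computation:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """RFC 1994 CHAP response: MD5(identifier || shared secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)          # target sends a fresh random challenge
secret = b"shared-secret"
response = chap_response(7, secret, challenge)  # initiator answers
# The target recomputes the same hash and compares to authenticate.
valid = response == chap_response(7, secret, challenge)
```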

Stored Communication Act

Enacted in the United States in 1986 as part of the Electronic Communications Privacy Act. It provides privacy protections for certain electronic communication and computing services from unauthorized access or interception.

Big data

Big data includes a collection of technologies for working with extremely large datasets that traditional data-processing tools are unable to manage. It's not any single technology but rather refers commonly to distributed collection, storage, and data-processing frameworks. The "3 Vs" are commonly accepted as the core definition of big data, although there are many other interpretations.
• High volume: a large size of data, in terms of number of records or attributes.
• High velocity: fast generation and processing of data, i.e., real-time or stream data.
• High variety: structured, semi-structured, or unstructured data.
There are three common components of big data, regardless of the specific toolset used:
• Distributed data collection: Mechanisms to ingest large volumes of data, often of a streaming nature. This could be as "lightweight" as web-click streaming analytics and as complex as highly distributed scientific imaging or sensor data. Not all big data relies on distributed or streaming data collection, but it is a core big data technology.
• Distributed storage: The ability to store the large data sets in distributed file systems (such as Google File System, Hadoop Distributed File System, etc.) or databases (often NoSQL), which is often required due to the limitations of non-distributed storage technologies.
• Distributed processing: Tools capable of distributing processing jobs (such as map reduce, spark, etc.) for the effective analysis of data sets so massive and rapidly changing that single-origin processing can't effectively handle them.
Other things to worry about:
- Security and privacy considerations
- Data collection
- Key management
- Security capabilities
- IAM
- PaaS
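
Distributed processing can be pictured with the classic map-reduce word count (a single-process toy; real frameworks shard both phases across many nodes):

```python
from collections import Counter

def map_phase(doc: str) -> Counter:
    """Each mapper independently counts words in its shard of the data."""
    return Counter(doc.split())

def reduce_phase(partials) -> Counter:
    """The reducer merges the partial counts into a final result."""
    total = Counter()
    for partial in partials:
        total += partial
    return total

docs = ["big data big", "data velocity"]
counts = reduce_phase(map_phase(d) for d in docs)
```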

CM Board (CMB)

Includes the IT department, the security office, and management.

SaaS Security

Data segregation: The segregation must be ensured not only at the physical level but also at the application level. The service should be intelligent enough to segregate the data from different users. A malicious user can use application vulnerabilities to hand-craft parameters that bypass security checks and access sensitive data of other tenants.
Data access and policies: When allowing and reviewing access to customer data, the key aspect to structuring a measurable and scalable approach begins with the correct identification, customization, implementation, and repeated assessment of the security policies for accessing data. The challenge from a CSP perspective is to offer a solution and service that is flexible enough to incorporate the specific organizational policies put forward by the organization, while also being positioned to provide a boundary and segregation among the multiple organizations and customers within a single cloud environment.
Web application security: The fundamental difference with cloud-based services versus traditional web applications is their footprint and the attack surface they present. In the same way that web application security assessments and code reviews are performed on applications prior to release, this becomes even more crucial when dealing with cloud services. Failure to carry out web application security assessments and code reviews may result in unauthorized access, corruption, or other integrity issues affecting the data, along with a loss of availability.

Network Security and Perimeter

Data and systems are the most important assets to protect: ■■ Physical environment security ensures that access to the cloud service is adequately distributed, monitored, and protected by the underlying physical resources within which the service is built. ■■ Logical network security controls consist of link, protocol, and application layer services.

Multi-tenancy

Data center networks that are logically divided into smaller, isolated networks. They share the physical networking gear but operate on their own network without visibility into the other logical networks.

Audit Planning

Define Audit objective > Define Scope > Conduct Audit > Refine Process

Denial of Service (DoS)

Denial of service can be caused by hackers, construction equipment, and even things like squirrels.

Risk appetite

Determined by senior management within an organization.

CSA Domains

Domain 01 - Cloud Computing Concepts and Architectures
Domain 02 - Governance and Enterprise Risk Management
Domain 03 - Legal Issues, Contracts and Electronic Discovery
Domain 04 - Compliance and Audit Management
Domain 05 - Information Governance
Domain 06 - Management Plane and Business Continuity
Domain 07 - Infrastructure Security
Domain 08 - Virtualization and Containers
Domain 09 - Incident Response
Domain 10 - Application Security
Domain 11 - Data Security and Encryption
Domain 12 - Identity, Access, and Entitlement Management
Domain 13 - Security as a Service
Domain 14 - Related Technologies
Definition aligns with: NIST 800-145

Identity providers

Don't need to be located only on-premises; many cloud providers now support cloud-based directory servers that support federation internally and with other cloud services. For example, more complex architectures can synchronize or federate a portion of an organization's identities from an internal directory through an identity broker and then to a cloud-hosted directory, which then serves as an identity provider for other federated connections.

Infrastructure as Code

Due to the virtual and software-defined nature of cloud it is often possible to define entire environments using templates that are translated by tools (either the provider's or third party) into API calls that automatically build the environment. A basic example is building a server configuration from a template. More complex implementations can build entire cloud application stacks, down to the network configuration and identity management.
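The template-to-API-call pattern described above can be sketched as follows. The template schema and the `provision_server` function are hypothetical stand-ins for a real provider's API, not any actual tool.

```python
# Minimal sketch of Infrastructure as Code: a declarative template is
# translated into one provisioning API call per resource.
# `provision_server` and the template keys are illustrative assumptions.
template = {
    "servers": [
        {"name": "web-01", "size": "small", "image": "ubuntu-22.04"},
        {"name": "web-02", "size": "small", "image": "ubuntu-22.04"},
    ]
}

def provision_server(name: str, size: str, image: str) -> dict:
    # A real IaC tool would issue the provider's API call here.
    return {"name": name, "size": size, "image": image, "state": "running"}

def apply(template: dict) -> list:
    """Translate the template into API calls and return the built environment."""
    return [provision_server(**spec) for spec in template["servers"]]

environment = apply(template)
```

Because the environment is fully described by the template, it can be rebuilt, versioned, and reviewed like any other source code.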

Firewalls

Host-based software firewall: designed to protect the host directly and the VMs running on the host indirectly. This approach may work well for a small network with few hosts and VMs configured to run in a private cloud, but it is not as effective at larger scale.
Hardware-based firewall: effective for a large enterprise network with hundreds of hosts and thousands of VMs running in a hybrid cloud.
Cloud-based firewall: delivers enterprise-grade protection as a service.

STORAGE ARCHITECTURES:

IaaS
• Volume storage (block storage): volumes/data stores attached to IaaS instances, usually a virtual hard drive. Should provide redundancy.
• Object storage (example: Dropbox): used for write once, read many; not suitable for applications like databases. Independent of any virtual machine.
• Because of varying laws and regulations, customers should always know where their data is physically stored and that it is stored in compliance with their needs.
PaaS utilizes the following data storage types:
• Structured: highly organized, such that inclusion in a relational database is seamless and readily searchable.
• Unstructured: information that doesn't reside in a traditional row-column database (text, multimedia content, email, etc.).
SaaS
• Information storage and management: data is entered into the system via the web interface and stored with the SaaS application (often a back-end database).
• Content/file storage is stored within the application.

REST (Representational State Transfer)

In addition to using HTTP for simplicity, REST offers a number of other benefits over SOAP: REST allows a greater variety of data formats, whereas SOAP only allows XML. Coupled with JSON (which typically works better with data and offers faster parsing), REST is generally considered easier to work with. Thanks to JSON, REST offers better support for browser clients. REST provides superior performance, particularly through caching of information that's not altered and not dynamic. It is the style used most often for major services such as Yahoo, eBay, Amazon, and even Google. REST is generally faster and uses less bandwidth. It's also easier to integrate with existing websites with no need to refactor site infrastructure. This enables developers to work faster rather than spend time rewriting a site from scratch; instead, they can simply add additional functionality.
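A minimal illustration of why REST plus JSON is considered easy to work with: the resource lives at a plain URL, and the JSON body parses directly into native types. The endpoint and payload below are made up for illustration.

```python
import json
from urllib.parse import urlencode

# A RESTful resource is addressed by a plain URL; query parameters filter it.
# `api.example.com` and the payload are hypothetical.
base = "https://api.example.com/v1"
url = f"{base}/users/42/orders?" + urlencode({"status": "shipped"})

# A JSON response parses straight into dicts and lists, with no XML
# envelope or schema tooling required (one of REST's advantages over SOAP).
body = '{"orders": [{"id": 7, "total": 19.99}]}'
data = json.loads(body)
total = sum(order["total"] for order in data["orders"])
```

An actual call would fetch `url` over HTTPS with any HTTP client; everything else stays the same.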

digital forensics

Includes the following phases:
■ Collection: identifying, labeling, recording, and acquiring data from the possible sources of relevant data, while following procedures that preserve the integrity of the data. (Collecting from a guest OS is generally easier than from the host OS, since hosts migrate and their dynamic storage can complicate collection.)
■ Examination: forensically processing collected data using a combination of automated and manual methods, and assessing and extracting data of particular interest, while preserving the integrity of the data.
■ Analysis: analyzing the results of the examination, using legally justifiable methods and techniques, to derive useful information that addresses the questions that were the impetus for performing the collection and examination.
■ Reporting: reporting the results of the analysis, which may include describing the actions used, explaining how tools and procedures were selected, determining what other actions need to be performed (such as forensic examination of additional data sources, securing of identified vulnerabilities, and improvement of existing security controls), and providing recommendations for improvement to policies, procedures, tools, and other aspects of the forensic process.

DRM

Includes: dynamic policy control, automatic expiration, and persistence. Supports licensing, local enforcement, and media checks.

Cloud Logical Model

Infrastructure: as the name suggests, the physical infrastructure on which the foundation is built. It consists of core components like compute, network, and storage.
Metastructure: the layer that provides the interface between infrastructure and the other layers. It consists of the protocols and technologies that help in management and configuration.
Applistructure: all the applications that have been migrated to the cloud or built on PaaS. It consists of applications and services like messaging, AI, and SNS (in the case of AWS).
Infostructure: the information layer, which consists of data. It could be databases, files, content, PII, etc.
Software-defined infrastructure: create infrastructure templates and use APIs to orchestrate.

Free-form

Internal identity providers/sources connect directly to the cloud service provider.

NIST Cloud Technology Roadmap

Interoperability Portability Availability Security Privacy Resiliency Performance Governance SLAs Auditability Regulatory Compliance

Internet of Things

Key Security Issues: • Secure data collection and sanitization. • Device registration, authentication, and authorization. One common issue encountered today is use of stored credentials to make direct API calls to the back-end cloud provider. There are known cases of attackers decompiling applications or device software and then using those credentials for malicious purposes. • API security for connections from devices back to the cloud infrastructure. Aside from the stored credentials issue just mentioned, the APIs themselves could be decoded and used for attacks on the cloud infrastructure. • Encrypted communications. Many current devices use weak, outdated, or non-existent encryption, which places data and the devices at risk. • Ability to patch and update devices so they don't become a point of compromise. Currently, it is common for devices to be shipped as-is and never receive security updates for operating systems or applications. This has already caused multiple significant and highly publicized security incidents, such as massive botnet attacks based on compromised IoT devices.

Network Isolation

Key to virtual network security is isolation. Every host has a management network through which it communicates with other hosts and management systems. In a virtual infrastructure, the management network should be isolated both physically and virtually (VLAN):
- Connect all hosts, clients, and management systems to a separate physical network to secure the traffic.
- Create isolated virtual switches for your host management network and never mix virtual-switch traffic with normal VM network traffic.
In addition to isolation, there are other virtual network security best practices to keep in mind:
■ The network used to move live VMs from one host to another does so in clear text. That means it may be possible to "sniff" the data or perform a man-in-the-middle attack when a live migration occurs.
■ When dealing with internal and external networks, always create a separate isolated virtual switch with its own physical network interface cards, and never mix internal and external traffic on a virtual switch.
■ Lock down access to your virtual switches so that an attacker cannot move VMs from one network to another and so that VMs do not straddle an internal and external network.
■ For a better virtual network security strategy, use security applications that are designed specifically for virtual infrastructure and integrate them directly into the virtual networking layer. This includes network intrusion detection and prevention systems, monitoring and reporting systems, and virtual firewalls that are designed to secure virtual switches and isolate VMs. You can integrate physical and virtual network security to provide complete data center protection.
■ If you use network-based storage such as iSCSI or Network File System (NFS), use proper authentication. For iSCSI, bidirectional CHAP authentication is best. Be sure to physically isolate storage network traffic because the traffic is often sent as clear text. Anyone with access to the same network can listen and reconstruct files, alter traffic, and possibly corrupt the network.

Oversubscription

Occurs when more users are connected to a system than can be fully supported at the same time.

Storage (File based)

Manages data in a hierarchy of files.

Trade secrets

Material unique to an organization, such as recipes.

network virtualization (VLAN or SDN)

Networks that need to be segregated:
Management network (used by the management plane): carries management and API traffic.
Service network (used by all users): carries communications between virtual machines and the Internet; this builds the network resource pool for the cloud users.
Storage network (used by back-end devices): connects virtual storage to virtual machines.

Operational Modes

Normal - Standard production operation. Maintenance mode (data stores/hosts) - Remove all active production instances, prevent new logins, and ensure logging continues. Maintenance mode is utilized when updating or configuring different components of the cloud environment. More detail is found in the SLA.

hypervisor level - type 2

Runs on top of a host OS, which makes it more vulnerable and a preferred target for attackers. • Configure hypervisors to isolate virtual machines from each other.

Data Discovery

Process of identifying information according to specific traits or categories: content-based, label-based, or metadata-based.
Trends driving this: big data, real-time analytics, agile analytics and agile business intelligence.
Issues: poor data quality, dashboards, hidden costs.
Challenges in data discovery: identifying where your data is; accessing the data; performing preservation and maintenance.
Two views for solutions:
Customer (controller): must comply with laws.
Service provider (processor): must demonstrate they applied the rules and security measures needed for PII protection on behalf of the controller.
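Content-based discovery can be sketched as pattern matching over stored content; the regular expressions below are illustrative only and nowhere near production-grade PII detectors.

```python
import re

# Content-based data discovery: classify a piece of content by the
# sensitive-data traits it matches. Patterns here are simplified examples.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def discover(text: str) -> set:
    """Return the set of trait labels found in the content."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}

assert discover("contact: alice@example.com") == {"email"}
assert discover("SSN 000-11-2222 on file") == {"ssn"}
assert discover("nothing sensitive here") == set()
```

Label-based and metadata-based discovery follow the same shape, but match on tags or file attributes instead of the content itself.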

Immutable infrastructure

Produce master images for virtual machines, containers, and infrastructure stacks very quickly and reliably. This enables automated deployments and immutable infrastructure. Security can extend these benefits by disabling remote logins to immutable servers/containers, adding file integrity monitoring, and integrating immutable techniques into incident recovery plans.

Security as a Service (SecaaS)

Providers offer security capabilities as a cloud service. This includes dedicated SecaaS providers, as well as packaged security features from general cloud-computing providers. Security as a Service encompasses a very wide range of possible technologies, but they must meet the following criteria:
• SecaaS includes security products or services that are delivered as a cloud service.
• To be considered SecaaS, the services must still meet the essential NIST characteristics for cloud computing, as defined in Domain 1.
Benefits:
• As with any SaaS delivery model: reduced capital expenses, agility, redundancy, high availability, and resiliency.
• Staffing and expertise.
• Intelligence sharing: SecaaS providers protect multiple clients simultaneously and have the opportunity to share data intelligence and data across them.
• Deployment flexibility.
• Insulation of clients: SecaaS can intercept attacks before they hit the organization directly. For example, spam filtering and cloud-based web application firewalls are positioned between the attackers and the organization, so they can absorb certain attacks before they ever reach the customer's assets.
• Scaling and cost.
Top concerns:
• Lack of visibility: since services operate at a remove from the customer, they often provide less visibility or data compared to running one's own operation.
• Regulation differences.
• Handling of regulated data.
• Data leakage.
• Changing providers.
• Migration to SecaaS.
Service offerings: IAM, CASB, web security gateway, email security, security assessment, WAF, IPS/IDS, SIEM, encryption and key management, business continuity/DR, security management, DDoS protection.
Recommendations:
• Before engaging a SecaaS provider, be sure to understand any security-specific requirements for data handling (and availability), investigative, and compliance support.
• Pay particular attention to handling of regulated data, like PII.
• Understand your data retention needs and select a provider that can support data feeds that don't create a lock-in situation.
• Ensure that the SecaaS service is compatible with your current and future plans, such as its supported cloud (and on-premises) platforms, the workstation and mobile operating systems it accommodates, and so on.

Contract

Roles and responsibilities should be included here, never in the SLA. Joint operating agreement: provides nearby relocation sites so that disruption limited to an organization's facility can be addressed at a different facility (cost savings). Key additions:
■ Performance measurement
■ SLAs
■ Availability and associated downtime
■ Expected performance and minimum levels of performance
■ Incident response
■ Resolution timeframes
■ Maximum and minimum period for tolerable disruption
■ Issue resolution
■ Communication of incidents
■ Investigations
■ Capturing of evidence
■ Forensic and e-discovery processes
■ Civil and state investigations
■ Tort law and copyright
■ Control and compliance frameworks (ISO 27001/2, ISO 27017, COBIT, PCI DSS, HIPAA, GLBA)
■ PII
■ Data protection
■ Safe Harbor
■ U.S. Patriot Act
■ BCDR
Termination:
Reaffirm contractual obligations. The organization should alert the cloud provider about any relevant contractual requirements that must be observed upon termination, such as non-disclosure of certain terms of the agreement and sanitization of organizational data from storage media.
Eliminate physical and electronic access rights. If any accounts and access rights to an organization's computational resources were assigned to the cloud provider as part of the service agreement, they should be revoked in a timely manner by the organization. Similarly, physical access rights of security tokens and badges issued to the cloud provider also need to be revoked, and any personal tokens and badges used for access need to be recovered.
Recover organizational resources and data. The organization should ensure that any resources made available to the cloud provider under the terms of the service agreement, such as software, equipment, and documentation, are returned or recovered in a usable form, as well as any data, programs, scripts, etc. owned by the organization and held by the cloud provider. If the terms of service require the cloud provider to purge data, programs, backup copies, and other cloud consumer content from its environment, evidence such as system reports or logs should be obtained and verified.

Runtime Application Self-Protection

Runtime application self-protection (RASP) is generally considered to focus on applications that possess self-protection capabilities built into their runtime environments, which have full insight into application logic, configuration, and data and event flows. RASP prevents attacks by self-protecting or reconfiguring automatically without human intervention in response to certain conditions (threats, faults, and so on).

Examples of Cloud Services

SaaS services:
o Email and Office Productivity: Applications for email, word processing, spreadsheets, presentations, etc.
o Billing: Application services to manage customer billing based on usage and subscriptions to products and services.
o Customer Relationship Management (CRM): CRM applications that range from call center applications to sales force automation.
o Collaboration: Tools that allow users to collaborate in workgroups, within enterprises, and across enterprises.
o Content Management: Services for managing the production of and access to content for web-based applications.
o Document Management: Applications for managing documents, enforcing document production workflows, and providing workspaces for groups or enterprises to find and access documents.
o Financials: Applications for managing financial processes ranging from expense processing and invoicing to tax management.
o Human Resources: Software for managing human resources functions within companies.
o Sales: Applications that are specifically designed for sales functions such as pricing, commission tracking, etc.
o Social Networks: Social software that establishes and maintains a connection among users that are tied in one or more specific types of interdependency.
o Enterprise Resource Planning (ERP): Integrated computer-based system used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources.
PaaS services:
o Business Intelligence: Platforms for the creation of applications such as dashboards, reporting systems, and data analysis (analytics, cognitive).
o Database: Services offering scalable relational database solutions or scalable non-SQL datastores.
o Development and Testing: Platforms for the development and testing cycles of application development, which expand and contract as needed.
o Integration: Development platforms for building integration applications in the cloud and within the enterprise.
o Application Deployment: Platforms suited for general-purpose application development. These services provide databases, web application runtime environments, etc.
o Big Data
o Cache
IaaS services:
o Backup and Recovery: Services for backup and recovery of file systems and raw data stores on servers and desktop systems.
o Compute: Server resources for running cloud-based systems that can be dynamically provisioned and configured as needed.
o Content Delivery Networks (CDNs): CDNs store content and files to improve the performance and cost of delivering content for web-based systems.
o Services Management: Services that manage cloud infrastructure platforms. These tools often provide features that cloud providers do not provide or specialize in managing certain application technologies.
o Storage: Massively scalable storage capacity that can be used for applications, backups, archival, and file storage.
Cloud broker cloud services:
o Cloud Abstraction
o Cloud Integration
o Cloud Management
o Configuration Automation
o Data
(Service categories drawn from NIST SP 500-292, NIST Cloud Computing Reference Architecture.)

KVM Switch

Secure KVM features: sealed exterior case, welded chipsets, push-button selectors. Note: does not record keystrokes.

virtual appliance

Since physical appliances can't be inserted (except by the cloud provider), they must be replaced by virtual appliances if still needed, and if the cloud network supports the necessary routing. This brings the same concerns as inserting virtual appliances for network monitoring:
• Virtual appliances can become bottlenecks, since they cannot fail open and must intercept all traffic.
• Virtual appliances may take significant resources and increase costs to meet network performance requirements.
• When used, virtual appliances should support auto-scaling to match the elasticity of the resources they protect. Depending on the product, this could cause issues if the vendor does not support elastic licensing compatible with auto-scaling.
• Virtual appliances should also be aware of operating in the cloud, including the ability of instances to move between different geographic and availability zones. The velocity of change in cloud networks is higher than that of physical networks, and tools need to be designed to handle this important difference.
• Cloud application components tend to be more distributed to improve resiliency and, due to auto-scaling, virtual servers may have shorter lives and be more prolific. This changes how security policies need to be designed.

STRIDE

Spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege.

Hosting

Standalone host: a system residing on its own server (isolated, secured, dedicated hosting of individual cloud resources).
Shared host: multiple websites or similar workloads located on a single machine (to isolate and secure individual cloud resources, this would have to be combined with a configuration offering multi-tenant hosting capabilities).
Virtual host: a single system carved up into multiple systems.
---------------------------------------------
Clustered host: logically and physically connected to other hosts within a management framework. This is done to allow central management of resources for the collection of hosts, and to let applications and VMs running on a member of the cluster fail over, or move, between host members as needed for continued operation of those resources, with a focus on minimizing the downtime that host failures can cause.
Resource sharing: within a host cluster, resources are allocated and managed as if they were pooled or jointly available to all members of the cluster. Resource-sharing concepts such as reservations, limits, and shares may be used to further refine and orchestrate the allocation of resources according to requirements that the cluster administrator imposes.
Notes: all virtualization vendors use distributed resource scheduling (DRS) in one form or another to allow a cluster of hosts to do the following:
■ Provide highly available resources to your workloads
■ Balance workloads for optimal performance
■ Scale and manage computing resources without service disruption

Extensible Access Control Markup (XACML)

Standard for defining attribute-based access controls and authorizations.
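In the spirit of XACML (real XACML policies are XML documents), an attribute-based authorization decision can be sketched as matching request attributes against policy rules. The rule format below is my own simplification, using a first-applicable combining algorithm with default deny.

```python
# Attribute-based access control sketch: each rule matches on
# subject/resource/action attributes and yields an effect.
# This is NOT XACML syntax, just the underlying decision model.
policy = [
    {"effect": "Permit",
     "when": {"role": "auditor", "action": "read", "classification": "internal"}},
    {"effect": "Deny",
     "when": {"action": "delete"}},
]

def decide(request: dict) -> str:
    """First-applicable rule wins; default-deny when nothing matches."""
    for rule in policy:
        if all(request.get(k) == v for k, v in rule["when"].items()):
            return rule["effect"]
    return "Deny"

assert decide({"role": "auditor", "action": "read",
               "classification": "internal"}) == "Permit"
assert decide({"role": "auditor", "action": "delete"}) == "Deny"
```

XACML additionally separates the policy decision point (the `decide` logic) from the policy enforcement point that intercepts the request.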

Doctrine of the proper law

States how jurisdictional disputes should be settled.

Qualitative assessments

Typically employ a set of methods, principles, or rules for assessing risk based on non-numerical categories or levels (e.g., very low, low, moderate, high, very high).

Quantitative assessments

Typically employ a set of methods, principles, or rules for assessing risk based on the use of numbers. This type of assessment most effectively supports cost-benefit analyses of alternative risk responses or courses of action.
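The standard quantitative formulas are SLE = asset value x exposure factor and ALE = SLE x ARO; a control is cost-justified when it costs less than the ALE reduction it delivers. The figures below are illustrative only.

```python
# Quantitative risk assessment, as used for cost-benefit analysis:
#   SLE (single loss expectancy)     = asset value * exposure factor
#   ALE (annualized loss expectancy) = SLE * ARO (annualized rate of occurrence)
asset_value = 200_000.0
exposure_factor = 0.25   # fraction of the asset's value lost per incident
aro = 0.5                # expected incidents per year

sle = asset_value * exposure_factor   # loss per incident
ale = sle * aro                       # expected loss per year

# A control is cost-justified when its annual cost is below the ALE reduction.
control_cost = 10_000.0
residual_ale = ale * 0.25             # assume the control removes 75% of the risk
net_benefit = (ale - residual_ale) - control_cost
```

Here the control removes 18,750 of annual expected loss for a 10,000 annual cost, so it is worth deploying under these (illustrative) numbers.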

Managed Service Provider

Unlike with a cloud service provider, the customer dictates the technology and operating procedures. Other features:
■ Some form of network operations center (NOC) service
■ Some form of help desk service
■ Remote monitoring and management of all or most of the objects for the customer
■ Proactive maintenance of the objects under management for the customer
■ Delivery of these solutions with some form of predictable billing model, where the customer knows with great accuracy what the regular IT management expense will be

Immutable Workloads

Build once (be it a virtual machine image, container image, or something else), run one or many instances of, and never change again. The deployment model is to terminate the instance/container and start over from step one: build a new image and throw old instances away.
• You no longer patch running systems or worry about dependencies, broken patch processes, etc. You replace them with a new gold master.
• You can, and should, disable remote logins to running workloads (if logins are even an option). This is an operational requirement to prevent changes that aren't consistent across the stack, which also has significant security benefits.
• It is much faster to roll out updated versions, since applications must be designed to handle individual nodes going down (remember, this is fundamental to any auto-scaling). You are less constrained by the complexity and fragility of patching a running system. Even if something breaks, you just replace it.
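The build-once, replace-don't-patch loop can be sketched as follows; `launch` and `terminate` are hypothetical stand-ins for a provider's API.

```python
# Immutable workload deployment sketch: instances are never patched in
# place -- a new gold master image is launched and old instances are thrown
# away. The API functions below are illustrative, not any real provider SDK.
def launch(image: str) -> dict:
    return {"image": image, "state": "running"}

def terminate(instance: dict) -> None:
    instance["state"] = "terminated"

def redeploy(fleet: list, new_image: str) -> list:
    """Replace every instance with a fresh one built from the new image."""
    replacements = [launch(new_image) for _ in fleet]
    for old in fleet:
        terminate(old)
    return replacements

fleet = [launch("app-v1"), launch("app-v1")]
fleet = redeploy(fleet, "app-v2")   # rollout = replace, never patch
```

A rollback is the same operation with the previous image name, which is why immutable deployments recover quickly from bad releases.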

Software Defined Perimeter

Combines device and user authentication to dynamically provision network access to resources and enhance security. SDP includes three components:
• An SDP client on the connecting asset (e.g., a laptop).
• The SDP controller for authenticating and authorizing SDP clients and configuring the connections to SDP gateways.
• The SDP gateway for terminating SDP client network traffic and enforcing policies in communication with the SDP controller.

e-discovery

refers to any process in which electronic data is sought, located, secured, and searched with the intent of using it as evidence in a civil or criminal legal case. e-discovery can be carried out online and offline (for static systems or within particular network segments). In the case of cloud computing, almost all e-discovery cases are done in online environments with resources remaining online. There are various ways to conduct e-discovery investigations in cloud environments. A few examples include the following:
■ Software as a service (SaaS)-based e-discovery: To some, "e-discovery in the cloud" means using the cloud to deliver tools used for e-discovery. These SaaS packages typically cover one of several e-discovery tasks, such as collection, preservation, and review.
■ Hosted e-discovery (provider): e-discovery in the cloud can also mean hiring a hosted services provider to conduct e-discovery on data stored in the cloud. Typically, the customer stores data in the cloud with the understanding and mechanisms to support the cloud vendor doing the e-discovery. When the providers are not in a position to resource or provide the e-discovery, they may outsource to a credible or trusted provider.
■ Third-party e-discovery: When no prior notifications or arrangements with the CSP for an e-discovery review exist, typically an organization needs a third party or specialized resources operating on its behalf.

FISMA

requires federal agencies to adequately protect their information and information systems against unauthorized access, use, disclosure, disruption, modification, or destruction.

customer (data controller)

retains liability for any loss of data; this is not transferred to the vendor.

Storage Encryption

• IaaS encryption uses volume storage encryption and object storage encryption.
• PaaS encryption uses client/application encryption, database encryption, and proxy-based encryption.
• SaaS encryption is managed by the cloud service provider within the application, and through proxy encryption.
IaaS encryption: IaaS volumes can be encrypted using different methods, depending on your data.
Volume storage encryption:
• Instance-managed encryption: the encryption engine runs within the instance, and the key is stored in the volume but protected by a passphrase or keypair.
• Externally managed encryption: the encryption engine runs in the instance, but the keys are managed externally and issued to the instance on request.
Object and file storage:
• Client-side encryption: when object storage is used as the back end for an application (including mobile applications), encrypt the data using an encryption engine embedded in the application or client.
• Server-side encryption: data is encrypted on the server (cloud) side after being transferred in. The cloud provider has access to the key and runs the encryption engine.
• Proxy encryption: in this model, you connect the volume to a special instance or appliance/software, and then connect your instance to the encryption instance. The proxy handles all crypto operations and may keep keys either onboard or externally.
PaaS encryption: varies tremendously due to all the different PaaS platforms.
• Application-layer encryption: data is encrypted in the PaaS application or the client accessing the platform.
• Database encryption: data is encrypted in the database using encryption that's built in and supported by a database platform, such as Transparent Database Encryption (TDE), or at the field level.
• Other: provider-managed layers in the application, such as the messaging queue. There are also IaaS options when IaaS is used for underlying storage.
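The client-side encryption option can be sketched as: encrypt before upload, so the provider only ever holds ciphertext while the key stays with the client or an external KMS. The one-time-pad XOR below is a toy stand-in for a real cipher such as AES-GCM.

```python
import os

# Client-side encryption pattern: the object is encrypted before it reaches
# object storage. The XOR one-time pad here is only a placeholder for a
# real authenticated cipher; the key-handling split is the point.
def encrypt(plaintext: bytes) -> tuple:
    key = os.urandom(len(plaintext))   # key never leaves the client/KMS
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

blob, key = encrypt(b"customer record")
stored_in_cloud = blob                 # the provider receives only this
assert decrypt(stored_in_cloud, key) == b"customer record"
```

In server-side encryption the same operations run on the provider's side, which is exactly why the provider then has access to the key.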

Storage - SaaS

• Information storage and management: Data is entered into the system via the web interface and stored within the SaaS application (usually a back-end database). This data storage utilizes databases, which in turn are installed on object or volume storage.
• Content and file storage: File-based content is stored within the application.

Other types of storage that may be utilized include these:

• Ephemeral storage: This type of storage is relevant for IaaS instances and exists only as long as its instance is up. It is typically used for swap files and other temporary storage needs and is terminated with its instance.
• Content delivery network (CDN): Content is stored in object storage, which is then distributed to multiple geographically distributed nodes to improve Internet consumption speed.
• Raw storage: Raw device mapping (RDM) is an option in the VMware server virtualization environment that enables a storage logical unit number (LUN) to be directly connected to a virtual machine (VM) from the storage area network (SAN). In Microsoft's Hyper-V platform, this is accomplished using pass-through disks.
• Long-term storage: Some vendors offer a cloud storage service tailored to the needs of data archiving. Typical data archiving needs include search, guaranteed immutability, and data lifecycle management. One example is the HP Autonomy Digital Safe archiving service, which uses an on-premises appliance that connects to customers' data stores via application programming interfaces (APIs) and allows users to search. Digital Safe provides read-only, write once read many (WORM), legal hold, e-discovery, and all the features associated with enterprise archiving. Its appliance carries out data deduplication prior to transmission to the data repository.

VULNERABILITY DATABASES AND RESOURCES

• OWASP (Open Web Application Security Project) Top Ten
• CVE (Common Vulnerabilities and Exposures)
• CWE (Common Weakness Enumeration)
• NVD (National Vulnerability Database)
• US-CERT (Computer Emergency Response Team) Vulnerability Database

SDLC (SOFTWARE DEVELOPMENT LIFECYCLE) FOR THE CLOUD

• Planning and requirements analysis: All business requirements should be defined, and risks should be identified.
• Defining: Clearly define the requirements through a requirement specification document.
• Designing: Specify hardware and system requirements and help define the overall architecture.
• Developing: Work is divided into modules, and the actual coding starts.
• Testing: Code is tested against requirements through unit testing, integration testing, system testing, and user acceptance testing.
• Maintenance: Continuous monitoring and updates as needed.

Privileged User Management

• Privileged identities should always use MFA. In terms of controlling risk, few things are more essential than privileged user management. The requirements mentioned above for strong authentication should be a strong consideration for any privileged user. In addition, account and session recording should be implemented to drive up accountability and visibility for privileged users. In some cases, it is beneficial for a privileged user to sign in through a separate, tightly controlled system using higher levels of assurance for credential control, digital certificates, physically and logically separate access points, and/or jump hosts.

Secure Software Development Lifecycle and Cloud Computing (SSDLC)

• Secure design and development: From training and developing organization-wide standards to actually writing and testing code.
• Secure deployment: The security and testing activities involved in moving code from an isolated development environment into production.
• Secure operations: Securing and maintaining production applications, including external defenses such as web application firewalls (WAFs) and ongoing vulnerability assessments.

Identity Access Management (IAM) standards

• Security Assertion Markup Language (SAML) 2.0 is an OASIS standard for federated identity management that supports both authentication and authorization. It uses XML to make assertions between an identity provider and a relying party. Assertions can contain authentication statements, attribute statements, and authorization decision statements. SAML is very widely supported by both enterprise tools and cloud providers but can be complex to configure initially.
• OAuth is an IETF standard for authorization that is very widely used for web services (including consumer services). OAuth is designed to work over HTTP and is currently on version 2.0, which is not compatible with version 1.0. To add a little confusion to the mix, OAuth 2.0 is more of a framework and less rigid than OAuth 1.0, which means implementations may not be compatible. It is most often used for delegating access control/authorization between services.
• OpenID is a standard for federated authentication that is very widely supported for web services. It is based on HTTP, with URLs used to identify the identity provider and the user/identity (e.g., identity.identityprovider.com). The current version is OpenID Connect 1.0, and it is very commonly seen in consumer services.
• eXtensible Access Control Markup Language (XACML) is a standard for defining attribute-based access controls/authorizations. It is a policy language for defining access controls at a policy decision point and then passing them to a policy enforcement point. It can be used with both SAML and OAuth, since it solves a different part of the problem: deciding what an entity is allowed to do.
• Authoritative source: The "root" source of an identity, such as the directory server that manages employee identities.
• Identity provider: The source of the identity in federation. The identity provider isn't always the authoritative source but can sometimes rely on the authoritative source, especially if it is a broker for the process.
• Relying party: The system that relies on an identity assertion from an identity provider.
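The identity-provider/relying-party relationship behind these standards can be illustrated with a toy HMAC-signed assertion. Real SAML assertions are signed XML and OpenID Connect uses JWTs; the shared key and claim names here are invented for the sketch.

```python
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"idp-and-rp-shared-secret"   # assumed to be exchanged out of band

def issue_assertion(subject: str, attributes: dict) -> str:
    """Identity provider: sign a claims blob so the relying party can trust it."""
    claims = json.dumps({"sub": subject, "attrs": attributes}, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, claims, hashlib.sha256).digest()
    return base64.b64encode(claims).decode() + "." + base64.b64encode(sig).decode()

def verify_assertion(token: str) -> dict:
    """Relying party: accept the claims only if the signature checks out."""
    claims_b64, sig_b64 = token.split(".")
    claims = base64.b64decode(claims_b64)
    expected = hmac.new(SHARED_KEY, claims, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(sig_b64)):
        raise ValueError("assertion signature invalid")
    return json.loads(claims)

token = issue_assertion("alice", {"role": "admin"})
assert verify_assertion(token)["attrs"]["role"] == "admin"
```

The relying party never sees the user's password; it trusts whatever the identity provider asserts, which is the essence of federation.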

Event driven security

• Use event-driven security, when available, to automate detection and remediation of security issues.
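A minimal sketch of the pattern, assuming a hypothetical event schema (the event type and field names are invented, not any provider's actual API): detections are routed to registered remediation handlers.

```python
# Registry mapping security event types to remediation handlers.
REMEDIATIONS = {}

def remediates(event_type):
    """Decorator registering a handler for a given security event type."""
    def register(fn):
        REMEDIATIONS[event_type] = fn
        return fn
    return register

@remediates("security_group_opened_to_world")
def close_security_group(event):
    # In a real deployment this would call the provider's API to revoke
    # the offending ingress rule; here it just reports the action.
    return {"action": "revoke_ingress", "group": event["group_id"]}

def handle(event):
    """Dispatch an incoming event to its remediation, or just log it."""
    fn = REMEDIATIONS.get(event["type"])
    return fn(event) if fn else {"action": "log_only"}

result = handle({"type": "security_group_opened_to_world", "group_id": "sg-123"})
assert result == {"action": "revoke_ingress", "group": "sg-123"}
```

The value of the pattern is latency: remediation fires on the event itself rather than waiting for a periodic scan.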

Business continuity management (BCM) vs Business continuity (BC)

• BC is defined as the capability of the organization to continue delivery of products or services at acceptable predefined levels following a disruptive incident. (Source: ISO 22301:2012)
• BCM is defined as a holistic management process that identifies potential threats to an organization and the impacts to business operations those threats, if realized, might cause. It provides a framework for building organizational resilience with the capability of an effective response that safeguards the interests of its key stakeholders, reputation, brand, and value-creating activities.

The three main aspects of BC/DR in the cloud are:

• Ensuring continuity and recovery within a given cloud provider. These are the tools and techniques to best architect your cloud deployment to keep things running if either what you deploy breaks or a portion of the cloud provider breaks.
• Preparing for and managing cloud provider outages. This extends from the more constrained problems that you can architect around within a provider to the wider outages that take down all or some of the provider in a way that exceeds the capabilities of inherent DR controls.
• Considering options for portability, in case you need to migrate providers or platforms. This could be due to anything from desiring a different feature set to the complete loss of the provider if, for example, they go out of business or you have a legal dispute.

Governance challenges

• Define audit requirements and the extension of additional audit activities.
• Verify that all regulatory and legal obligations will be satisfied as part of the nondisclosure agreement (NDA) or contract.
• Establish reporting and communication lines both internal to the organization and for CSPs.
• Ensure that where operational procedures and processes are changed due to the use of cloud services, all documentation and evidence are updated accordingly.
• Ensure that all business continuity, incident management and response, and disaster recovery plans (DRPs) are updated to reflect changes and interdependencies.

Cloud Security Alliance Star

• Level 1, Self-Assessment: Requires the release and publication of a due-diligence self-assessment against the CSA Consensus Assessments Initiative (CAI) questionnaire or the CCM.
• Level 2, Attestation: Requires the release and publication of the results of an assessment carried out by an independent third party based on the CSA CCM and ISO 27001:2013 or AICPA SOC 2.
• Level 3, Ongoing Monitoring Certification: Requires the release and publication of results related to security-properties monitoring based on the Cloud Trust Protocol (CTP).

remote access

• Tunneling via a VPN—IPSec or SSL
• Remote desktop protocol (RDP), which allows for desktop access to remote systems
• Access via a secure terminal
• Deployment of a DMZ

There are several cloud environment access requirements. The cloud environment should provide each of the following:

• Encrypted transmission of all communications between the remote user and the host
• Secure login with complex passwords or certificate-based login
• Two-factor authentication providing enhanced security
• A log and audit of all connections

Data Security Key Elements

• DLP: For auditing and preventing unauthorized data exfiltration
• Encryption: For preventing unauthorized data viewing
• Obfuscation, anonymization, tokenization, and masking: Different alternatives for protecting data without encryption

Data in Motion Encryption

• Transport layer security (TLS): A protocol that ensures privacy between communicating applications and their users on the Internet. When a server and client communicate, TLS ensures that no third party may eavesdrop on or tamper with a message. TLS is the successor to SSL.
• SSL: The standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and browsers remains private and integral.
• Virtual private network (VPN, such as an IPSec gateway): A network that is constructed by using public wires—usually the Internet—to connect to a private network, such as a company's internal network. A number of systems enable you to create networks using the Internet as the medium for transporting data.
• Domain name system security extensions (DNSSEC): Should be used to prevent domain name system (DNS) poisoning. DNSSEC is a suite of Internet Engineering Task Force (IETF) specifications for securing certain kinds of information provided by DNS as used on Internet protocol (IP) networks.
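As one concrete example, Python's standard ssl module creates a client context with certificate validation and hostname checking enabled by default; pinning the minimum protocol version refuses the legacy SSL/early-TLS versions described above.

```python
import ssl

# Client-side TLS context with safe defaults: server certificates are
# validated against the system trust store and hostnames are checked.
ctx = ssl.create_default_context()

# Refuse SSL 3.0 and TLS 1.0/1.1; only TLS 1.2+ handshakes are allowed.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

This context would then be passed to a socket or HTTP client when connecting to a remote service.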

Security Devices [extra]

• Web application firewall (WAF)
  - A WAF is a layer-7 firewall that can understand HTTP traffic.
  - A cloud WAF can be extremely effective in the case of a DoS attack; in several cases, a cloud WAF was used to successfully thwart DoS attacks of 350 Gbps and 450 Gbps.
• Database activity monitoring (DAM)
  - DAM is a layer-7 monitoring device that understands SQL commands.
  - DAM can be agent-based (ADAM) or network-based (NDAM).
  - A DAM can detect and stop malicious commands from executing on an SQL server.
• XML gateways
  - XML gateways transform the way services and sensitive data are exposed as APIs to developers, mobile users, and cloud users.
  - XML gateways can be either hardware or software.
  - XML gateways can implement security controls such as data loss prevention (DLP), antivirus, and antimalware services.
• Firewalls
  - Firewalls can be distributed or configured across the SaaS, PaaS, and IaaS landscapes; these can be owned and operated by the provider or outsourced to a third party for ongoing management and maintenance.
  - Firewalls in the cloud need to be installed as software components (such as host-based firewalls).
• API gateway
  - An API gateway is a device that filters API traffic; it can be installed as a proxy or as a specific part of your application stack before data is processed.
  - An API gateway can implement access control, rate limiting, logging, metrics, and security filtering.

Data at Rest Encryption

• Whole instance encryption: A method for encrypting all the data associated with the operation and use of a virtual machine, such as the data stored at rest on the volume, disk input/output (I/O), all snapshots created from the volume, and all data in transit moving between the virtual machine and the storage volume.
• Volume encryption: A method for encrypting a single volume on a drive. Parts of the hard drive are left unencrypted when using this method. (Full disk encryption should be used to encrypt the entire contents of the drive, if that is what is desired.)
• File or directory encryption: A method for encrypting a single file or directory on a drive.

Cloud Storage

The storage of data online in the cloud, wherein a company's data is stored in and accessible from multiple distributed and connected resources that comprise a cloud.

Enterprise Application

The term used to describe applications — or software — that a business would use to assist the organization in solving enterprise problems.

Cloud Administrator

This individual is typically responsible for the implementation, monitoring, and maintenance of the cloud within the organization or on behalf of an organization (acting as a third party).

Anything-as-a-Service

Anything-as-a-service, or "XaaS," refers to the growing diversity of services available over the Internet via cloud computing as opposed to being provided locally, or on premises.

Data Loss Prevention (DLP)

Audits and prevents unauthorized data exfiltration. Should include policy enforcement, elasticity, and loss mitigation. DLP can also help with inadvertent disclosure, but data has to be properly categorized for this control to be effective. DLP + DRM = knockout combo. The following are some important considerations for cloud-based DLP:

• Data in the cloud tends to move and replicate: Whether it is between locations, data centers, backups, or back and forth into the organization, the replication and movement can present a challenge to any DLP implementation.
• Administrative access for enterprise data in the cloud can be tricky: Make sure you understand how to perform discovery and classification within cloud-based storage.
• DLP technology can affect overall performance: Network or gateway DLP, which scans all traffic for predefined content, might affect network performance. Client-based DLP scans all workstation access to data, which can affect the workstation's operation. The overall impact must be considered during testing.
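At heart, a gateway DLP's scan for "predefined content" is pattern matching over outbound payloads. A minimal sketch, using a naive regex for something shaped like a 16-digit card number (a real product would use validated detectors, checksums such as Luhn, and classification labels):

```python
import re

# Naive pattern for a 16-digit primary account number, with optional
# spaces or dashes between digits.
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def should_block(payload: str) -> bool:
    """Return True if the outbound payload matches predefined content."""
    return bool(PAN_PATTERN.search(payload))

assert should_block("card: 4111 1111 1111 1111")
assert not should_block("order #12345 shipped")
```

This also illustrates the performance consideration above: every outbound message pays the cost of the scan.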

Identity and Access Management (IAM)

The security discipline that enables the right individuals to access the right resources at the right times for the right reasons

Enterprise Risk Management

The set of processes and structure to systematically manage all risks to the enterprise.

Desktop-as-a-service

A form of virtual desktop infrastructure (VDI) in which the VDI is outsourced and handled by a third party.

Service Level Agreement (SLA)

A formal agreement between two or more organizations: one that provides a service and the other the recipient of the service. It may be a legal contract with incentives and penalties.

Organizational Normative Framework (ONF)

A framework of so-called containers for all components of application security best practices, cataloged and leveraged by the organization.

Business Impact Analysis (BIA)

An exercise that determines the impact of losing the support of any resource to an organization, establishes the escalation of that loss over time, identifies the minimum resources needed to recover, and prioritizes the recovery of processes and supporting systems. At the cloud level, the BIA should also consider the cloud provider's suppliers, vendors, and utilities. The BIA can also leverage an existing cost-benefit analysis the organization conducted when deciding on cloud migration.

Eucalyptus

An open source cloud computing and Infrastructure as a Service (IaaS) platform for enabling private clouds.

Cloud Computing Reseller

A company that purchases hosting services from a cloud server hosting or cloud computing provider and then re-sells them to its own customers.

Cloud Application Architect

Typically responsible for adapting, porting, or deploying an application to a target cloud environment.

Degaussing

Using strong magnets for scrambling data on magnetic media such as hard drives and tapes.

Apache CloudStack

An open source cloud computing and Infrastructure as a Service (IaaS) platform developed to make creating, deploying, and managing cloud services easier by providing a complete "stack" of features and components for cloud environments.

Cloud Management

Software and technologies designed for operating and monitoring the applications, data, and services residing in the cloud. Cloud management tools help to ensure a company's cloud computing-based resources are working optimally and properly interacting with users and other services.

Application Virtualization

Software technology that encapsulates application software from the underlying operating system on which it is executed.

Cloud Portability

The ability to move applications and their associated data between one cloud provider and another — or between public and private cloud environments.

Enterprise DRM

The application of digital rights management (DRM) technology to an enterprise's internal documents and content, controlling viewing, copying, printing, and forwarding of protected files wherever they travel, inside or outside the organization.

Database Activity Monitoring (DAM)

A database security technology for monitoring and analyzing database activity that operates independently of the database management system (DBMS) and does not rely on any form of native (DBMS-resident) auditing or native logs such as trace or transaction logs. It can be host based or network based.

Software as a Service (SaaS)

A distributed model where software applications are hosted by a vendor or cloud service provider and made available to customers over network resources. The customer remains responsible for the data.

• Hosted application management (hosted AM): The provider hosts commercially available software for customers and delivers it over the web.
• Software on demand: The CSP gives customers network-based access to a single copy of an application created specifically for SaaS distribution (typically within the same network segment).

Key features:
• Overall reduction in costs
• Application and software licensing
• Reduced support costs

Mobile Cloud Storage

A form of cloud storage that applies to storing an individual's mobile device data in the cloud and providing the individual with access to the data from anywhere.

Public Cloud Storage

A form of cloud storage where the enterprise and storage service provider are separate and the data is stored outside of the enterprise's data center.

Private Cloud Storage

A form of cloud storage where the enterprise data and cloud storage resources both reside within the enterprise's data center and behind the firewall.

Bit Splitting

Usually involves splitting up and storing encrypted information across different cloud storage services. The benefits of bit splitting follow:

• Data security is enhanced due to the use of stronger confidentiality mechanisms.
• Bit splitting between different geographies and jurisdictions may make it harder to gain access to the complete data set via a subpoena or other legal processes.
• It can be scalable, can be incorporated into secured cloud storage API technologies, and can reduce the risk of vendor lock-in.

Bit splitting can utilize different methods, a large percentage of which are based on secret sharing cryptographic algorithms:

• Secret Sharing Made Short (SSMS): Uses a three-phase process—encryption of information; use of an information dispersal algorithm (IDA), which is designed to efficiently split the data using erasure coding into fragments; and splitting of the encryption key using the secret sharing algorithm. The different fragments of data and encryption keys are then signed and distributed to different cloud storage services. The user can reconstruct the original data by accessing only m (lower than n) arbitrarily chosen fragments of the data and encryption key. An adversary has to compromise m cloud storage services and recover both the encrypted information and the encryption key, which is also split.
• All-or-Nothing-Transform with Reed-Solomon (AONT-RS): Integrates the AONT and erasure coding. This method first encrypts and transforms the information and the encryption key into blocks in a way that the information cannot be recovered without using all the blocks, and then it uses the IDA to split the blocks into m shares that are distributed to different cloud storage services (the same as in SSMS).
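The simplest flavor of the idea can be sketched as n-of-n XOR splitting: every share is required to reconstruct, and any subset of fewer than n shares reveals nothing about the secret. Real SSMS-style schemes instead use m-of-n secret sharing plus an information dispersal algorithm; this sketch only shows the splitting principle.

```python
import secrets

def split(secret: bytes, n: int) -> list:
    """n-of-n XOR splitting: all n shares are needed to recover the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def combine(shares) -> bytes:
    """XOR all shares back together to recover the original secret."""
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out

# Each share would be stored with a different cloud storage service.
shares = split(b"encryption key material", 3)
assert combine(shares) == b"encryption key material"
```

A subpoena served on any single storage provider yields only one share, which on its own is indistinguishable from random bytes.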

Multi-factor Authentication

A method of computer access control that a user can pass only by successfully presenting authentication factors from at least two of the three categories. It combines two or more independent credentials:

• What the user knows (such as a password)
• What the user has (such as a display token with random numbers displayed)
• What the user is (such as biometrics)

One-time passwords also fall under the banner of multifactor authentication. The use of one-time passwords is strongly encouraged during provisioning and the communication of first-login passwords to users. Step-up authentication is an additional factor or procedure that validates a user's identity, normally prompted by high-risk transactions or violations of policy rules. Three methods are commonly used:

• Challenge questions
• Out-of-band authentication (a call or Short Message Service [SMS] text message to the end user)
• Dynamic knowledge-based authentication (questions unique to the end user)
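The "what they have" factor in a display token is usually a time-based one-time password (TOTP, RFC 6238), built on HMAC-based OTP (HOTP, RFC 4226). A minimal standard-library sketch, checked against the published RFC test vectors:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP with the counter derived from the current time."""
    if timestamp is None:
        timestamp = time.time()
    return hotp(secret, int(timestamp // step))

# RFC test vectors: the shared secret is the ASCII string "12345678901234567890".
assert hotp(b"12345678901234567890", 0) == "755224"          # RFC 4226, counter 0
assert totp(b"12345678901234567890", timestamp=59) == "287082"
```

Because server and token derive the same counter from the clock, the code changes every 30 seconds without any network round trip.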

Encryption

An overt secret writing technique that uses a bidirectional algorithm in which humanly readable information (referred to as plaintext) is converted into humanly unintelligible information (referred to as ciphertext). In a cloud environment, encryption should be used for long-term storage, near-term storage of virtualized images, and secure sessions/VPNs. Encryption mechanisms should be selected based on the information and data they protect, while taking into account requirements for access and general functions. The critical success factor for encryption is to enable secure and legitimate access to resources, while protecting and enforcing controls against unauthorized access. Typically, the following components are associated with encryption deployments:

• The data: The data object or objects that need to be encrypted.
• Encryption engine: Performs the encryption operation.
• Encryption keys: All encryption is based on keys. Safeguarding the keys is a crucial activity, necessary for ensuring the ongoing integrity of the encryption implementation and its algorithms.

Cloud Security Alliance's Cloud Controls Matrix

A framework to enable cooperation between cloud consumers and cloud providers on demonstrating adequate risk management: an inventory of cloud service security controls arranged into separate security domains.

• Designed to provide fundamental security principles to guide cloud vendors and to assist prospective cloud customers in assessing the overall security risk of a provider
• Provides a controls framework in 16 domains that are cross-walked to other industry-accepted security standards, regulations, and controls frameworks to reduce audit complexity
• Provides mapping to industry-accepted security standards such as ISO 27001/27002, COBIT, and PCI DSS

Encryption Key

A special mathematical code that allows encryption hardware/software to encode and then decipher an encrypted message.

Platform as a Service (PaaS)

A way for customers to rent hardware, operating systems, storage, and network capacity over the Internet from a cloud service provider. Make sure your company's developers do not leave any backdoors (aka service hooks).

Key features:
• Support for multiple languages
• Multiple hosting environments
• Flexibility
• Choice, which reduces lock-in
• Ability to auto-scale

Masking

A weak form of confidentiality assurance that replaces the original information with asterisks or X's.

Management Plane

The management plane controls the entire infrastructure, and because parts of it are exposed to customers independent of network location, it is a prime resource to protect. It is managed through an API or console.

Cloud Backup Solutions

Enable enterprises or individuals to store their data and computer files on the Internet using a storage service provider rather than storing the data locally on a physical disk, such as a hard drive or tape backup.

Cloud Services Broker (CSB)

Typically a third-party entity or company that looks to extend or enhance value to multiple customers of cloud-based services through relationships with multiple cloud service providers. A CSB can handle additional tasks such as SSO, IAM, and key escrow; what it cannot do is DR/BCP. Note that brokers can assist with access control for customers, but they do not directly touch any production data.

• Consider a CASB to monitor data flowing into SaaS. It may still be helpful for some PaaS and IaaS, but rely more on existing policies and data repository security for those types of large migrations. A DLP tool can also be used.

• Service consumption: A cloud broker in the act of using a cloud service.
• Service provision: A cloud broker in the act of providing a cloud service.
• Service intermediation: An intermediation broker provides a service that directly enhances a given service delivered to one or more service consumers, essentially adding value on top of a given service to enhance some specific capability.
• Service aggregation: An aggregation brokerage service combines multiple services into one or more new services. It ensures that data is modeled across all component services and integrated, as well as ensuring the movement and security of data between the service consumer and multiple providers.
• Service arbitrage: Cloud service arbitrage is similar to cloud service aggregation. The difference between them is that the services being aggregated aren't fixed. Indeed, the goal of arbitrage is to provide flexibility and opportunistic choices for the service aggregator—for example, providing multiple email services through one service provider, or providing a credit-scoring service that checks multiple scoring agencies and selects the best score.

Federation

The technology of federation is much like that of Kerberos within an Active Directory domain: a user logs on once to a domain controller, is ultimately granted an access token, and uses that token to gain access to systems for which the user has authorization. The difference is that whereas Kerberos works well in a single domain, federated identities allow for the generation of tokens (authentication) in one domain and the consumption of those tokens (authorization) in another domain.

Tip: When connecting to external cloud providers, use federation, if possible, to extend existing identity management. Try to minimize silos of identities in cloud providers that are not tied to internal identities.

Types:

• SAML: An XML-based framework for communicating user authentication, entitlement, and attribute information. As its name suggests, SAML allows business entities to make assertions regarding the identity, attributes, and entitlements of a subject (an entity that is often a human user) to other entities, such as a partner company or another enterprise application.
• WS-Federation: According to the WS-Federation Version 1.2 OASIS standard, "this specification defines mechanisms to allow different security realms to federate, such that authorized access to resources managed in one realm can be provided to security principals whose identities are managed in other realms."
• OpenID Connect: According to the OpenID Connect FAQ, this is an interoperable authentication protocol based on the OAuth 2.0 family of specifications. According to OpenID, "Connect lets developers authenticate their users across websites and apps without having to own and manage password files. For the app builder, it provides a secure verifiable answer to the question: 'What is the identity of the person currently using the browser or native app that is connected to me?'"
• OAuth: OAuth is widely used for authorization services in web and mobile applications. According to RFC 6749, "The OAuth 2.0 authorization framework enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf."

Subset: Single sign-on (SSO) systems allow a single user authentication process across multiple IT systems or even organizations. SSO is a subset of federated identity management, as it relates only to authentication and technical interoperability.

Note: SSO should not be confused with reduced sign-on (RSO). RSO generally operates through some form of credential synchronization. Implementation of an RSO solution introduces security issues not experienced by SSO, because the nature of SSO eliminates usernames and other sensitive data from traversing the network. The foundation of federation relies on the existence of an identity provider; therefore, RSO has no place in a federated identity system.

Data Masking

A method of creating a structurally similar but inauthentic version of an organization's data that can be used for purposes such as software testing and user training. It supports secure remote access, enforcing least privilege, and testing data in sandboxed environments.

Primary methods:

• Static: In static masking, a new copy of the data is created with the masked values. Static masking is typically efficient when creating clean nonproduction environments.
• Dynamic: Dynamic masking, sometimes referred to as on-the-fly masking, adds a layer of masking between the application and the database. The masking layer is responsible for masking the information in the database on the fly when the presentation layer accesses it. This type of masking is efficient when protecting production environments. It can hide the full credit card number from customer service representatives, while the data remains available for processing.

Common approaches to data masking include these:

• Random substitution: The value is replaced (or appended) with a random value.
• Algorithmic substitution: The value is replaced (or appended) with an algorithm-generated value. (This typically allows for two-way substitution.)
• Shuffle: This shuffles different values from the data set, usually from the same column.
• Masking: This uses specific characters to hide certain parts of the data. It usually applies to credit card data formats: XXXX XXXX XX65 5432.
• Deletion: This simply uses a null value or deletes the data.
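The masking approach for card numbers can be sketched as a small function that hides all but the trailing digits of a primary account number; the default of six visible digits and the group-of-four formatting match the XXXX XXXX XX65 5432 example.

```python
def mask_pan(pan: str, visible: int = 6, mask_char: str = "X") -> str:
    """Hide all but the last `visible` digits of a card number."""
    digits = "".join(c for c in pan if c.isdigit())
    masked = mask_char * (len(digits) - visible) + digits[-visible:]
    # Re-group in blocks of four to match the usual card format.
    return " ".join(masked[i:i + 4] for i in range(0, len(masked), 4))

assert mask_pan("4111 2222 3365 5432") == "XXXX XXXX XX65 5432"
```

This is the dynamic-masking idea in miniature: the presentation layer sees the masked string while the stored value stays intact for processing.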

TCI Reference Architecture

A methodology and a set of tools that enables security professionals to leverage a common set of solutions that fulfill their common needs to be able to assess where their internal IT and their cloud providers are in terms of security capabilities and to plan a roadmap to meet the security needs of their business.

Application Programming Interfaces (APIs)

A set of routines, standards, protocols, and tools for building software applications to access a web-based software application or web tool. APIs come in multiple formats, two of which follow:
■■ Representational State Transfer (REST): A software architecture style consisting of guidelines and best practices for creating scalable web services. Supports multiple data formats (XML, JSON, YAML) and offers good scalability and performance.
■■ Simple Object Access Protocol (SOAP): A protocol specification for exchanging structured information in the implementation of web services in computer networks. Uses only XML, is comparatively slow, and does not scale well.
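To make the REST/SOAP contrast concrete, the sketch below builds the same request as a plain JSON body and as an XML envelope. This is an assumption-laden illustration: a real SOAP envelope carries namespaces and headers, omitted here for brevity, and the operation name `getBalance` is invented.

```python
import json
import xml.etree.ElementTree as ET

# REST-style body: plain JSON, compact and directly parseable
rest_body = json.dumps({"userId": 42, "action": "getBalance"})

# SOAP-style body: the same request wrapped in an XML envelope
envelope = ET.Element("Envelope")
body = ET.SubElement(envelope, "Body")
request = ET.SubElement(body, "getBalance")
ET.SubElement(request, "userId").text = "42"
soap_body = ET.tostring(envelope, encoding="unicode")

print(rest_body)
print(soap_body)
```

Even in this stripped-down form, the XML envelope is noticeably more verbose than the JSON payload, which is part of why REST services tend to be lighter on the wire.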

Cloud Application Management for Platforms (CAMP)

A specification designed to ease management of applications — including packaging and deployment — across public and private cloud computing platforms.

Application Normative Framework (ANF)

A subset of the ONF that contains only the information required for a specific business application to reach the targeted level of trust. Example: a container for an application's security components and best practices, cataloged and leveraged by the organization.

Sandbox

A testing environment that isolates untested code changes and outright experimentation from the production environment or repository, in the context of software development, including web development and revision control.

Cloud Backup Service Provider

A third-party entity that manages and distributes remote, cloud-based data backup services and solutions to customers from a central data center.

Cloud Computing

A type of computing, comparable to grid computing, that relies on sharing computing resources rather than having local servers or personal devices handle applications. Cloud-specific risks include but are not limited to the following:
■■ Management plane breach: Arguably the most important risk is a management plane (management interface) breach. Malicious users, whether internal or external, can affect the entire infrastructure that the management interface controls.
■■ Resource exhaustion: Because cloud resources are shared by definition, resource exhaustion represents a risk to customers. This can play out as being denied access to resources already provisioned or as the inability to increase resource consumption. Examples include a sudden lack of CPU or network bandwidth, which can result from overprovisioning to tenants by the CSP. Related to resource exhaustion are the following:
■■ Denial-of-service (DoS) attacks, where a common network or other resource is saturated, leading to starvation of users
■■ Traffic analysis
■■ Manipulation or interception of data in transit
■■ Isolation control failure: Resource sharing across tenants typically requires the CSP to implement isolation controls. Isolation failure refers to the failure or nonexistence of these controls. Examples include one tenant's VM instance accessing or affecting instances of another tenant, failure to limit one user's access to the data of another user (in a software as a service [SaaS] solution), and entire IP address blocks being blacklisted as the result of one tenant's activity.
■■ Insecure or incomplete data deletion: Data erasure in most OSs is implemented by just removing directory entries rather than by reformatting the storage used. This places sensitive data at risk when that storage is reused, due to the potential for recovery and exposure of that data.
■■ Control conflict risk: In a shared environment, controls that lead to more security for one stakeholder (blocking traffic) may make it less secure for another (loss of visibility).
■■ Software-related risks: Every CSP runs software, not just the SaaS providers. All software has potential vulnerabilities. From the customer's perspective, control is transferred to the CSP, which can mean enhanced security and risk awareness, but the ultimate accountability for compliance still falls to the customer.

Cloud Server Hosting

A type of hosting in which hosting services are made available to customers on demand via the Internet. Rather than being provided by a single server or virtual server, cloud server hosting services are provided by multiple connected servers that comprise a cloud.

Security Assertion Markup Language (SAML)

An XML-based open standard for exchanging authentication and authorization data between security domains.

Federated Identity Management

An arrangement that can be made among multiple enterprises that lets subscribers use the same identification data to obtain access to the networks of all enterprises in the group.
Trusted third-party model:
■■ Identity provider = the third party; it issues and manages the identities for all the users in all organizations in the federation.
■■ Relying party = the various member organizations; resource providers that share resources based on approval from the third party.

Homomorphic Encryption

Enables processing of encrypted data without the need to decrypt the data. It allows the cloud customer to upload data to a cloud service provider for processing without the requirement to decipher the data first.
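A toy illustration of the principle, using textbook RSA's multiplicative property: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts, so the "cloud" computes on data it cannot read. This shows only partial homomorphism with insecure toy parameters; real homomorphic encryption schemes (and key sizes) are far more involved.

```python
# Textbook RSA with deliberately tiny parameters -- illustration only, not secure
p, q = 61, 53
n = p * q            # modulus: 3233
e = 17               # public exponent
d = 2753             # private exponent: inverse of e mod (p-1)(q-1)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

m1, m2 = 6, 7
# The untrusted party multiplies ciphertexts without ever decrypting them
c_product = (encrypt(m1) * encrypt(m2)) % n
assert decrypt(c_product) == (m1 * m2) % n   # 42, computed on encrypted data
```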

Cloud Data Architect

Ensures the various storage types and mechanisms utilized within the cloud environment meet and conform to the relevant SLAs and that the storage components are functioning according to their specified requirements.

Cloud Developer

Focuses on development for the cloud infrastructure itself. This role can vary from client tools or solutions engagements, through to systems components.

Digital Rights Management (DRM)

Focuses on security and encryption to prevent unauthorized copying and to limit distribution to only those who pay. DRM + DLP = knockout combo.
• Full DRM: Traditional, full digital rights management using an existing tool; for example, applying rights to a file before storing it in the cloud service. As mentioned, it may break cloud provider features, such as browser preview or collaboration, unless there is some sort of integration (which is rare at the time of this writing).
• Provider-based control: The cloud platform may be able to enforce controls very similar to full DRM by using native capabilities. For example, user/device/view versus edit: a policy that allows only certain users to view a file in a web browser, while other users can download and/or edit the content. Some platforms can even tie these policies to specific devices, not just to a user.

Vendor Lock-in

Highlights where a customer may be unable to leave, migrate, or transfer to an alternate provider due to technical or non-technical constraints. Tips: Avoid proprietary data formats, ensure there are no physical limitations to moving, ensure favorable contract terms to support portability.

Personally Identifiable Information (PII)

Information that can be traced back to an individual user, e.g., name, postal address, or e-mail address. Personal user preferences tracked by a website via a cookie are also considered personally identifiable when linked to other personally identifiable information provided by the user online.

Redundant Array of Inexpensive Disks (RAID)

Instead of using one large disk to store data, many smaller (and cheaper) disks can be used. An approach to using many low-cost drives as a group to improve performance while also providing a degree of redundancy that makes the chance of data loss remote.
RAID Level 0 (stripe set)
RAID Level 1 (mirror)
RAID Level 5 (stripe with parity)
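The redundancy in RAID 5 comes from XOR parity: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. A minimal sketch:

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together (RAID 5 parity is the XOR of the data blocks)."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

d0, d1 = b"AAAA", b"BBBB"          # two data blocks in a stripe
parity = xor_blocks(d0, d1)        # written to a third drive

# If the drive holding d1 fails, rebuild it from the surviving block and parity
rebuilt = xor_blocks(d0, parity)
assert rebuilt == d1
```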

All-or-Nothing-Transform with Reed-Solomon (AONT-RS)

Integrates the AONT and erasure coding. This method first encrypts and transforms the information and the encryption key into blocks in a way that the information cannot be recovered without using all the blocks, and then it uses an information dispersal algorithm (IDA) to split the blocks into m shares that are distributed to different cloud storage services (the same as in SSMS).

Demilitarized Zone (DMZ)

Isolates network elements such as e-mail servers that, because they can be accessed from untrusted networks, are exposed to external attacks.

Object Storage

Objects (files) are stored with additional metadata (content type, redundancy required, creation date, etc.). These objects are accessible through APIs and potentially through a web user interface.
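A minimal sketch of the object-storage model described above: an opaque payload plus its metadata, addressed by key rather than by file path. All names here are illustrative, not any particular provider's API.

```python
# One stored object: payload plus metadata, retrieved by key via an API
obj = {
    "key": "reports/2024/q1.pdf",
    "data": b"%PDF-1.7 ...",                  # opaque payload bytes
    "metadata": {
        "content_type": "application/pdf",
        "redundancy": "3x",
        "created": "2024-01-15T09:30:00Z",
    },
}

bucket = {obj["key"]: obj}                    # the store maps keys to objects
fetched = bucket["reports/2024/q1.pdf"]       # API-style access by key
print(fetched["metadata"]["content_type"])
```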

Federal Information Processing Standard (FIPS) 140-2

Primary goal is to accredit and distinguish secure and well-architected cryptographic modules produced by private-sector vendors who seek to have their solutions and services certified for use in regulated industries that collect, store, transfer, or share data deemed "sensitive" but not classified. Four levels:
■■ Security Level 1: The lowest level of security. To meet Level 1 requirements, basic cryptographic module requirements are specified for at least one approved security function or approved algorithm. Encryption on a PC board is an example of a Level 1 rating.
■■ Security Level 2: Enhances the physical security mechanisms required at Level 1 and requires capabilities to show evidence of tampering, including tamper-proof locks on perimeter and internal covers to prevent unauthorized physical access to encryption keys.
■■ Security Level 3: Builds on Levels 1 and 2 by preventing an intruder from gaining access to information and data held within the cryptographic module. Additionally, physical security controls at Level 3 should move toward detecting access attempts and responding appropriately to protect the cryptographic module.
■■ Security Level 4: The highest rating. Security Level 4 provides mechanisms giving complete protection around the cryptographic module, with the intent of detecting and responding to all unauthorized attempts at physical access. Upon detection, all plaintext critical security parameters (also known as CSPs, not to be confused with cloud service providers) are immediately zeroized. Security Level 4 undergoes rigid testing to ensure its adequacy, completeness, and effectiveness.

ISO/IEC 27034-1

Represents an overview of application security. It introduces definitions, concepts, principles, and processes involved in application security.

Authentication

The act of identifying or verifying the eligibility of a station, originator, or individual to access specific categories of information. Typically, a measure designed to protect against fraudulent transmissions by establishing the validity of a transmission, message, station, or originator.

Anonymization

The act of permanently and completely removing personal identifiers from data, such as converting personally identifiable information (PII) into aggregated data.

Non-Repudiation

The assurance that a specific author actually did create and send a specific item to a specific recipient, and that it was successfully received. With assurance of non-repudiation, the sender of the message cannot later credibly deny having sent the message, nor can the recipient credibly claim not to have received it.

Storage Cloud

The collection of multiple distributed and connected resources responsible for storing and managing data online in the cloud.

Obfuscation

The convoluting of code to such a degree that even if the source code is obtained, it is not easily decipherable.

Cloud Provisioning

The deployment of a company's cloud computing strategy, which typically first involves selecting which applications and services will reside in the public cloud and which will remain on-site behind the firewall or in the private cloud.

Key Management

The generation, storage, distribution, deletion, archiving, and application of keys in accordance with a security policy.
■■ Access to the keys: Leading practices, coupled with regulatory requirements, may set specific criteria for key access, along with restricting or not permitting access to keys by CSP employees or personnel.
■■ Key storage: Secure storage for the keys is essential to safeguarding the data. In traditional in-house environments, keys could be stored in secure dedicated hardware. This may not always be possible in cloud environments.
■■ Backup and replication: The nature of the cloud results in data backups and replication across a number of different formats. This can affect the ability to maintain and manage long- and short-term key management effectively.

Vertical Cloud Computing

The optimization of cloud computing and cloud services for a particular vertical (e.g., a specific industry) or specific-use application.

Crypto-shredding

The process of deliberately destroying the encryption keys that were used to encrypt the data originally; the most effective sanitization method for cloud services. Cryptographic sanitization helps reduce the risk of data remanence. Overwriting is the second option; degaussing is not an option in the cloud.
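A sketch of the idea: only ciphertext ever reaches cloud storage, so destroying the key sanitizes every copy at once. The XOR transform below is a stand-in for a real cipher such as AES (which would come from a vetted library, not stdlib); it illustrates the flow only.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR transform -- a stand-in for a real cipher such as AES."""
    return bytes(b ^ k for b, k in zip(data, key))

record = b"card=4111111111111111"
key = secrets.token_bytes(len(record))    # one key per record, kept out of the cloud
stored = xor_cipher(record, key)          # only ciphertext is written to cloud storage

assert xor_cipher(stored, key) == record  # while the key exists, data is recoverable

# Crypto-shredding: destroy the key; the ciphertext left on disks, replicas,
# and backups is now unrecoverable noise, regardless of where copies live.
key = None
```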

Cloud Enablement

The process of making available one or more of the following services and infrastructures to create a public cloud-computing environment: cloud provider, client, and application.

Tokenization

The process of replacing sensitive data with unique identification symbols that retain all the essential information about the data without compromising its security. Requires two distinct databases: one holds the raw, original data, and the second (the token database) holds the tokens that map to the original data. Tokenization can assist with each of these:
■■ Complying with regulations or laws
■■ Reducing the cost of compliance
■■ Mitigating risks of storing sensitive data and reducing attack vectors on that data
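A minimal sketch of the two-database mapping, with an in-memory dict standing in for the protected token vault (names are illustrative):

```python
import secrets

token_vault: dict[str, str] = {}   # token -> original value (the second, protected database)

def tokenize(value: str) -> str:
    """Replace a sensitive value with a random token; the mapping lives only in the vault."""
    token = "tok_" + secrets.token_hex(8)
    token_vault[token] = value
    return token

def detokenize(token: str) -> str:
    """Look the original value back up -- only systems with vault access can do this."""
    return token_vault[token]

t = tokenize("4111 1111 1111 1111")
assert t.startswith("tok_")                     # token is random, reveals nothing
assert detokenize(t) == "4111 1111 1111 1111"   # vault restores the original
```

Because the token is random rather than derived from the data, an attacker who steals the application database holding tokens gains nothing without also compromising the vault.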

Dynamic Application Security Testing (DAST)

The process of testing an application or software product in an operating state (while it is being executed in memory in an operating system). Examples: runtime testing, user teams performing executable testing, and black-box testing.

Cloud Migration

The process of transitioning all or part of a company's data, applications, and services from on-site premises behind the firewall to the cloud, where the information can be provided over the Internet on an on-demand basis.

Corporate Governance

The relationship between the shareholders and other stakeholders in the organization versus the senior management of the corporation.

