Cloud Aces 3.0


Distributed DoS (DDoS) attack

A Distributed DoS (DDoS) attack is a variant of the DoS attack in which several systems launch a coordinated, simultaneous DoS attack on their targets. A DDoS attack causes denial of service to the users of one or more targeted systems. These attacks can be controlled by imposing restrictions that limit network resource consumption, disabling unused ports and network services, installing the latest relevant security patches, and deploying firewalls to prevent traffic flooding.
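As a hedged illustration of the "limit network resource consumption" control, the sketch below shows a minimal token-bucket rate limiter in Python. The class, rate, and burst size are illustrative assumptions, not part of any particular firewall product.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: requests beyond the allowed rate are
    dropped, which caps how much traffic a single source can consume."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # tokens refilled per second
        self.capacity = burst           # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True                 # request is served
        return False                    # request is dropped (rate limit reached)

# Usage: allow at most 100 requests per second with a burst of 20 per client.
limiter = TokenBucket(rate_per_sec=100, burst=20)
if not limiter.allow():
    pass  # drop or queue the request
```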

Management interface:

Management interface: It enables a consumer to control the use of a rented service. The management interface is a self-service interface that enables a consumer to monitor, modify, start, and stop rented service instances without manual interaction with the service provider. It also enables consumers to set up the desired functional interface. Based on the service model (IaaS, PaaS, or SaaS), the management interface presents different management functions. For IaaS, the management interface enables consumers to manage their use of infrastructure resources. For PaaS, the management interface enables managing the use of platform resources. The SaaS management interface enables consumers to manage their use of business applications.

Continuous Pipeline: (Modern Apps)

Modern applications adopt the processes, practices, tools, and cultural mindset of DevOps. The continuous pipeline of integration, deployment, and delivery relies on continuous communication through tight feedback loops, continuous improvement through agile development, and a continuously monitored and automated platform. When these pieces are put together, modern applications deliver continuous innovation and value to the customer and the business.

Intermittent unavailability

It is a recurring unavailability that is characterized by an outage, then availability again, then another outage, and so on.

Top drivers for private cloud

Security, allocating costs, flexibility, scaling, control, compliance, risk reduction, and leadership push.

What is cloud computing?

"a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources e.g., networks, servers, storage, applications, and services; that can be rapidly provisioned and released with minimal management effort or service provider interaction."

Modern Architecture - 12 Factor Apps

1. Codebase: Put all the code in a single repository that belongs to a version control system.
2. Dependencies: Define the dependencies of the application, automate the collection of the dependent components, and isolate the dependencies to minimize their impact on the application.
3. Config: Externalize the values the application uses to connect to things that might change. Applications at times store config as constants in the code, but the 12-factor app requires strict separation of config from code (a sketch of factors 3 and 7 follows this list).
4. Backing services: A backing service is any service the application accesses over the network during its operation; example services include datastores, messaging/queueing systems, and caching systems. Treat backing services the same as attached resources, accessed via a URL or other locator stored in the config.
5. Build, release, and run: During the build stage, the code is converted into an executable bundle of scripts, assets, and binaries known as a build. The release stage takes the build and combines it with the current config. The resulting release contains both the build and the config and is ready for immediate execution. The run stage runs the application in the execution environment. The 12-factor application uses strict separation between the build, release, and run stages. This separation exists because the build stage requires a lot of work and developers manage it, whereas the run stage should be as simple as possible so that the application runs well and, if a server gets restarted, starts up again on launch without the need for human intervention.
6. Processes: Run the application as one or more stateless processes. Any data that requires persistence must be stored in a stateful backing service, typically a database. Usually the application runs on many servers to provide load balancing and fault tolerance. The right approach is to store the state of the system in the database and shared storage, not on the individual server instances. If a server goes down for some reason, another server can handle the traffic.
7. Port binding: Access services through well-defined URLs or ports. The 12-factor application is self-contained and does not rely on runtime creation of a web-facing service. The application exports HTTP as a service by binding to a port and listening to requests coming in on that port. For example, by using the port binding recommendations, it is possible to point to another service simply by pointing to another URL. That URL could be on the same physical machine or at a public cloud service provider.
8. Concurrency: Scale out via the process model. When an application runs, many processes are performing various tasks. By running processes independently, the application scales better. In particular, it allows doing more work concurrently by dynamically adding extra servers.
9. Disposability: Maximize robustness with fast startup and graceful shutdown. Factor 6 (Processes) describes a stateless process that has nothing to preload and nothing to store on shutdown. This method enables applications to start and shut down quickly. The application should be robust against crashing; if it does crash, it should always be able to start back up cleanly.
10. Dev/prod parity: Keep development and production environments, and everything in between, as identical as possible. In recent times, organizations have a much more rapid cycle between developing a change to the application and deploying that change into production. For many organizations, this happens in a matter of hours. To facilitate that shorter cycle, it is desirable to keep a developer's local environment as similar as possible to production.
11. Logs: Treat logs as event streams. This method enables orchestration and management tools to parse these event streams and create alerts. Furthermore, it makes it easier to access and examine logs for debugging and management of the application.
12. Admin processes: Ensure that all administrative activities become defined processes that can easily be repeated by anyone. Do not leave anything that must be completed to operate or maintain the application inside someone's head. If it must be completed as part of an administrative activity, build a process that anyone can perform.
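A minimal sketch of factor 3 (Config) and factor 7 (Port binding) in Python, assuming illustrative environment variable names and a default port: configuration is read from the environment rather than hard-coded, and the app exports HTTP by binding to a port.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Factor 3 (Config): values that vary between deploys come from the environment,
# never from constants in the code. Names and defaults here are illustrative.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
PORT = int(os.environ.get("PORT", "8080"))

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

# Factor 7 (Port binding): the app is self-contained and exports HTTP as a
# service by binding to the configured port and listening for requests.
if __name__ == "__main__":
    HTTPServer(("0.0.0.0", PORT), Handler).serve_forever()
```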

The attributes of SDI are:

1. Abstraction and pooling: SDI abstracts and pools IT resources across heterogeneous infrastructure. IT resources are pooled to serve multiple users or consumers using a multitenant model. Multitenancy enables multiple consumers to share the pooled resources, which improves utilization of the resource pool. Resources from the pool are dynamically assigned and reassigned according to consumer demand.
2. Automated, policy-driven provisioning including data protection: In the SDI model, IT services, including data protection, are dynamically created and provisioned from available resources based on defined policy. If the policy changes, the environment dynamically and automatically responds with the new requested service level.
3. Unified management: Traditional multivendor, siloed environments require independent management, which is complex and time consuming. SDI provides a unified management interface that presents an abstract view of the IT infrastructure. Unified management provides a single control point for the entire infrastructure across all physical and virtual resources.
4. Self-service: SDI enables automated provisioning and self-service access to IT resources. It enables organizations to allow users to select services from a self-service catalog and self-provision them.
5. Metering: Measures the usage of resources per user, and a metering system reports the values. Metering helps in controlling and optimizing resource usage and generating bills for the utilized resources.
6. Open and extensible: An SDI environment is open and easy to extend, which enables adding new capabilities. An extensible architecture enables integrating multivendor resources, external management interfaces, and applications into the SDI environment by using APIs.

The common functions of SDN controller are:

1. Network discovery: The SDN controller interacts with network components to discover information about their configuration, topology, capacity, utilization, and performance. Discovery provides visibility into the network infrastructure and helps bring the components under the control and management of the SDN controller.
2. Network component management: The SDN controller configures network components to maintain interconnections among the components and to isolate the network traffic between user groups or tenants. Network traffic isolation is provided through the creation of virtual networks.
3. Network flow management: The SDN controller controls the network traffic flow between the components and chooses the optimal path for network traffic.

Nine benefits of converged infrastructure

1. Simplicity: Many of the benefits of converged infrastructure come from the simplicity and standardization of working with an integrated platform rather than multiple technology stacks. The entire process of deploying infrastructure is simpler and easier, covering planning, purchasing, installation, upgrades, troubleshooting, performance management, and vendor management.
2. Performance: In a highly virtualized environment, server utilization may already be high. Converged infrastructure extends this efficiency to storage and network port utilization and enables better performance optimization of the overall infrastructure.
3. Availability: Greater reliability means higher availability of infrastructure, applications, and services. Converged infrastructure enables IT to meet its service-level agreements and the business to meet its performance promises to customers.
4. Speed: Converged infrastructure can be deployed in record time. If the new infrastructure is for application development, it can be spun up almost instantaneously, which means that developers can do their jobs faster. IT can respond to business requests with, "You can have it now," rather than, "You can have it in a few months." And the time to market of technology-based offerings improves.
5. Scalability: With converged infrastructure, it is also easier to expand or shrink available resources with changing workloads and requirements.
6. Staffing: Converged infrastructure requires less IT staff to operate and manage it. It reduces the cost spent and increases the ability to support business and infrastructure growth without adding staff. If IT professionals spend less time on the mechanics of infrastructure integration and management, they have more time for value-adding activities, and they can be increasingly customer-facing and responsive.
7. Risk: Converged infrastructure reduces infrastructure supply chain risk through procurement control, testing, and certification of equipment. It reduces operational risk through robust and comprehensive tools for infrastructure control, including security, and automation to minimize human error. Converged infrastructure also reduces risk to business continuity through high availability and reliability, less disruptive upgrades, and a solid platform for disaster recovery.
8. Innovation: Converged infrastructure facilitates business innovation in two powerful ways. One, it provides a simplified path to the cloud; a business can experiment with and use a vast and growing array of innovative and specialized software and services. Two, when software developers have computing environments on demand, they can experiment more, prototype more, iterate with their business partners, and discover superior business solutions.
9. Cost: The cost advantages of converged infrastructure can be sliced and diced many ways, but you should expect to realize and measure savings in four basic areas: procurement, physical operations, infrastructure management, and staff.

The functions of the SDS controller are:

1. Storage discovery: The SDS controller discovers the various types of physical storage systems available in a data center, gathering data about the components to bring them under its control and management. Data collected during discovery includes information on the storage pools and the storage ports for each storage system.
2. Resource abstraction and pooling: The SDS controller abstracts the physical storage systems into virtual storage systems. Administrators configure the virtual storage pools according to policy, and the SDS controller also enables an administrator to define storage services for the end users. The SDS controller commonly provides three types of interfaces to configure and monitor the SDS environment and provision virtual storage resources: the command line interface (CLI), the graphical user interface (GUI), and the application programming interface (API).
3. Service provisioning: The defined storage services are typically visible and accessible to the end users through a service catalog. The service catalog allows the end user to specify the compute system for which virtual storage must be provisioned, and the virtual storage system and virtual storage pool from which the storage has to be derived. The SDS controller automates the storage provisioning tasks and delivers virtual storage resources based on the requested services.

Backup

A backup is an additional copy of production data, created and retained for the sole purpose of recovering lost or corrupted data. With growing business and regulatory demands for data storage, retention, and availability, cloud service providers face the task of backing up an ever-increasing amount of data. This task becomes more challenging with the growth of data, reduced IT budgets, and less time available for taking backups. Moreover, service providers need fast backup and recovery of data to meet their service level agreements. The amount of data loss and downtime that a business can endure, in terms of RPO and RTO, are the primary considerations in selecting and implementing a specific backup strategy. RPO specifies the time interval between two backups and the amount of data loss a customer can tolerate. For example, if a service requires an RPO of 24 hours, the data needs to be backed up every 24 hours. RTO relates to the time taken by the recovery process. To meet the defined RTO, the service provider should choose the appropriate backup media or backup target to minimize the recovery time.
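A small sketch of the RPO arithmetic described above, with illustrative figures.

```python
# RPO sets the maximum tolerable data loss, which bounds the backup interval.
rpo_hours = 24                           # the service requires an RPO of 24 hours
max_backup_interval_hours = rpo_hours    # back up at least this often
backups_per_week = 7 * 24 / max_backup_interval_hours
print(backups_per_week)                  # 7.0 -> at least one backup every 24 hours
```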

Cloud Service Lifecycle

A cloud service lifecycle describes the life of a cloud service, from planning through optimizing the service, and aligns with the business strategy through the design and deployment of the service. From inception to retirement, a cloud service goes through four key phases: service planning, service creation, service operation, and service termination. These phases are the basic steps; based on business needs, organizations can add or modify steps as well.

Cloud bursting:

A common usage scenario of a hybrid cloud is "cloud bursting", in which an organization uses a private cloud for normal workloads, but optionally accesses a public cloud to meet transient higher workload requirements. Cloud bursting allows consumers to temporarily obtain public cloud resources in a convenient and cost-effective manner, and to enjoy a greater elasticity than their own infrastructure would permit. For example, an application may encounter a surge in workload during certain periods and would require additional resources to handle the workload efficiently. The application can get additional resources from a public cloud for a limited time period to handle the higher workload.
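A hedged sketch of a cloud-bursting placement decision: when private-cloud utilization would cross a threshold, the extra capacity is obtained from a public cloud. The function name and the threshold are illustrative, not a specific provider's API.

```python
BURST_THRESHOLD = 0.80   # burst when the private cloud would exceed 80% utilization (illustrative)

def placement(private_used: float, private_capacity: float, requested: float) -> str:
    """Return where a new workload should run: the private cloud under normal
    load, or the public cloud during transient peaks (cloud bursting)."""
    utilization = (private_used + requested) / private_capacity
    return "private" if utilization <= BURST_THRESHOLD else "public"

# Example: 70 of 100 units in use; a request for 20 more units bursts to the public cloud.
print(placement(private_used=70, private_capacity=100, requested=20))  # "public"
```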

What is community cloud

A community cloud is a cloud infrastructure that is set up for the sole use of a group of organizations with common goals or requirements. The organizations participating in the community typically share the cost of the community cloud service. If various organizations operate under common guidelines and have similar requirements, they can all share the same cloud infrastructure and lower their individual investments. Since the costs are shared by fewer consumers than in a public cloud, this option may be more expensive. However, a community cloud may offer a higher level of control and protection against external threats than a public cloud. There are two variants of a community cloud: on-premise and externally hosted.

data replication

A data replication solution is one of the key data protection solutions that enable organizations to achieve business continuity, high availability, and data protection. Data replication is the process of creating an exact replica of data. These replicas are used to restore and restart operations if data loss occurs. For example, if a production VM goes down, then the replica VM can be used to restart the production operations with minimum disruption. Based on business requirements, data can be replicated to one or more locations. For example, data can be replicated within a data center, between data centers, from a data center to a cloud, or between clouds.

Hybrid Cloud

A hybrid cloud is composed of two or more individual clouds, each of which can be a private, community, or public cloud. There can be several possible compositions of a hybrid cloud, as each constituent cloud may be one of the five variants discussed previously. As a result, each hybrid cloud has different properties in terms of parameters such as performance, cost, security, and so on. A hybrid cloud may change over time as component clouds join and leave. In a hybrid cloud environment, the component clouds are combined through the use of open or proprietary technology such as interoperable standards, architectures, protocols, data formats, application programming interfaces (APIs), and so on.

Cloud Charge Back Models

A list of common chargeback models, along with their descriptions, is provided below.
•Pay-as-you-go: Metering and pricing are based on the consumption of cloud resources by the consumers. Consumers do not pay for unused resources. For example, some cloud providers offer pricing on a monthly, hourly, or per-second basis of consumption, providing extreme flexibility and cost benefit to the customers.
•Subscription by time: Consumers are billed for a subscription period. The cost of providing a cloud service for the subscription period is divided among a predefined number of consumers. For example, in a private cloud, if three business units subscribe to a service that costs $60,000 a month to provide, then the chargeback per business unit is $20,000 for the month.
•Subscription by peak usage: Consumers are billed according to their peak usage of IT resources for a subscription period. For example, a provider may charge a consumer for their share of peak usage of network bandwidth.
•Fixed cost or prepay: Consumers commit upfront to the required cloud resources for a committed period, such as one year or three years. They pay a fixed charge periodically through a billing cycle for the service they use, regardless of the utilization of resources.
•User-based: Pricing is based on the identity of a user of the cloud service. In this model, the number of users logged in is tracked and billed based on that number.
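A minimal sketch of two of the chargeback models above, reusing the $60,000 and three-business-unit figures from the text; the pay-as-you-go rate is an illustrative assumption.

```python
def pay_as_you_go(hours_used: float, rate_per_hour: float) -> float:
    """Consumers pay only for what they consume; unused resources cost nothing."""
    return hours_used * rate_per_hour

def subscription_by_time(monthly_cost: float, subscribers: int) -> float:
    """The cost of providing the service for the period is split among subscribers."""
    return monthly_cost / subscribers

print(pay_as_you_go(hours_used=120, rate_per_hour=0.10))        # 12.0 (illustrative rate)
print(subscription_by_time(monthly_cost=60000, subscribers=3))  # 20000.0, as in the example above
```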

Private Cloud

A private cloud is a cloud infrastructure that is set up for the sole use of a particular organization. The cloud services implemented on the private cloud are dedicated to consumers such as the departments and business units within the organization. Many organizations may not wish to adopt public clouds as they are accessed over the open Internet and used by the general public. With a public cloud, an organization may have concerns related to privacy, external threats, and lack of control over the IT resources and data. When compared to a public cloud, a private cloud offers organizations a greater degree of privacy and control over the cloud infrastructure, applications, and data. The private cloud model is typically adopted by larger-sized organizations that have the resources to deploy and operate private clouds.

Public Cloud

A public cloud is a cloud infrastructure deployed by a provider to offer cloud services to the general public and/or organizations over the Internet. In the public cloud model, there may be multiple tenants who share common cloud resources. A provider typically has default service levels for all consumers of the public cloud. The provider may migrate a consumer's workload at any time, to any location. Some providers may optionally provide features that enable a consumer to configure their account with specific location restrictions. Public cloud services may be free, subscription-based or provided on a pay-per-use model.

Resource Pooling

A resource pool is a logical abstraction of the aggregated computing resources, such as processing power, memory capacity, storage, and network bandwidth, that are managed collectively. Cloud services obtain computing resources from resource pools. Resources from the resource pools are dynamically allocated according to consumer demand, up to a limit defined for each cloud service. The allocated resources are returned to the pool when they are released by consumers, making them available for reallocation. For example, resources from a resource pool may be allocated to service A and service B, which are assigned to consumer A and consumer B respectively.
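A minimal sketch of dynamic allocation from a resource pool with a per-service limit, as described above; the class and the capacity figures are illustrative.

```python
class ResourcePool:
    """Aggregated capacity from which cloud services allocate and release resources."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.allocated = {}                     # service name -> units currently in use

    def allocate(self, service: str, units: int, service_limit: int) -> bool:
        used = self.allocated.get(service, 0)
        free = self.capacity - sum(self.allocated.values())
        # Allocation succeeds only within the pool's free capacity and the
        # limit defined for that cloud service.
        if units <= free and used + units <= service_limit:
            self.allocated[service] = used + units
            return True
        return False

    def release(self, service: str, units: int) -> None:
        # Released resources return to the pool and become available for reallocation.
        self.allocated[service] = max(0, self.allocated.get(service, 0) - units)

pool = ResourcePool(capacity=100)
pool.allocate("service_A", units=30, service_limit=50)   # consumer A's service
pool.release("service_A", units=10)
```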

Snapshot

A snapshot is a virtual copy of a set of files, a VM, or a LUN as they appeared at a specific point-in-time (PIT). A point-in-time copy of data contains a consistent image of the data as it appeared at a given point in time. Snapshots can establish recovery points in just a small fraction of time and can significantly reduce RPO by supporting more frequent recovery points. If a file is lost or corrupted, it can typically be restored from the latest snapshot data in just a few seconds.

A file system (FS) snapshot creates a copy of a file system at a specific point-in-time, even while the original file system continues to be updated and used normally. An FS snapshot is a pointer-based replica that requires a fraction of the space used by the production FS. It uses the Copy on First Write (CoFW) principle to create snapshots. When a snapshot is created, a bitmap and a blockmap are created in the metadata of the snapshot FS. The bitmap is used to keep track of blocks that are changed on the production FS after the snapshot creation. The blockmap is used to indicate the exact address from which the data is to be read when the data is accessed from the snapshot FS. Immediately after the creation of the FS snapshot, all reads from the snapshot are actually served by reading the production FS. In a CoFW mechanism, if a write I/O is issued to the production FS for the first time after the creation of a snapshot, the I/O is held and the original data of the production FS corresponding to that location is moved to the snapshot FS. Then, the write is allowed to the production FS. The bitmap and the blockmap are updated accordingly. Subsequent writes to the same location do not initiate the CoFW activity. To read from the snapshot FS, the bitmap is consulted. If the bit is 0, then the read is directed to the production FS. If the bit is 1, then the block address is obtained from the blockmap and the data is read from that address on the snapshot FS. Read requests from the production FS work as normal.
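A simplified sketch of the CoFW bookkeeping described above: the bitmap tracks which production blocks changed after the snapshot, and the blockmap records where the preserved original data lives in the snapshot FS. Block-level details of a real file system are omitted, and the class name is illustrative.

```python
class CoFWSnapshot:
    """Copy-on-First-Write snapshot over a block-addressable 'production FS'."""

    def __init__(self, production: list):
        self.production = production
        self.bitmap = [0] * len(production)   # 1 = block changed since snapshot creation
        self.blockmap = {}                    # production block -> preserved original data

    def write(self, block: int, data) -> None:
        # First write to a block after snapshot creation: preserve the original
        # data for the snapshot before allowing the write (CoFW).
        if self.bitmap[block] == 0:
            self.blockmap[block] = self.production[block]
            self.bitmap[block] = 1
        self.production[block] = data         # subsequent writes skip the copy step

    def read_snapshot(self, block: int):
        # Bit 0: serve the read from the production FS; bit 1: from the preserved copy.
        return self.production[block] if self.bitmap[block] == 0 else self.blockmap[block]

fs = ["a", "b", "c"]
snap = CoFWSnapshot(fs)
snap.write(1, "B")
print(snap.read_snapshot(1))   # "b" -- the point-in-time image is preserved
```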

virtual machine (VM)

A virtual machine (VM) is a logical compute system with virtual hardware on which a supported guest OS and its applications run. A VM is created by a hosted or a bare-metal hypervisor installed on a physical compute system. An OS, called a "guest OS", is installed on the VM in the same way it is installed on a physical compute system. From the perspective of the guest OS, the VM appears as a physical compute system. A VM has a self-contained operating environment, comprising OS, applications, and virtual hardware, such as a virtual processor, virtual memory, virtual storage, and virtual network resources. As discussed previously, a dedicated virtual machine manager (VMM) is responsible for the execution of a VM. Each VM has its own configuration for hardware, software, network, and security. The VM behaves like a physical compute system, but does not have direct access either to the underlying host OS (when a hosted hypervisor is used) or to the hardware of the physical compute system on which it is created. The hypervisor translates the VM's resource requests and maps the virtual hardware of the VM to the hardware of the physical compute system. For example, a VM's I/O requests to a virtual disk drive are translated by the hypervisor and mapped to a file on the physical compute system's disk drive.

Cloud Portal

A cloud portal is a web portal that presents the service catalog and cloud interfaces, enabling consumers to order and manage cloud services. A cloud portal commonly presents the following elements: •Service catalog •Management interface •Link to functional interface

Account hijacking

Account hijacking refers to a scenario in which an attacker gains access to cloud consumers' accounts, using methods such as phishing or installing keystroke-logging malware on consumers' virtual machines, to conduct malicious or unauthorized activity. This threat can have a huge impact in a cloud environment because the services are accessed via the network and the resources are shared between multiple consumers. The cloud service instances might be used as a base to gain access and modify information or to redirect users to illegitimate sites.

Active-Active clustering:

Active-active clustering: In this type of clustering, the compute systems in a cluster are all active participants and run the same service for their clients. The active-active cluster balances requests for the service among the compute systems. If one of the compute systems fails, the surviving systems take over the load of the failed system. This method enhances both the performance and the availability of a service.

Active-passive clustering:

Active-passive clustering: In this type of clustering, the service runs on one or more active compute systems, and the passive compute system just waits for a failover. If the active compute system fails, the service that was running on it fails over to the passive compute system. Active-passive clustering does not provide the performance improvement that active-active clustering does.
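A hedged sketch contrasting the two clustering modes described above: active-active spreads requests across all live nodes, while active-passive fails the service over to the standby only when the active node fails. Real cluster managers add heartbeats, fencing, and quorum, which are omitted here.

```python
def active_active(nodes, request_id: str) -> str:
    """All nodes serve requests; surviving nodes absorb a failed node's load."""
    live = [n for n in nodes if n["up"]]
    target = live[hash(request_id) % len(live)]        # simple request balancing
    return target["name"]

def active_passive(active: dict, passive: dict) -> str:
    """The passive node waits and takes over only when the active node fails."""
    return active["name"] if active["up"] else passive["name"]

nodes = [{"name": "node1", "up": True}, {"name": "node2", "up": False}]
print(active_active(nodes, request_id="req-42"))            # served by node1 (only live node)
print(active_passive(active=nodes[1], passive=nodes[0]))    # node2 is down, so node1 takes over
```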

Administrative mechanisms

Administrative mechanisms include security and personnel policies or standard procedures to direct the safe execution of various operations. This mechanism includes having regulatory compliance in place to meet the requirements of the customers and government policies, and procedures for how data is handled in the cloud. It also includes contracts and SLAs, background verification of employees, and training to create awareness of the security standards and the best practices to prevent security breaches.

Cloud Roles

Adopting cloud enables digital transformation; therefore, new roles need to be created that perform tasks related to cloud services. Examples of tasks are service definition and creation, service administration and management, service governance and policy information, and service consumer management. Some of these tasks can be combined to become the responsibility of an individual or organizational role. A few examples of new roles required to perform tasks within a cloud environment include service manager, account manager, cloud architect, and service operations manager.
•A service manager is responsible for understanding consumers' needs and industry trends to drive an effective product strategy. The service manager ensures that IT delivers cost-competitive services that have the features that clients need. The service manager is also responsible for managing consumers' expectations of product offerings and serves as a key interface between clients and IT staff.
•A service account manager supports service managers in service planning, development, and deployment. The service account manager maintains day-to-day contact to ensure that consumers' needs are met. They also assist clients with demand planning and communicate service offerings.
•A cloud architect is responsible for creating detailed designs for the cloud infrastructure.
•The service operations manager is responsible for streamlining service delivery and execution. The service operations manager is also responsible for providing early warning of service issues, such as emerging capacity constraints or unexpected increases in cost. The service operations manager also coordinates with the architecture team to define technology roadmaps and ensure that service level objectives are met.
Apart from the above roles, other roles such as cloud engineer, DevOps engineer, and cloud administrator are also required.
•A cloud engineer is responsible for designing, planning, managing, maintaining, and supporting the cloud infrastructure.
•A cloud automation engineer is responsible for the design and implementation of cloud provisioning processes for delivering cloud services.
•A vendor relationship manager is responsible for understanding the service needs of LOBs, determining which needs are good candidates for a CSP, and working with service managers.
•A DevOps engineer is responsible for the development, testing, and operation of an application.

Advanced Persistent Threats (APTs)

Advanced Persistent Threats (APTs) are attacks in which an unauthorized person gains access to a company's network to steal data such as intellectual property.

Build-your-own(BYO)

Advantages: When an organization builds its own solution, it controls the complete system and can fully customize the solution to the business requirements. The software that is built into the solution is specific to the analyzed needs of the organization. Disadvantages: Building the solution consumes a lot of time, resources, and cost. It also takes a lot of time to understand the requirements and the technology and to learn the required skills. If the organization decides to build, it has to make sure that it keeps pace with technology developments in the IT sector. For example, if the IT era is all about virtualization, it should not build a traditional physical solution. Once the product is built, it has to be tested to make sure the requirements are satisfied. The solution needs to be built quickly, before the requirements change.

Buy:

Advantages: When an organization decides to buy a solution, it saves time, resources, and cost, because the solution is readily available in the market. It saves time on solution installation and deployment. The solution provider also supports the consumer with customer support at any time, to learn the solution and to resolve any issues. The solution is designed in such a way that development is ongoing to meet customer requirements. The solution is already configured and tested. Disadvantages: The rights to the solution's source code belong to the provider. Hence, the solution cannot be customized. The solution may not cover support for all demands or requirements, and the functionality may vary slightly. The buyer is dependent on the provider for any support or issue.

Agile

Agile is an iterative and incremental software development method. Examples of agile methods are scrum, extreme programming, lean development, and so on. The agile methodologies are effective at improving time to delivery, feature functionality, and quality. The agile methodologies are iterative, value quick delivery of working software for user review and testing, and are implemented within the development organization. Developers frequently build their own infrastructure as needed to investigate and test a particular product feature. These environments can be tweaked to fit the desires of the developer; however, transition to production availability requires operations to reproduce the developer's environment. The smooth transition from development into operations is affected in the following circumstances:
•The specific configuration of a development environment is undocumented
•The development environment conflicts with the configuration of another environment
•The development environment deviates from established standards
If a feature requires new equipment, there are often delays to accommodate budgets and maintenance windows. Although these problems are common between development and operations teams, overcoming them is a core principle of the DevOps practices. Transforming to the DevOps practices takes a clear vision and, more than anything else, commitment from employees and management. A DevOps culture in the cloud brings the following benefits:
•Faster application development and delivery to meet the business needs
•User demands that are quickly incorporated into the software
•Reduced cost for development, deployment, testing, and operations

Amazon CloudWatch

Amazon CloudWatch - A monitoring service for AWS cloud resources and the applications you run on AWS. It also collects and monitors log files, sets alarms, and automatically reacts to changes in your AWS resources.

Application Programming Interface (API)

An orchestrator commonly interacts with other software components and devices in a cloud infrastructure using application programming interfaces (APIs) defined for these components. An API is a source-code-based specification that software components use as an interface to communicate with each other. It specifies a set of system or component functions that can be called from a software component to interact with other software components.

Packaged applications:

An organization may migrate standard packaged applications, such as email and collaboration software, out of the private cloud to a public cloud. This frees up existing resources for higher value projects and applications. In some cases, the existing applications may have to be rewritten and/or reconfigured for the public cloud platform. However, dedicated hybrid cloud services enable existing applications to run in the hybrid cloud without the need to rewrite or re-architect them.

Dell AppAssure

AppAssure provides backup, replication, and recovery. Reduce downtime and cut RTOs from hours to minutes or seconds with AppAssure. It offers:
•Flexible recovery: Choose local, offsite, or disaster recovery. Recover to a virtual or physical machine. Restore at any level: a single file, message, or data object, or restore a complete machine.
•Fast, small or large-scale recovery: Recover anywhere, to any machine. Choose local or off-site recoveries to any available machines, even to dissimilar hardware.
•Reduce data loss: After a disaster, quickly recover from up to 288 daily incremental backups for nearly unlimited RPO granularity.
•Deduplicate for efficiency: AppAssure integrated deduplication and file compression cuts storage requirements by up to 80 percent.
•Access to remote installation services: Dell Remote Installation Services include installation and configuration options for customers that require assistance deploying AppAssure software.

AppNeta Performance Manager

AppNeta Performance Manager - Provides complete performance visibility into the usage, delivery, and experience of business-critical cloud and SaaS applications from the end user's perspective.

Insecure APIs

Application programming interfaces (APIs) are used extensively in cloud infrastructures to perform activities such as resource provisioning, configuration, monitoring, management, and orchestration. These APIs may be open or proprietary. The security of cloud services depends upon the security of these APIs. An attacker may exploit a vulnerability in an API to breach a cloud infrastructure's perimeter and carry out an attack. Therefore, APIs must be designed and developed following security best practices such as requiring authentication and authorization, encryption, and avoiding buffer overflows. Providers must perform security reviews of the APIs. Also, access to the APIs must be restricted to authorized users. These practices provide protection against both accidental and malicious attempts to bypass security.

Application Hardening

Applications are the most difficult part of the cloud environment to secure because of their complexity. Application hardening is a security practice followed during application development, with the goal of preventing the exploitation of vulnerabilities that are typically introduced during the development cycle. The most secure applications are those that have been built from the start with security in mind. Application architects and developers must focus on identifying the security policies and procedures to define what applications and application users are allowed to do, and what ports and services the applications can access. Many web applications use an authentication mechanism to verify the identity of a user. So, to secure the credentials from eavesdropping, the architects and developers must consider how the credentials are transmitted over the network, and how they are stored and verified. Implement ACLs to restrict which applications can access what resources and the type of operations the applications can perform on those resources. Applications dealing with sensitive data must be designed in such a way that the data remains safe and unaltered. It is also important to secure the third-party applications and tools that may be used, because if they are vulnerable, they can open the door to a greater security breach.

Retire:

Applications that are still running but have long outlasted their value to the organization. The Retire approach recommends removing these applications from active service. Those applications deemed to offer only marginal value can be consolidated to save on infrastructure and operations costs.

Modern Application Characteristics

Architecture, Collaboration, Continuous Pipeline, Multiplatform, Resilience, Scalability

Application development and testing

As discussed in the section on 'Cloud Computing Benefits', organizations require significant capital expenditure on IT resources while developing and testing new applications. Also, the applications need to be tested for scalability and under heavy workload, which might require a large amount of IT resources for a short period of time. Further, if the longevity of the application is limited, it would not justify the expenditure made in developing it. In such cases, organizations may use public cloud resources for the development and testing of applications, before incurring the capital expenditure associated with launching it. Once the organization establishes a steady-state workload pattern and the longevity of the application, it may choose to bring the application into the private cloud environment.

3 Cloud Security Concepts

Authentication is a process to ensure that cloud 'users' or cloud 'assets' are who they claim to be by verifying their identity credentials. The user has to prove their identity to the cloud provider to access the data stored in the cloud. A user may be authenticated using a single-factor or multifactor method. Single-factor authentication involves the use of only one factor, such as a password. Multifactor authentication uses more than one factor to authenticate a user.

Authorization is a process of determining the privileges that a user, device, or application has to access a particular service or resource in a cloud computing environment. For example, a user with administrator's privileges is authorized to access more services or resources compared to a user with non-administrator privileges. In Windows, authorization is performed using Access Control Lists (ACLs) to determine the privileges. An ACL is a list of access rights or privileges that a user has for the resources in a cloud environment. For example, the administrator can have 'read/write' access and a normal user can have 'read only' access. Authorization should be performed only if the authentication is successful.

Auditing refers to the logging of all transactions for assessing the effectiveness of security controls in a cloud computing environment. It helps to validate the behavior of the infrastructure components, and to perform forensics, debugging, and monitoring activities.
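A minimal sketch of the authorization step described above: after authentication succeeds, an ACL lookup decides whether the requested operation is permitted. The roles and rights are illustrative, taken from the read/write example in the text.

```python
# ACL: role -> set of access rights (illustrative values from the example above).
ACL = {
    "administrator": {"read", "write"},
    "user": {"read"},
}

def authorize(role: str, operation: str) -> bool:
    """Authorization check, performed only after authentication has succeeded."""
    return operation in ACL.get(role, set())

print(authorize("administrator", "write"))  # True  -- 'read/write' access
print(authorize("user", "write"))           # False -- 'read only' access
```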

Availability monitoring

Availability monitoring involves tracking availability of services and infrastructure components during their intended operational time. It helps to identify a failure of any component that may lead to service unavailability or degraded performance. Availability monitoring also helps an administrator to proactively identify failing services and plan corrective action to maintain SLA requirements.

Backup as a service

Backup as a service enables organizations to procure backup services on demand in the cloud. Organizations can build their own cloud infrastructure and provide backup services on demand to their employees/users. Some organizations prefer the hybrid cloud option for their backup strategy, keeping a local backup copy in their private cloud and using a public cloud for keeping their remote copy for DR purposes. For providing backup as a service, the organizations and service providers should have the necessary backup technologies in place in order to meet the required service levels. Backup as a service enables individual consumers or organizations to reduce their backup management overhead. It also enables the individual consumer/user to perform backup and recovery anytime, from anywhere, using a network connection. Consumers do not need to invest in capital equipment in order to implement and manage their backup infrastructure. These infrastructure resources are rented without obtaining ownership of the resources. Based on consumer demand, backups can be scheduled and infrastructure resources can be allocated with a metering service. This helps to monitor and report resource consumption. Many organizations' remote and branch offices have limited or no backup in place. Mobile workers represent a particular risk because of the increased possibility of lost or stolen devices. Backing up to the cloud ensures regular and automated backup of data. Cloud computing gives consumers the flexibility to select a backup technology based on their requirements, and to quickly move to a different technology when their backup requirements change.

Business continuity

Business continuity is a set of processes that includes all activities that a business must perform to mitigate the impact of a service outage. BC entails preparing for, responding to, and recovering from a system outage that adversely affects business operations. It describes the processes and procedures an organization or service provider establishes to ensure that essential functions can continue during and after a disaster to meet the required service level agreement (SLA).

Thick LUN:

A thick LUN is one whose space is fully allocated upon creation. When a thick LUN is created, its entire capacity is reserved and allocated in the pool for use by that LUN.

CDP

CDP supports local and remote replication operations in which the write splitter is deployed at the compute system. Typically the replica is synchronized with the source, and then the replication process starts. After the replication starts, all writes from the compute system to the source (production volume) are split into two copies. One copy is sent to the local CDP appliance at the source site, and the other copy is sent to the production volume. The local appliance then writes the data to the journal at the source site, and the data in turn is written to the local replica. If a file is accidentally deleted or corrupted, the local journal enables recovery of the application data to any PIT.
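A simplified sketch of the write splitting described above: every write from the compute system is sent both to the production volume and to the CDP appliance, which journals it before applying it to the local replica. Journal pruning, consistency bookmarks, and remote replication are omitted; the class and field names are illustrative.

```python
class CDPAppliance:
    def __init__(self):
        self.journal = []      # every split write, with its sequence number
        self.replica = {}      # local replica, updated from the journal

    def receive(self, seq: int, block: int, data) -> None:
        self.journal.append((seq, block, data))   # journal first ...
        self.replica[block] = data                # ... then apply to the local replica

def split_write(seq: int, block: int, data, production: dict, appliance: CDPAppliance) -> None:
    """The write splitter sends one copy to the production volume and one to the appliance."""
    production[block] = data
    appliance.receive(seq, block, data)

production = {}
appliance = CDPAppliance()
split_write(1, block=7, data="v1", production=production, appliance=appliance)
split_write(2, block=7, data="v2", production=production, appliance=appliance)
# The journal keeps both versions of block 7, so data can be rolled back to any point in time.
```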

Capacity monitoring

Capacity monitoring involves tracking the amount of cloud infrastructure resources used and available. It ensures the availability of sufficient resources and prevents service unavailability or performance degradation. It examines the following:
•Used and unused capacity available in a resource pool
•Utilization of capacity by service instances
•Capacity limit for each service
•Dynamic allocation of storage capacity to thin LUNs
•Overcommitment of memory and processor cycles, and so on

Checkpointing

Checkpointing is a technique to periodically save or store a copy of the state of a process or an application. It enables a process to roll back to a previous state if a failure occurs and continue tasks from that state. The saved copies of the application or process state are called checkpoints. Checkpointing allows an application or process to resume from a previous fault-free state after a failure, rather than restarting from the beginning. This technique is useful for processes such as data backup and data migration that run for long periods of time.
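A minimal checkpointing sketch: the state of a long-running task is saved periodically so the process can resume from the last checkpoint instead of restarting. The checkpoint file name and interval are illustrative assumptions.

```python
import json
import os

CHECKPOINT = "checkpoint.json"   # illustrative checkpoint location

def load_checkpoint() -> int:
    """Resume from the last saved state, or start from the beginning."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_item"]
    return 0

def save_checkpoint(next_item: int) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_item": next_item}, f)

items = list(range(1000))            # stand-in for a long backup or migration job
start = load_checkpoint()
for i in range(start, len(items)):
    pass                             # process items[i] (e.g., copy it to the backup target)
    if i % 100 == 0:                 # periodically record progress as a checkpoint
        save_checkpoint(i + 1)
```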

Cloud administrator-specific view

Cloud administrator-specific view: It is only visible to the cloud administrators and contains details of all service assets and support services needed to provision a cloud service. It is useful to track order status, and to check resource bundling issues and orchestration failures.

Cloud-based firewalls

Cloud-based firewalls have also emerged with the introduction of cloud computing, to secure the infrastructure and applications residing in the on-premise data center or in the cloud. A cloud-based firewall implemented at the network level protects the cloud infrastructure by residing on the cloud server; it filters the traffic going in and out of the cloud network. This type of firewall is offered in the IaaS and PaaS models. A cloud-based firewall can also be implemented at the host level and provided as a service called Firewall-as-a-Service (FWaaS), in which it protects the customer's systems similar to an on-premise firewall. It provides network security, application control, and visibility to the customers, while being remotely located from the customer's organization, that is, in the cloud. This type of firewall is offered in the SaaS and Security-as-a-Service (SECaaS) models.

Cloud bursting

Cloud bursting represents one of the key advantages of hybrid cloud technology. Businesses are able to ensure service availability as well as realize cost savings by not having to invest in excess infrastructure to meet peak demands. However, it is important to consider factors such as compliance, load balancing, application portability and compatibility, and the performance implications of latency.

Increased collaboration

Cloud computing enables collaboration between disparate groups of people by allowing them to share resources and information and access them simultaneously from any location. For example, employees in an organization can place a document centrally in the cloud enabling them to view and work on it at the same time. This eliminates the need to send files back and forth via email.

Reduce IT costs:

Cloud computing enables the consumers to rent any required IT resources based on the pay-per-use or subscription pricing. This reduces a consumer's IT capital expenditure as investment is required only for the resources needed to access the cloud services. Further, the consumer rents only those resources from the cloud that are required, thereby eliminating the underutilized resources. Additionally, the expenses associated with IT infrastructure configuration, management, floor space, power, and cooling are reduced.

High availability

Cloud computing has the ability to ensure resource availability at varying levels depending on the consumer's policy and application priority. Redundant infrastructure components (compute systems, network paths, and storage equipment, along with clustered software) enable fault tolerance for cloud deployments. These techniques can encompass multiple datacenters located in different geographic regions, which prevents data unavailability due to regional failures.

Business agility:

Cloud computing provides the capability to provision IT resources quickly and at any time, thereby considerably reducing the time required to deploy new applications and services. This enables businesses to reduce the time-to-market and to respond more quickly to market changes.

Security

Cloud computing security is a set of practices and policies designed to protect the applications, data, and infrastructure associated with cloud computing. The unique features of the cloud give rise to various security threats, and one of the issues that arises among customers is trust. Trust depends on the degree of control and visibility available to the customers for their data stored in the cloud. The cloud security mechanism plays a very important role in protecting the information and information systems in the cloud from unauthorized access, use, information disclosure, disruption, modification, or destruction. The power of cloud security can be harnessed with a Governance, Risk, and Compliance (GRC) framework.

Cloud service availability

Cloud service availability refers to the ability of a service to perform its agreed function according to business requirements and customer expectations during its operation. Cloud service providers need to design and build their infrastructure to maximize the availability of the service, while minimizing the impact of an outage on consumers. Cloud service availability depends primarily on the reliability of the cloud infrastructure components such as compute, storage, and network, the business applications that are used to create cloud services, and the availability of data. A simple mathematical expression of service availability is based on the agreed service time and the downtime: Cloud Service Availability % = [(Agreed service time - Downtime) / Agreed service time] x 100
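A small worked example of the availability formula above; the downtime figure is illustrative.

```python
# Availability % = (agreed service time - downtime) / agreed service time * 100
agreed_service_time_hours = 24 * 365    # service agreed for the whole year (8760 hours)
downtime_hours = 8.76                   # illustrative total outage for the year
availability = (agreed_service_time_hours - downtime_hours) / agreed_service_time_hours * 100
print(round(availability, 2))           # 99.9 -> roughly "three nines" of availability
```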

Cloud service

Cloud service is a means of delivering IT resources to consumers to enable them to achieve the desired business results and outcomes without having any liabilities, such as risks and costs, associated with owning the resources. Examples of services are application hosting, storage capacity, file services, and email. The service components are accessible to applications and consumers. The key service components include a service catalog and a service portal. The service catalog presents the information about all the IT resources being offered as services. The service catalog is a database of information about the services and includes a variety of information about the services, including the description of the services, the types of services, cost, supported SLAs, and security mechanisms.

Cloud service management

Cloud service management has a service-based focus, meaning that the management functions are linked to the service requirements and the service level agreement (SLA). For example, when a service provider needs to make a change in a service, service management ensures that the cloud infrastructure resources are configured, modified, or provisioned to support the change. Likewise, it ensures that cloud infrastructure resources and management processes are appropriate so that services can maintain the required service level. Also, cloud service management ensures that the cloud services are delivered and operated optimally by using as few resources as needed.

Cloud service management:

Cloud service management: Management in the cloud has a service-based focus commensurate with the requirements of each cloud service rather than an asset-based focus. It is based on a holistic approach that must be able to span all the IT assets in a cloud infrastructure. Depending on the size of a cloud environment, service management may encompass a massive IT infrastructure comprising multivendor assets, various technologies, and multiple data centers. Cloud service management must be optimized to handle increased flexibility, complexity, data access, change rates, and risk exposure. If it is not optimized, it may lead to service outages and SLA violations. To operate sustainably in a cloud environment, service management must rely on automation and workflow orchestration. Cloud service management may still follow traditional IT service management processes such as ITIL. However, the processes should support cloud rapid deployment, orchestration, elasticity, and service mobility.

Cloud service-centric data management:

Cloud service-centric data management: Here, the data protection solutions are offered to the users as cloud computing services or cloud services. A service level agreement (SLA) document is used to define the service level targets, including the committed level of data availability. Upon request, the data protection services are provided to the application or service users as per the SLA, like a public utility. The IT department of an organization commonly takes the role of a cloud provider and offers services for convenient consumption by the users. In addition, the solutions are based on a unified approach to data management instead of data protection alone. Data management includes data protection, distributed access across geographies and client devices, and on-demand data access.

Cloud Services

Cloud services are a packaged set of IT resources made available to the cloud consumer by the cloud service provider. Once the constituent IT resources are provisioned and configured, a service is instantiated. The instantiated service is called a service instance.

Cloud Service Interfaces

Cloud services are consumed through cloud interfaces, which enable computing activities and administration of rented service instances. These interfaces are categorized into two types (source: NIST SP 500-291): the functional interface and the management interface.

Cloud-based replication

Cloud-based replication helps organizations mitigate the risk associated with outages at the consumer's production data center. Organizations of all sizes are looking to the cloud to be a part of their business continuity. Replicating application data and VMs to the cloud enables an organization to restart the application from the cloud and also allows the data to be restored from any location. Also, the data and the VMs replicated to the cloud are hardware independent, which further reduces the recovery time. Replication to the cloud can be performed using compute-based, network-based, and storage-based replication techniques. Typically when replication occurs, the data is encrypted and compressed at the production environment to improve the security of the data and reduce the network bandwidth requirements.

Cloud-enabled:

Cloud-enabled infrastructure provides the capability to develop, deploy, and manage modern applications. The infrastructure supports these applications so they run and scale in a reliable and predictable manner. One of the key attributes of cloud-enabled platforms and infrastructure is portability, enabling qualifying applications to migrate across different environments with little to no downtime. For example, a startup company may make a quick start in a public cloud and later migrate its applications to a private cloud soon after the local infrastructure is set up. Similarly, an organization may decide to move applications with seasonal and fluctuating load to a public cloud to utilize the dynamically scalable infrastructure. The migration can also happen between different public cloud providers. Another advantage is that the cloud environment brings developers and operations teams closer together. The centralized nature of cloud-enabled infrastructure provides DevOps automation with a standard and centralized platform for testing, development, and production.

Cloud-native applications

Cloud-native applications help organizations bring new ideas to market faster and respond sooner to customer demands. The Cloud-native development platforms support development and timely delivery of these applications.

Cloud-native platform

Cloud-native platform is an integrated stack, from infrastructure through application development framework, that is optimized for rapidly scaling modern applications to meet demand. A cloud-native platform supports application developers with multiple languages (such as Java, Ruby, Python, and Node.js), frameworks (such as Spring), and middleware (such as MySQL, RabbitMQ, Redis). It enables IT to develop, deploy, scale, and manage software delivery more quickly and reliably.

Cloud Controls Matrix (CCM) of Cloud Security Alliance (CSA)

The Cloud Controls Matrix (CCM) of the Cloud Security Alliance (CSA) provides a control framework that gives a detailed understanding of security concepts and principles that are aligned to the Cloud Security Alliance. The CSA CCM provides organizations with the required structure, detail, and clarity related to information security, tailored to the cloud industry. It also provides standardized security and operational risk management measures. The Cloud Security Alliance suggests that time must be taken to do the following:
•Determine what the cloud service provider supplies for compliance requirements in terms of controls and audits.
•Review the provider's policies and compare them to the company's internal policies.
•Make sure there is a thorough understanding of the application data flows and traffic/user patterns.
•Clearly understand the roles and responsibilities of the provider.

Compliance

Compliance is the act of adhering to, and demonstrating adherence to, external laws and regulations as well as corporate policies and procedures. Adhering to policies and regulations applies to both cloud service providers and consumers. Loss of compliance may occur if they do not adhere to the policies and regulations. To meet a consumer's compliance requirements, the cloud service provider must have compliance management in place. Compliance management ensures that the cloud services, service creation processes, and cloud infrastructure resources adhere to relevant policies and legal requirements. Compliance management activity includes periodically reviewing compliance enforcement in infrastructure resources and services. If it identifies any deviation from compliance requirements, it initiates corrective actions. The compliance checklist may include:
•Ensure adherence to the security demands that are expressed in contracts
•Review the security and privacy controls that are in place to ensure that appropriate controls are applied to the highest value and highest risk assets
•Ensure adherence to the industry regulations related to the flow of data out of an organization
There are compliance standards that specify a checklist of broadly accepted best practices specific to certain industries to ensure that the IT infrastructure, applications, and data are organized, managed, and monitored according to the rules and regulations.

Compute virtualization

Compute virtualization is a technique of abstracting the physical hardware of a compute system from the operating system (OS) and applications. The decoupling of the physical hardware from the OS and applications enables multiple operating systems to run concurrently on a single physical compute system or a cluster of compute systems. Compute virtualization enables the creation of virtual compute systems called virtual machines (VMs). Each VM runs an OS and applications, and is isolated from the other VMs on the same compute system. Compute virtualization is achieved by a hypervisor, which is virtualization software that is installed on a physical compute system. The hypervisor provides virtual hardware resources, such as CPU, memory, storage, and network resources to all the VMs. Depending on the hardware capabilities, many VMs can be created on a single physical compute system.

Compute virtualization

Compute virtualization software enables creating and managing several VMs—each with a different OS of its own—on a physical compute system or on a compute cluster. VMs are created on a compute system, and provisioned to different users to deploy their applications. The VM hardware and software are configured to meet the application's requirements. The different VMs are isolated from each other, so that the applications and the services running on one VM do not interfere with those running on other VMs. The isolation also provides fault tolerance so that if one VM crashes, the other VMs remain unaffected.

The 3 Cloud Security Concepts

Confidentiality, Integrity, Availability

What is SaaS?

Consumers can leverage the provider's applications running on the cloud infrastructure. Examples: Salesforce, Dell EMC Syncplicity, Google Apps, Microsoft Office 365.

What is PaaS?

Consumers have the ability to deploy consumer-created or acquired applications on the provider's infrastructure. Examples: Pivotal Cloud Foundry, Google App Engine, AWS Elastic Beanstalk, Microsoft Azure.

Consumer-specific view:

Consumer-specific view: It comprises a description of services including the business processes they support, consumer-facing value of the services, and service policies and rules. It also provides information on rented service instances and utilized resources, incident status, and billing reports. It enables consumers to request any changes to services, decommission service instances, and use technical support services.

What is IaaS?

Consumers hire infrastructure components such as servers, storage, and network. Examples: Amazon EC2 and S3, VMware vCloud Air, Virtustream, Google Compute Engine.

Container

A container is a lightweight, stand-alone, executable package. It contains the whole runtime environment: the application along with the libraries, binaries, and configuration files needed to execute it.

Continuous Delivery

Continuous Delivery (CD) enables teams to produce software in short cycles, ensuring the rapid and reliable delivery of software at any time in a low-risk manner. Continuous delivery extends continuous integration practices by adding a fully automated suite of tests, including acceptance tests. Also, an automated deployment pipeline executes by the click of a button or a programmatic trigger. Continuous delivery processes are automated, and supporting tools are integrated to minimize manual steps and human intervention.

Continuous Integration

Continuous Integration enables software changes to be pushed to production repeatedly and reliably. In a continuously integrated environment, developers merge changes to the main source branch often. When these changes are detected, a CI server automatically checks out the main branch and runs automated integration tests against the main branch.
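
A minimal sketch of that idea, assuming a hypothetical local repository path, branch name, and test command; real CI servers are usually triggered by webhooks rather than polling:

# Minimal sketch of a CI check: poll the main branch, and when a new
# commit appears, check it out and run the automated test suite.
# Repository path, branch name, and test command are illustrative assumptions.
import subprocess
import time

REPO = "/srv/ci/workspace"                      # hypothetical local clone of the repository
BRANCH = "main"
TEST_CMD = ["python", "-m", "pytest", "-q"]     # assumed test runner

def current_commit() -> str:
    """Return the latest commit hash on the tracked branch."""
    subprocess.run(["git", "-C", REPO, "fetch", "origin", BRANCH], check=True)
    out = subprocess.run(["git", "-C", REPO, "rev-parse", f"origin/{BRANCH}"],
                         check=True, capture_output=True, text=True)
    return out.stdout.strip()

def run_integration_checks(commit: str) -> bool:
    """Check out the commit and run the automated tests; True means the build is green."""
    subprocess.run(["git", "-C", REPO, "checkout", commit], check=True)
    result = subprocess.run(TEST_CMD, cwd=REPO)
    return result.returncode == 0

if __name__ == "__main__":
    last_seen = None
    while True:
        commit = current_commit()
        if commit != last_seen:                  # a new merge to main was detected
            status = "PASS" if run_integration_checks(commit) else "FAIL"
            print(f"{commit[:8]}: integration tests {status}")
            last_seen = commit
        time.sleep(60)                           # poll interval; real CI servers use hooks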

Converged infrastructure

Converged infrastructure bundles the basic components, such as servers, data storage, networking equipment, and the virtualization and other infrastructure management software, into integrated and optimized packages. Converged infrastructure is delivered on-site already configured and tested, so converged infrastructure solutions are easily installed and maintained. Converged infrastructure pools computing resources for applications and other workloads, and it is optimized with the help of a hypervisor and other control software, making it closely related to virtualization.

Corrective control:

Corrective control: The goal is to reduce the after-effects of an attack by restoring the system to its expected state. Corrective controls are used during or after an attack has been detected. Example: data restore from backup.

DELL EMC Avamar

DELL EMC Avamar is a disk-based backup and recovery solution that provides inherent source-based data deduplication. With its unique global data deduplication feature, Avamar differs from traditional backup and recovery solutions by identifying and storing only unique sub-file data. DELL EMC Avamar provides a variety of options for backup, including guest OS-level backup and image-level backup. The three major components of an Avamar system include Avamar server, Avamar backup clients, and Avamar administrator. Avamar server provides the essential processes and services required for client access and remote system administration. The Avamar client software runs on each compute system that is being backed up. Avamar administrator is a user management console application that is used to remotely administer an Avamar system.

DELL EMC CloudBoost

DELL EMC CloudBoost cloud-enables the DELL EMC Data Protection Suite family. It protects data wherever it lives: in-cloud, direct to cloud, and cloud as long-term retention (LTR).
•Fast and efficient: Source-side deduplication, compression, and WAN optimization boost performance and throughput while reducing the consumption and cost of network bandwidth and cloud storage capacity.
•Cost-effective and flexible: CloudBoost is available as a virtual, physical, or native public cloud appliance to fit every environment and supports leading public and private object storage clouds.
•Secure: CloudBoost delivers enterprise-grade security even when data is stored or transferred outside your firewall. Data is segmented and encrypted in flight and at rest at all times, and all data transfers occur over HTTPS. Add a DELL EMC Elastic Cloud Storage (ECS) private object store or Virtustream Storage Cloud (VSC) public cloud for even greater control.
•Scalable: A single CloudBoost instance can manage up to 6PB of data in the cloud. Need more capacity? Just add more CloudBoost instances.

DELL EMC CloudBoost:

DELL EMC CloudBoost: Back up on-premises and replicate to the cloud for scenarios where you have existing on-premises infrastructure and wish to use public or private object storage instead of tape for long-term retention. Backup copies required for short-term operational recovery remain on-premises for fast restore. Optionally, a disaster recovery site can be established for contingency purposes.

DELL EMC Data Domain

DELL EMC Data Domain deduplication storage system is a target-based data deduplication solution. Using high-speed, inline deduplication technology, the Data Domain system provides a storage footprint that is significantly smaller on average than that of the original data set. Data Domain Data Invulnerability Architecture provides defense against data integrity issues. DELL EMC Data Domain Boost software significantly increases backup performance by distributing the parts of the deduplication process to the backup server. With Data Domain Boost, only unique, compressed data segments are sent to a Data Domain system. For archiving and compliance solutions, Data Domain systems allow customers to cost-effectively archive non-changing data while keeping it online for fast, reliable access and recovery. DELL EMC Data Domain Extended Retention is a solution for long-term retention of backup data. It is designed with an internal tiering approach to enable cost-effective, long-term retention of data on disk by implementing deduplication technology. Data Domain provides secure multi-tenancy that enables data protection-as-a-service for large enterprises and service providers who are looking to offer services based on Data Domain in a private or public cloud. With secure multi-tenancy, a Data Domain system will logically isolate tenant data, ensuring that each tenant's data is only visible and accessible to them. DELL EMC Data Domain Replicator software transfers only the deduplicated and compressed unique changes across any IP network, requiring a fraction of the bandwidth, time, and cost, compared to traditional replication methods.

DELL EMC Isilon CloudPools

DELL EMC Isilon CloudPools extends the data lake to the cloud with transparent tiering of inactive data. For disaster recovery purposes, CloudPools seamlessly integrates with data replication using Isilon SyncIQ. SyncIQ is SmartLink aware and will replicate the SmartLink file to the destination cluster. During a failover scenario, the target cluster is connected to the cloud and the users will have seamless access to on-premises and tiered files.

DELL EMC Mozy

DELL EMC Mozy is a solution that provides secure cloud-based online backup and recovery through Software as a Service. Mozy provides protection against risks like file corruption, unintended deletion, and hardware failure for compute and mobile systems. It is built on a highly scalable and available back-end storage architecture. Mozy's web-based management console enables consumers to specify the data to be backed up and when to perform backups. Backups are encrypted and may be automatic or scheduled periodically. The three main products of Mozy are MozyHome, MozyPro, and MozyEnterprise. MozyHome is suitable for individual consumers, MozyPro is suitable for small businesses, and MozyEnterprise is suitable for larger organizations. Consumers do not need to purchase any new hardware to work with Mozy, and it requires minimal IT resources to manage. Mozy services are available to consumers for a monthly subscription fee.

DELL EMC NetWorker

DELL EMC NetWorker is backup and recovery software that centralizes, automates, and accelerates data backup and recovery operations. The key features of NetWorker are:
•Supports heterogeneous platforms such as Windows, UNIX, Linux, and also virtual environments
•Supports different backup targets: tapes, disks, Data Domain purpose-built backup appliances, and virtual tapes
•Supports multiplexing (or multi-streaming) of data
•Provides both source-based and target-based deduplication capabilities by integrating with DELL EMC Avamar and DELL EMC Data Domain respectively
•The cloud-backup option in NetWorker enables backing up data to public cloud configurations

DELL EMC ProtectPoint

DELL EMC ProtectPoint is a data protection solution which enables direct backup from primary storage (DELL EMC VMAX) to Data Domain system. It eliminates the backup impact on application server. Unlike a traditional backup, ProtectPoint will only pause the application to mark the point-in-time for an application consistent backup, and then the application can quickly return to normal operations. By leveraging the primary storage change block tracking technology, only unique blocks are sent from primary storage, but are stored as full independent backups on Data Domain system.

Data shredding is

Data shredding is the process of deleting data or residual representations, sometimes called remanence, of data and making it unrecoverable. The threat of unauthorized data recovery is greater in the cloud environment as consumers do not have control over cloud resources. After consumers discontinue the cloud service, their data or residual representations may still reside in the cloud infrastructure. An attacker may perform unauthorized recovery of consumers' data to gain confidential information.

DELL EMC RecoverPoint

DELL EMC RecoverPoint is a high-performance, single product that provides both local and remote continuous data protection. The RecoverPoint family includes RecoverPoint and RecoverPoint for VMs. RecoverPoint provides fast recovery of data and enables users to restore data to any previous point-in-time. RecoverPoint uses lightweight splitting technology to mirror a write. RecoverPoint's integrated WAN bandwidth reduction technology uses compression to optimize network resource utilization during remote replication. RecoverPoint for VMs is a hypervisor-based software data protection tool that protects VMware VMs with VM-level granularity. It protects VMs with its built-in automated provisioning and orchestration capabilities for disaster and operational recovery and is fully integrated with VMware vCenter through a plug-in. It provides local and remote replication over any distance with synchronous or asynchronous replication.

DELL EMC SourceOne

DELL EMC SourceOne is a family of archiving software. It helps organizations to reduce the burden of aging emails, files, and Microsoft SharePoint content by archiving them to the appropriate storage tier. SourceOne helps in meeting compliance requirements by managing emails, files, and SharePoint content as business records and enforcing retention/disposition policies. The SourceOne family of products includes:
•DELL EMC SourceOne Email Management for archiving email messages and other items.
•DELL EMC SourceOne for Microsoft SharePoint for archiving SharePoint content.
•DELL EMC SourceOne for File Systems for archiving files from file servers.
•DELL EMC SourceOne Discovery Manager for discovering, collecting, preserving, reviewing, and exporting relevant content.
•DELL EMC SourceOne Email Supervisor for monitoring corporate email policy compliance.

DELL EMC Spanning

DELL EMC Spanning is a DELL EMC company and a leading provider of backup and recovery for SaaS applications that helps organizations protect and manage their information in the cloud. Spanning solutions provide powerful, enterprise-class data protection for Google Apps, Salesforce, and Office 365. Spanning Backup is the most reliable cloud-to-cloud backup solution for organizations. It allows administrators and end users to search for, restore, and export data.

6 Challenges of IT to meet business demand

Data Growth
Aging Technology
Growth
Poor Scale, performance, security
Shadow IT
Financial Pressure

Data archiving

Data archiving is the process of moving fixed data that is no longer actively accessed to a separate lower cost archive storage system for long term retention and future reference. With archiving, the capacity on expensive primary storage can be reclaimed by moving infrequently-accessed data to lower-cost archive storage. Archiving fixed data before taking backup helps to reduce the backup window and backup storage acquisition costs. Data archiving helps in preserving data that may be needed for future reference and data that must be retained for regulatory compliance. For example, new product innovation can be fostered if engineers can access archived project materials such as designs, test results, and requirement documents. Similarly, both active and archived data can help data scientists drive new innovations or help to improve current business processes. In addition, government regulations and legal/contractual obligations mandate organizations to retain their data for an extended period of time.

Data breach

Data breach is an incident in which an unauthorized entity (an attacker) gains access to a cloud consumer's confidential data stored on the cloud infrastructure.

Data encryption

Data encryption is a cryptographic technique in which data is encoded and made unreadable to eavesdroppers or hackers. Data encryption is one of the most important mechanisms for securing data in-flight and at-rest. Data in-flightencryptionrefers to the process of encryptingthedata that is being transferred over a network. Data in-flight encryption is performedat the network level, when the customers movethe data to the cloud or tries to access the data from the cloud.Data at-rest encryption refers to theprocess of encrypting thedata that is stored on a storage device. Itis performed at storage level,that is beforemoving the data to the cloud storage. This type of encryption helps to protect the data to meet the customer organizational security and compliance requirements.Typically the cloud service provider offersthe encryption service.
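
As a hedged illustration of data at-rest encryption performed before data is moved to cloud storage, the sketch below uses the third-party Python "cryptography" package (Fernet, a symmetric AES-based scheme). The sample record and the local key generation are assumptions; in practice the key would live in a key management service:

# Minimal sketch of encrypting data before it is written to cloud storage.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in practice the key is held in a key management service
cipher = Fernet(key)

plaintext = b"customer-record: account=1234, balance=500"   # illustrative data
ciphertext = cipher.encrypt(plaintext)       # this is what would be stored in the cloud
restored = cipher.decrypt(ciphertext)        # only holders of the key can read it back

assert restored == plaintext
print(len(plaintext), "bytes in,", len(ciphertext), "bytes stored (encrypted)")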

Data Loss

Data loss can occur in the cloud due to various reasons other than malicious attacks. Some of the causes of data loss may include accidental deletion by the provider or destruction resulting from natural disasters. The provider is often responsible for data loss resulting from these causes and appropriate measures such as data backup can reduce the impact of such events. Providers must publish the protection mechanisms deployed to protect the data stored in cloud.

Data Deduplication

Deduplication is the process of detecting and identifying the unique data segments (chunks) within a given set of data to eliminate redundancy. The use of deduplication techniques significantly reduces the amount of data to be backed up. Data deduplication operates by segmenting a dataset into blocks, identifying redundant data, and writing the unique blocks to a backup target. To identify redundant blocks, the data deduplication system creates a hash value or digital signature (like a fingerprint) for each data block and an index of the signatures for a given repository. The index provides the reference list to determine whether blocks already exist in a repository. When the data deduplication system sees a block it has processed before, instead of storing the block again, it inserts a pointer to the original block in the repository. It is important to note that data deduplication can be performed in backup as well as in production environments. In a production environment, deduplication is implemented at primary storage systems to eliminate redundant data in the production volume. The effectiveness of data deduplication is expressed as a deduplication ratio, denoting the ratio of data before deduplication to the amount of data after deduplication. This ratio is typically depicted as "ratio:1" or "ratio X" (for example, 10:1 or 10X). For example, if 200 GB of data consumes 20 GB of storage capacity after data deduplication, the space reduction ratio is 10:1. Every data deduplication vendor claims that their product offers a certain ratio of data reduction. However, the actual data deduplication ratio varies, based on many factors. These factors are discussed next.
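
The following sketch illustrates the mechanism with fixed-size blocks and SHA-256 fingerprints; the tiny block size and sample data are assumptions chosen only to make the ratio easy to see, and real products typically use variable-length chunking and far larger indexes:

# Minimal sketch of fixed-size-chunk deduplication: hash each block, store only
# unique blocks, and keep pointers (hash references) for the duplicates.
import hashlib

BLOCK_SIZE = 8  # tiny block size so the example is easy to follow

def deduplicate(data: bytes):
    store = {}        # fingerprint -> unique block (the backup repository)
    pointers = []     # ordered list of fingerprints that reconstructs the data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in store:      # unseen block: write it to the repository
            store[fingerprint] = block
        pointers.append(fingerprint)      # duplicate block: only a pointer is recorded
    return store, pointers

data = b"ABCDEFGH" * 100 + b"UNIQUE-1" + b"ABCDEFGH" * 100
store, pointers = deduplicate(data)

before = len(data)
after = sum(len(block) for block in store.values())
print(f"deduplication ratio ~ {before / after:.0f}:1")   # 1608 bytes reduced to 16 unique bytes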

Dell Boomi

Dell Boomi, the leading integration cloud, connects any combination of cloud and on-premise applications without software or appliances. It connects the industry's largest application network using one seamless and self-service platform. This platform allows businesses of all sizes, IT resources, and budgets to synchronize data between their mission-critical applications, without the costs associated with acquiring or maintaining software, appliances, or custom code. Dell Boomi AtomSphere is a single-instance, multitenant integration platform as a service (iPaaS). It provides a unified online development and management environment that easily and cost-effectively supports all your application integration needs across a hybrid IT infrastructure.

Dell EMC CloudArray

Dell EMC CloudArray is cloud-integrated storage that extends high-performance storage arrays with cost-effective cloud capacity. By providing access to a private or public cloud storage tier through standard interfaces, the CloudArray technology simplifies storage management for inactive data and offsite protection. This approach to storage management enables customers to scale their storage area networks (SAN) and network-attached storage (NAS) with on-demand cloud storage. It provides a permanent solution for block and file capacity expansion, offsite storage, and data archiving. CloudArray's software-defined storage modernizes the customer's existing storage strategy by allowing them to:
•Instantly access a new tier of cloud-integrated storage with the look, feel, and performance of local storage.
•Meet expanding on-premise and off-premise storage demands without relying on tape.
•Automatically store data to a choice of private and public clouds.
•Reduce storage requirements by up to 10x through deduplication and compression.
•Use a storage solution that is affordable and maintenance-free.

Dell EMC CloudLink

Dell EMC CloudLink provides key management and data at-rest encryption mechanisms for private, public, and hybrid clouds. CloudLink is a Key Management Interoperability Protocol (KMIP) compliant key management server (KMS), which allows it to manage keys for various encryption end-points. The CloudLink SecureVM agent controls, monitors, and secures the Windows and Linux VMs in the hybrid cloud. CloudLink is certified by world-recognized leaders in cloud technologies like AWS, EMC, RSA, VCE, and VMware.

Dell EMC Data Protection Advisor

Dell EMC Data Protection Advisor automates and centralizes the collection and analysis of all data in multiple data centers. It presents a single, comprehensive view of the data protection environment and activities. It provides automated monitoring, analysis, and reporting across the backup and recovery infrastructure, replication technologies, storage platforms, enterprise applications, and virtual environments. It helps to more effectively manage service levels while reducing costs and complexity.
•Reduce cost - proactive monitoring and alerting of issues before impact.
•Lower complexity - real-time protection assurance, scalability, and resource optimization.
•Increase visibility and insight - provides the right information to the right people at the right time in the right format.
•Automate - policy-based protection reduces risk and increases operational effectiveness.
•Comply with audits - single-click verification of service levels and recoverability.

Dell EMC PowerEdge servers

Dell EMC PowerEdge servers: As the foundation for a complete, adaptive, and scalable solution, the 13th generation of Dell EMC PowerEdge servers delivers outstanding operational efficiency and top performance at any scale. It increases productivity with processing power, exceptional memory capacity, and highly scalable internal storage. PowerEdge servers provide insight from data, enable virtualization of environments, and support a mobile workforce. Major benefits of PowerEdge servers are:
•Scalable Business Architecture: Maximizes performance across the widest range of applications with highly scalable architectures and flexible internal storage.
•Intelligent Automation: Automates the entire server lifecycle from deployment to retirement with embedded intelligence that dramatically increases productivity.
•Integrated Security: Protects customers and business with a deep layer of defense built into the hardware and firmware of every server.

Dell EMC Elastic Cloud Storage (ECS)

Dell EMC Elastic Cloud Storage (ECS) is software-defined object storage designed for both traditional and next-generation workloads with high scalability, flexibility, and resiliency. ECS provides significant value for enterprises and service providers seeking a platform architected to support rapid data growth. The features of ECS that enable enterprises to globally manage and store distributed content at scale include:
•Flexible Deployment - ECS has unmatched flexibility to deploy as an appliance, a software-only solution, or in the cloud.
•Enterprise Grade - ECS provides customers more control of their data assets with enterprise-class object, file, and HDFS storage in a secure and compliant system.
•TCO Reduction - ECS can dramatically reduce TCO relative to traditional storage and public cloud storage. It even offers a lower TCO than tape for long-term retention.
The primary use cases of ECS are:
•Geo Protected Archive - ECS serves as a secure and affordable on-premise cloud for archival and long-term retention purposes. Using ECS as an archive tier can significantly reduce primary storage capacities.
•Global Content Repository - ECS enables any organization to consolidate multiple storage systems into a single, globally accessible, and efficient content repository.
•Cloud Backup - ECS can be used as a cloud backup target for a customer's primary data, for instance by using CloudPools to tier data from Isilon to ECS. Third-party cloud backup solutions can also typically be redirected to ECS as the cloud backup target.

Dell EMC OpenManage systems

Dell EMC OpenManage systems management solutions drive operational efficiencies that help achieve dramatic improvements in the productivity and agility of the IT environment. It simplifies, automates, and optimizes IT operations. Key benefits include:
•Maintain IT flexibility for rapid response to dynamic business needs.
•Drive operational efficiencies that reduce cost.
•Achieve dramatic improvements in IT productivity and agility.
Features include:
•Deploy 100 or more servers in less than 1 second.
•Connect to the server using nothing but a phone or tablet.
•Auto-update the firmware during off-hours without having to be in the office.
•Get the latest firmware and driver updates automatically stored into the repository without having to lift a finger.

Dell EMC Scale IO

Dell EMC ScaleIO software is used to deploy software-defined block storage. It applies the principles of server virtualization (abstract, pool, and automate) to standard x86 server direct-attached storage (DAS). It creates a simplified SDS infrastructure that enables easier storage lifecycle management and provisioning. ScaleIO can lower storage TCO by 50%, accelerate storage deployment by 83%, and enable 32% faster application deployment. The features of the ScaleIO software are:
•Deploys as all-flash and/or hybrid software-defined block storage.
•Enables storage tiering across server-based HDDs, SSDs, and PCIe flash cards.
•Ensures high availability with two-copy data mirroring, self-healing, and rebalancing.
•Supports multitenancy storage with data-at-rest encryption.
•Provides superior data recovery with consistent, writeable snapshots.

Dell EMC ViPR SRM

Dell EMC ViPR SRM is a comprehensive monitoring and reporting solution that helps IT visualize, analyze, and optimize today's storage investments. It provides a management framework that supports investments in software-defined storage.
•Visualize - provides detailed relationship and topology views from the application, to the virtual or physical host, down to the LUN to identify service dependencies. It also provides a view of performance trends across the data path and identifies hosts competing for resources.
•Analyze - helps to analyze health, configurations, and capacity growth. Analysis helps to spot SLA problems through custom dashboards and reports that meet the needs of a wide range of users and roles. ViPR SRM helps to track block, file, and object capacity consumption across data centers with built-in views. The views help to understand who is using capacity, how much they are using, and when more capacity is required.
•Optimize - ViPR SRM helps to optimize capacity and improve productivity to get the most out of investments in block, file, and object storage. It shows historical workloads and response times to determine whether the right storage tier has been selected. It tracks capacity usage, allowing users to create showback or chargeback reports to align application requirements with costs.

Dell EMC VxBlock Systems

Dell EMC VxBlock Systems simplify all aspects of IT and enable customers to modernize their infrastructure and achieve better business outcomes faster. By seamlessly integrating enterprise-class compute, network, storage, and virtualization technologies, VxBlock delivers the most advanced converged infrastructure. It is designed to support large-scale consolidation, peak performance, and high availability for traditional and cloud-based workloads. It is a converged system optimized for data reduction and copy data management. Customers can quickly deploy, easily scale, and manage their systems simply and effectively, and deliver on both midrange and enterprise requirements with the all-flash design, enterprise features, and support for a broad spectrum of general-purpose workloads.

Dell EMC VxRail Appliances

Dell EMC VxRail Appliances are the fastest-growing hyper-converged systems worldwide. They are the standard for transforming VMware infrastructures, dramatically simplifying IT operations while lowering overall capital and operational costs. The features of VxRail Appliances are:
•Accelerate transformation and reduce risk with automated lifecycle management. For example, users perform one-click software and firmware updates after deployment.
•Drive operational efficiency for a 30% TCO advantage versus HCI systems built using vSAN Ready Nodes.
•Unify support for all VxRail hardware and software, delivering 42% lower total cost of serviceability.
•Engineered, manufactured, managed, supported, and sustained as ONE for single end-to-end lifecycle support.
•Fully loaded with enterprise data services for built-in data protection, cloud storage, and disaster recovery.

Dell EMC XC Series Appliance

Dell EMC XC Series Appliance is a hyper-converged appliance. It integrates Dell EMC PowerEdge servers, Nutanix software, and a choice of hypervisors to run any virtualized workload. It is ideal for enterprise business applications, server virtualization, hybrid or private cloud projects, and virtual desktop infrastructure (VDI). Users can deploy an XC Series cluster in 30 minutes and manage it without specialized IT resources. The XC Series makes managing infrastructure efficient with a unified HTML5-based management interface, enterprise-class data management capabilities, cloud integration, and comprehensive diagnostics and analytics. The features of the Dell EMC XC Series are:
•Available in flexible combinations of CPU, memory, and SSD/HDD
•Includes thin provisioning and cloning, replication, and tiering
•Validated, tested, and supported globally by Dell EMC
•Able to grow one node at a time with nondisruptive, scale-out expansion

Dell InTrust

Dell InTrust is an IT data analytics solution that gives organizations the power to search and analyze vast amounts of data in one place. It provides real-time insights into user activity across security, compliance, and operational teams. It helps administrators troubleshoot issues by conducting security investigations regardless of how and where the data is stored. It helps compliance officers produce reports validating compliance across multiple systems. The web interface quickly provides information on who accessed the data, how it was obtained, and how it was used. This helps administrators and security teams discover suspicious event trends.

Dell Change Auditor

Dell Change Auditor helps customers audit, alert on, protect, and report on user activity and on configuration and application changes across Active Directory and Windows applications. The software has role-based access, enabling auditors to have access to only the information they need to quickly perform their job. Change Auditor provides visibility into enterprise-wide activities from one central console, enabling customers to see how data is being handled.

DevOps

DevOps is a methodology where the software developers and IT operations staff collaborate in the entire software lifecycle. The collaboration starts from the design phase through the development and production phases.

Why application transformation?

Digital business drivers
Challenges of traditional applications
Shift IT spend toward innovation

5 type of businesses and their digital paths.

Digital Leaders: 5% - digital transformation, in its various forms, is ingrained in the DNA of the business
Digital Adopters: 14% - have a mature digital plan, investments, and innovations in place
Digital Evaluators: 34% - cautiously and gradually embracing digital transformation, planning and investing for the future
Digital Followers: 32% - few digital investments; tentatively starting to plan for the future
Digital Laggards: 15% - do not have a digital plan, limited initiatives and investments in place

What is Digital Transformation

Digital Transformation puts technology at the heart of an organization's products, services and operations -to help accelerate the business and competitively differentiate itself -in order to improve the experience for its customers. This is achieved through the use of smarter products, data analytics and continuous improvement of products and services through the use of software.

Digital signatures certificates

Digital signature certificates offer strong security capabilities to authenticate users trying to access cloud-based APIs. Digital signatures use a public key algorithm to generate two keys, known as the public key and the private key. When a user requests an API, the user has to electronically sign that request: a hash value of the request is generated using a hash algorithm, and then this hash value is encrypted with the private key to create the digital signature. The digital signature is sent to the cloud service provider to verify the identity of the user. The provider runs the hash algorithm to obtain the hash value, decrypts the received signature with the public key, and compares the output. If the hash values match, the user is authenticated to access the requested API. If they do not match, the signature is considered invalid.
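
A minimal sketch of the sign-and-verify flow using the third-party Python "cryptography" package; the request payload and the choice of RSA with PKCS#1 v1.5 padding are illustrative assumptions, since each provider defines its own canonical request format and signature scheme:

# Minimal sketch: the consumer signs an API request with a private key,
# and the provider verifies the signature with the matching public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Consumer side: generate a key pair and sign the request.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

request = b"GET /v1/instances?user=alice&timestamp=1700000000"   # hypothetical request
signature = private_key.sign(
    request,
    padding.PKCS1v15(),       # the request is hashed, and the hash is signed
    hashes.SHA256(),
)

# Provider side: recompute the hash and verify the signature with the public key.
try:
    public_key.verify(signature, request, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: user is authenticated for the requested API")
except InvalidSignature:
    print("signature invalid: request rejected")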

Disaster Recovery as a Service (DRaaS)

Disaster Recovery as a Service (DRaaS) has emerged as a solution to strengthen the portfolio of a cloud service provider, while offering a viable DR solution to consumer organizations. The cloud service provider assumes the responsibility for providing resources to enable organizations to continue running their IT services in the event of a disaster. From a consumer's perspective, having a DR site in the cloud reduces the need for data center space and IT infrastructure, which leads to significant cost reduction, and eliminates the need for upfront capital expenditure. Resources at the service provider can be dedicated to the consumer or they can be shared. The service provider should design, implement, and document a DRaaS solution specific to the customer's infrastructure. They must conduct an initial recovery test with the consumer to validate complete understanding of the requirements and documentation of the correct, expected recovery procedures.

Denial of Service (DoS) attack.

DoS attacks can be targeted against compute systems, networks, or storage resources. In all cases, the intent of DoS is to exhaust key resources, such as network bandwidth or CPU cycles, preventing production use. For example, an attacker may send massive quantities of data to the target with the intention of consuming bandwidth. This condition prevents legitimate consumers from using the bandwidth. Such an attack may also be carried out by exploiting weaknesses of a communications protocol. Consider an example of a DoS attack in the cloud environment, where consumers are billed based on resource utilization. A network-based DoS attack is conducted to prevent the consumer from using cloud services. The attacker can use this method to consume CPU cycles and storage capacity, resulting in nonproductive expenses for the legitimate consumer.

Cloud Service LifeCycle - Service Planning (phase 1)

During service planning, making business case decisions for the cloud service offering portfolio is key -regardless of whether those services will be supported by an internal IT organization (generally referred to as IT as a Service) or existing or new service providers who are looking to add to their portfolio of offerings. Service planning considers business requirements, market conditions and trends, competitive factors, required service quality and attributes, and the addition of new capabilities when required.

Erasure coding

Erasure coding is a technique that provides space-optimal data redundancy to protect data loss against multiple drive or node failures. In general, erasure coding technique breaks the data into fragments, encoded with redundant data and stored across a set of different locations, such as drives or nodes. In a typical erasure coded storage system, a set of n disks is divided into m disks to hold data and k disks to hold coding information, where n, m, and k are integers. The coding information is calculated from the data. If up to k of the n disks fail, their contents can be recomputed from the surviving disks.
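
A minimal sketch of the idea using a single XOR parity fragment (m data fragments, k = 1 coding fragment); production systems typically use Reed-Solomon codes so that any k of the n fragments can be lost and recomputed:

# Minimal sketch of erasure coding with one XOR parity fragment.
def xor_fragments(fragments):
    out = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, byte in enumerate(frag):
            out[i] ^= byte
    return bytes(out)

data_fragments = [b"AAAA", b"BBBB", b"CCCC"]        # m = 3 data fragments
parity = xor_fragments(data_fragments)              # k = 1 coding fragment

# Simulate losing one data fragment (one drive or node failure).
lost_index = 1
surviving = [f for i, f in enumerate(data_fragments) if i != lost_index]

# Recompute the lost fragment from the survivors plus the parity fragment.
recovered = xor_fragments(surviving + [parity])
assert recovered == data_fragments[lost_index]
print("recovered:", recovered)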

European Network and Information Security Agency (ENISA)

European Network and Information Security Agency (ENISA) is a European Union agency that provides standards related to network and information security issues. It contributes to the development of a culture of network and information security for the benefit of the consumers, enterprises, and public sector organizations of the European Union.

What is XaaS?

Everything as a Service. An umbrella term that includes SaaS, PaaS, and IaaS.

Flash:

Flash delivers the low-latency performance required for modern applications and increased performance for traditional applications with better economics than disk drives. Data dense, highly performing flash storage reduces the cost of delivering consistent performance while reducing the number of drives required.

Fault tolerance

Fault tolerance is the ability of a system to continue functioning in a fault situation, that is, when some of its components fail. These components could be parts of the cloud infrastructure. A fault may cause a complete outage of a hardware component, or cause a faulty component to run but produce only incorrect or degraded output. The common reasons for a fault or a failure are hardware failure, software bugs, and administrator/user errors. Fault tolerance ensures that a single fault or failure does not make an entire system or service unavailable. It protects a system or a service against three types of unavailability: transient, intermittent, and permanent.

Firewall

A firewall is a device that monitors incoming and outgoing network traffic and filters the traffic based on a defined set of security rules. Firewalls establish a barrier between the secured internal network and an unsecured outside network such as the Internet. Firewalls are implemented in a data center to secure the internal resources.
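
A toy sketch of rule-based filtering, assuming hypothetical rule fields and sample traffic; real firewalls evaluate far richer state and protocol information:

# Toy sketch of how a firewall applies an ordered set of security rules to traffic.
import ipaddress

RULES = [
    {"action": "allow", "protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},   # HTTPS from anywhere
    {"action": "allow", "protocol": "tcp", "port": 22,  "source": "10.0.0.0/8"},  # SSH from internal network only
    {"action": "deny",  "protocol": "any", "port": None, "source": "0.0.0.0/0"},  # default deny
]

def filter_packet(packet):
    """Return the action of the first rule that matches the packet."""
    for rule in RULES:
        proto_ok = rule["protocol"] in ("any", packet["protocol"])
        port_ok = rule["port"] in (None, packet["port"])
        src_ok = ipaddress.ip_address(packet["source"]) in ipaddress.ip_network(rule["source"])
        if proto_ok and port_ok and src_ok:
            return rule["action"]
    return "deny"

print(filter_packet({"protocol": "tcp", "port": 443, "source": "203.0.113.9"}))  # allow
print(filter_packet({"protocol": "tcp", "port": 22,  "source": "203.0.113.9"}))  # deny
print(filter_packet({"protocol": "tcp", "port": 22,  "source": "10.1.2.3"}))     # allow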

4 Modern IT Infrastructure pieces

Flash
Scale-out
Software-defined
Cloud-enabled

Functional interface

Functional interface: It enables a consumer to use service functions. The functional interface presents the functional content of a service instance that enables consumers to perform computing activities. Similar to the management interface, the functional content is presented differently depending on the service model:
•IaaS: The functional interface exposes the specifics of hardware, such as processors, memory, network adapters, and storage volumes, typically used by an OS.
•PaaS: The functional interface may provide an integrated development environment (IDE) that consists of a programming interface, libraries, and tools to develop and deploy applications. This could be offered in proprietary or commercially available coding languages. The environment enables consumers to code, compile, and run their applications on the cloud platform. The development environment may also be offered using a software development kit (SDK) that a consumer can download and use for application development. After development, the consumer can upload the application onto the cloud platform.
•SaaS: The graphical user interface (GUI) of a business application offered as a service is an example of a functional interface.

Governance determines

Governance determines the purpose, strategy, and operational rules by which companies are directed and managed. Regarding cloud computing, governance defines the policies and principles of the provider that customers must evaluate for the use of cloud services. The goal of governance is to secure the cloud infrastructure, applications, and data by establishing a contract between the cloud provider and the customer. It is also important to establish contracts between the provider and its supporting third-party vendors, because the effectiveness of governance processes may be diminished when the cloud provider outsources some or all of its services to third parties, including cloud brokers. In such a scenario, the provider may not have control over the outsourced services, which may impact the commitments given by the provider. Also, the security controls of the third party may change, which may impact the terms and conditions of the provider.

Health Insurance Portability and Accountability Act (HIPAA)

Health Insurance Portability and Accountability Act (HIPAA) is the U.S. law that regulates an individual's healthcare information. HIPAA has two rules: •Privacy rule: requires that data is kept encrypted when in transit and at rest. •Security rule: requires that stringent controls are in place for access to data. It requires logging for audits—for example, who accesses what data and why. Also, backups require controls on the backup mediums and restoral procedures.

Hyper-converged infrastructure

Hyper-converged infrastructure preconfigures and manufactures compute, storage, and virtualization as a single appliance that delivers a validated solution with greater performance, simplicity, and scalability. The various components are architected on the customer's behalf, so that customers can effectively buy rather than build. It allows customers to focus on benefiting the business instead of configuring and managing the underlying infrastructure. Hyper-converged infrastructure appliances extend the benefits of the modern enterprise data center to departmental, enterprise-edge, and regional offices. The technology makes it simple to acquire, deploy, and manage IT infrastructure and workloads. Hyper-converged infrastructure appliances use a software-defined infrastructure on top of hardware components. Unlike a converged infrastructure, a hyper-converged infrastructure provides tighter integration between infrastructure resources through software.

Hypervisor hardening

Hypervisor is a fundamental component of virtualization. It is a natural target for attackers as it controls all the virtual resources and their operations running on the physical machine. Compromising a hypervisor places all the virtual resources at high risk. Hypervisor hardening should be performed using specifications provided by organizations such as the Center for Internet Security (CIS) and the Defense Information Systems Agency (DISA). Hypervisor hardening includes separating the management network from the VM network so that risks in the management servers do not affect the VM network. It is also important to install security-critical hypervisor updates when they are released by the vendor. Access to the management server that manages all the hypervisors in the cloud environment should be restricted to authorized administrators, and access to core levels of functionality should be restricted to selected administrators. Not just the administrators but also the service accounts used by applications or services to interact with the hypervisor or management servers should be given least privileges. Hypervisor security can be further increased by disabling services like SSH and remote access that are not used in everyday operations.

Hypervisor

Hypervisor is compute virtualization software that is installed on a compute system. It provides a virtualization layer that abstracts the processor, memory, network, and storage of the compute system and enables the creation of multiple virtual machines. Each VM runs its own OS, which essentially enables multiple operating systems to run concurrently on the same physical compute system. The hypervisor provides standardized hardware resources to all the VMs.

IDPS

IDPS plays an important role in securing the cloud infrastructure, as the distributed nature of the cloud enables customers to operate from various locations. An intrusion detection system (IDS) is a security tool that detects events that can cause exploitation of vulnerabilities in the cloud provider's network or servers. Residing on the cloud server, its goal is to identify unusual activities and generate alerts to notify the cloud service provider. An intrusion prevention system (IPS) is a security tool that prevents the events after they have been detected by the IDS. The IPS monitors the host and the network and acts based on a predefined set of rules. These two mechanisms usually work together and are referred to as an intrusion detection and prevention system (IDPS). IDS can be classified into three methods:
1. Signature-based intrusion detection compares signatures against observed events by relying on a database that contains known attack patterns, or signatures. This method is useful only for known threats (see the sketch after this list).
2. Anomaly-based intrusion detection compares the observed events with normal activities to identify abnormal patterns. This method is useful to spot unknown threats.
3. Laser-based intrusion detection uses a laser light source that produces a narrow and invisible light beam to guard the physical premises.
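
A minimal sketch of the signature-based method, assuming hypothetical signatures and log events; anomaly-based detection would instead baseline normal activity and flag deviations:

# Minimal sketch of signature-based intrusion detection: compare observed events
# against a database of known attack patterns and raise an alert on a match.
import re

SIGNATURES = {
    "sql-injection": re.compile(r"union\s+select|or\s+1=1", re.IGNORECASE),
    "path-traversal": re.compile(r"\.\./\.\./"),
}

def inspect(event: str):
    """Return the names of any known attack signatures found in the event."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(event)]

events = [
    "GET /products?id=5 HTTP/1.1",
    "GET /products?id=5 OR 1=1 HTTP/1.1",
    "GET /../../etc/passwd HTTP/1.1",
]
for event in events:
    hits = inspect(event)
    if hits:
        print(f"ALERT ({', '.join(hits)}): {event}")   # an IPS would additionally block the request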

IT Transformation,

IT Transformation, similar to when an organization adopts virtualization, can have many benefits that far outweigh the disadvantages, but inefficiencies can still exist. Technology changes are often viewed as the way to fix these problems. But sometimes there are other changes outside of technology that need to be addressed, such as changes to organizational or business processes. Some literature categorizes these changes as People, Process, and Technology.

Identity and Access Management(IAM)

Identity and Access Management (IAM) is the process of identifying who the users are (authentication) and what access they have (authorization) to which cloud resources. In today's cloud environment, a customer organization may collaborate with multiple cloud service providers to access various cloud-based applications. So, the IAM security practice is a crucial undertaking for all cloud service providers to protect their resources and applications. The most commonly used authentication and authorization mechanisms that provide IAM in the cloud include multifactor authentication, digital signatures, and role-based access control.

Image Level Backup

Image-level backup makes a copy of the virtual disk and configuration associated with a particular VM. The backup is saved as a single entity called a VM image. This type of backup is suitable for restoring an entire VM in the event of a hardware failure or human error such as the accidental deletion of the VM. Image-level backup also supports file-level recovery. In an image-level backup, the backup software can back up VMs without installing backup agents inside the VMs or at the hypervisor-level. The backup processing is performed by a proxy server that acts as the backup client, thereby offloading the backup processing from the VMs. The proxy server communicates with the management server responsible for managing the virtualized compute environment. It sends commands to create a snapshot of the VM to be backed up and to mount the snapshot to the proxy server. A snapshot captures the configuration and virtual disk data of the target VM and provides a point-in-time view of the VM. The proxy server then performs backup by using the snapshot. The figure on the slide illustrates image-level backup.

Refactor:

Improve the internal structure of an application with code and data portability, modularity, and cloud-native features, without changing the external behavior. An example of application transformation with refactoring is converting monoliths into microservices for containers and DevOps readiness.

Why do we need data deduplication

In a data center environment, a high percentage of data that is retained on a backup media is redundant. The typical backup process for most organizations consists of a series of daily incremental backups and weekly full backups. Daily backups are usually retained for a few weeks and weekly full backups are retained for several months. Because of this process, multiple copies of identical or slowly-changing data are retained on backup media, leading to a high level of data redundancy. A large number of operating systems, application files and data files are common across multiple systems in a data center environment. Identical files such as Word documents, PowerPoint presentations and Excel spreadsheets, are stored by many users across an environment. Backups of these systems will contain a large number of identical files. Additionally, many users keep multiple versions of files that they are currently working on. Many of these files differ only slightly from other versions, but are seen by backup applications as new data that must be protected (as shown on the slide). Due to this redundant data, the organizations are facing many challenges. Backing up redundant data increases the amount of storage needed to protect the data and subsequently increases the storage infrastructure cost. It is important for organizations to protect the data within the limited budget. Organizations are running out of backup window time and facing difficulties meeting recovery objectives. Backing up a large amount of duplicate data at the remote site or cloud for DR purposes is also very cumbersome and requires lots of bandwidth.

stateful application model

In a stateful application model, the session state information -for example user ID, selected products in a shopping cart, and so on, is stored in compute system memory. However, the information stored in the memory can be lost if there is an outage with the compute system where the application runs. In a persistent state model, the state information is stored out of the memory and is stored in a repository. If a compute system running the application instance fails, the state information will still be available in the repository. A new application instance is created on another server which can access the state information from the database and resume the processing.

stateless application model

In a stateless application model, the server does not store any session state information about the client session. Instead, the session information is stored on the client and passed to the server when the session data is required. Message queuing is often used to buffer requests and load by putting them in a serial or parallel queue, so that the process that is going to service them does not get overloaded and blocked. Message queuing also helps to improve availability by storing the messages in repositories; in the event of a failure, the messages persist in the repository.
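
A minimal sketch of request buffering with an in-memory queue; a production system would use a durable message broker so that messages survive failures, and the worker and sample messages here are assumptions:

# Minimal sketch: clients enqueue requests (carrying their own session data),
# and a worker services them at its own pace instead of being overloaded.
import queue
import threading

requests = queue.Queue()

def worker():
    while True:
        msg = requests.get()          # blocks until a request is available
        if msg is None:               # sentinel: shut the worker down
            break
        print("processing:", msg)     # each request carries its own session state
        requests.task_done()

threading.Thread(target=worker, daemon=True).start()

for item in ["user=alice cart=book", "user=bob cart=laptop"]:
    requests.put(item)                # requests are buffered, not dropped

requests.join()                       # wait until the buffered requests are serviced
requests.put(None)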

best-of-breed cloud infrastructure solution

In an integrated best-of-breed cloud infrastructure solution, organizations have the flexibility to use and integrate infrastructure components from different vendors. This solution allows organizations to design their cloud infrastructure by repurposing their existing infrastructure components (in a brownfield deployment option), providing a cost-effective solution. It enables organizations to select a vendor of their choice for installing the infrastructure components. It also enables an organization to easily switch vendors if a vendor is unable to provide the committed support or meet the SLAs.

Flexibility of access

In cloud computing, applications and data reside centrally and can be accessed from anywhere over a network from any device such as desktop, mobile, and thin client. This eliminates a consumer's dependency on a specific end-point device. This also enables Bring Your Own Device (BYOD), which is a recent trend in computing, whereby employees are allowed to use non-company devices as business machines.

Cloud-only Archiving

In the cloud-only archiving option, the organization's inactive data (both critical and non-critical) that meets the organization's archiving policies is archived to the cloud. The organization can choose either an IaaS or a SaaS archiving service. In IaaS, the organization has the archiving server in its data center and the archiving storage resides in the cloud; the slide illustrates the cloud-only IaaS archiving option. In SaaS, both the archiving server and the archiving storage reside on the cloud infrastructure, and only the archiving agent runs in the data center.

post-processing deduplication

In post-processing deduplication, the backup data is first stored to the disk in its native backup format and deduplicated after the backup is complete. In this approach, the deduplication process is separated from the backup process and the deduplication happens outside the backup window. However, the full backup data set is transmitted across the network to the storage target before the redundancies are eliminated. So, this approach requires adequate storage capacity to accommodate the full backup data set. Service providers can consider implementing target-based deduplication when their backup application does not have built-in deduplication capabilities. It supports the current backup environment without any operational changes. Target-based deduplication reduces the amount of storage required, but unlike source-based deduplication, it does not reduce the amount of data sent across a network during the backup. In some implementations, part of the deduplication functionality is moved to the backup client or backup server. This reduces the burden on the target backup device for performing deduplication and improves the overall backup performance.

IaaS model responsibility

In the IaaS model, the cloud provider is responsible for securing and managing the underlying cloud infrastructure that includes compute, network, storage, and virtualization technologies. On the other hand, the consumer is responsible for securing the OS, database, applications, and data.

PaaS model responsibility

In the PaaS model, the cloud provider is responsible for security, management, and the control of underlying infrastructure components as well as the platform software including OS, programming framework, and middleware. The consumer is only responsible for security and control of applications and data.

SaaS model responsibility

In the SaaS model, the cloud provider is responsible for security, management, and control of the entire cloud stack, and the consumer does not own or manage any aspect of the cloud infrastructure. The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The consumer's responsibility is limited to negotiating the service levels, policies, and compliance with the provider.

Agent Based Backup

In this approach, an agent or client is installed on a virtual machine (VM) or a physical compute system. The agent streams the backup data to the backup device as shown in the figure on the slide. This is a popular way to protect virtual machines because the workflow is the same as for a physical machine, meaning backup configurations and recovery options follow traditional methods that administrators are already familiar with. This approach allows file-level backup and restoration. However, it does not capture virtual machine configuration files and does not provide the ability to back up and restore the VM as a whole. The agent running on the compute system consumes CPU cycles and memory resources. If multiple VMs on a compute system are backed up simultaneously, the combined I/O and bandwidth demands placed on the compute system by the various backup operations can deplete the compute system resources. This may impact the performance of the services or applications running on the VMs. To overcome these challenges, the backup process can be offloaded from the VMs to a proxy server. This can be achieved by using the image-based backup approach.

Infrastructure as Code (IaC)

Infrastructure as Code (IaC) is the process of managing and provisioning IT resources using script files rather than manually configuring resources. Through Infrastructure as Code, organizations define the desired state of their infrastructure resources in code.
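As a hedged illustration of the idea (not any particular IaC tool), the sketch below declares a desired state as data and computes the provisioning actions needed to reach it; all resource names and attributes are hypothetical.

desired_state = {
    "web-01": {"cpu": 2, "memory_gb": 4},
    "web-02": {"cpu": 2, "memory_gb": 4},
}
current_state = {
    "web-01": {"cpu": 2, "memory_gb": 4},
}

def reconcile(desired, current):
    # Return the provisioning actions needed to reach the desired state.
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name, spec))
        elif current[name] != spec:
            actions.append(("update", name, spec))
    for name in current:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

print(reconcile(desired_state, current_state))   # [('create', 'web-02', {'cpu': 2, 'memory_gb': 4})]

Because the desired state lives in a version-controlled file, infrastructure changes can be reviewed and repeated like any other code change.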

Instant VM recovery (Recovery-in-place)

Instant VM recovery (recovery-in-place) refers to running a VM directly from the purpose-built backup appliance, using a backed-up copy of the VM image instead of first restoring that image file to the production system. In the meantime, the VM data is restored to the primary storage from the backup copy. Once the recovery has completed, the workload is redirected to the original VM. One of the primary benefits of the recovery-in-place mechanism is that it eliminates the need to transfer the image from the backup area to the primary storage (production) area before the VM is restarted, so the applications running on those VMs can be accessed more quickly. It reduces the RTO.

TCO & ROI

Investment decisions are driven by total cost of ownership (TCO) and return on investment (ROI).
•TCO estimates the full lifecycle cost of owning service assets. The cost includes capital expenditure (CAPEX), such as the procurement and deployment costs of hardware. It also includes on-going operational expenditure (OPEX), such as power, cooling, facility, and administration costs. CAPEX is typically associated with one-time or fixed costs. Recurring or variable costs are typically OPEX and may vary over the life of a service.
•ROI is a measurement of the expected financial benefit of an investment. It is calculated as the gain from an investment minus the cost of the investment, divided by the cost of the investment.
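The ROI formula above can be made concrete with a small worked example in Python; the figures are invented purely for illustration.

capex = 100_000          # one-time costs (procurement, deployment)
opex_per_year = 20_000   # recurring costs (power, cooling, administration)
years = 3

tco = capex + opex_per_year * years   # total cost of ownership over the period
gain = 220_000                        # expected financial benefit of the investment

roi = (gain - tco) / tco
print(f"TCO: {tco}, ROI: {roi:.0%}")  # TCO: 160000, ROI: 38%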

Permanent unavailability:

It exists until the faulty component is repaired or replaced. Examples of permanent unavailability are network link outage, application bugs, and manufacturing defects.

Cloud to cloud backup

It is important for organizations to protect their data, regardless of where it resides. When an organization uses SaaS-based applications, its data is stored at the cloud service provider's location. Typically, the service provider protects the data, but some service providers may not provide the required data protection. This imposes challenges on organizations in recovering data in the event of data loss. For example, an organization might want to recover a purged email from several months or years ago to be used as legal evidence, and the service provider might be unable to help recover the data. Cloud-to-cloud backup allows consumers to back up cloud-hosted application (SaaS) data to another cloud. As shown in the figure, cloud service provider 1 hosts the applications for a consumer organization, and cloud service provider 2 backs up data from the location of service provider 1 to its own data center.

Business continuity

It is possible for IT services to be rendered unavailable due to causes such as natural disasters, human error, technical failures, and planned maintenance. The unavailability of IT services can lead to significant financial losses to organizations and may also affect their reputations. However, having a remote secondary site for disaster recovery involves more capital expenditure and administrative overheads. Through the use of cloud business continuity solutions, an organization can mitigate the impact of downtime and can recover from outages that adversely affect business operations. For example, an organization may use cloud-based backup for maintaining additional copies of their data, which can be retrieved in the event of an outage. Also, an organization can save on the capital expenses required for implementing a backup solution for their IT infrastructure.

Disaster Recovery-DR:

It is the coordinated process of restoring IT infrastructure, including data that is required to support ongoing cloud services, after a natural or human-induced disaster occurs. The basic underlying concept of DR is to have a secondary data center or site at a preplanned level of operational readiness when an outage happens at the primary data center.

Compute-to-compute network:

It typically uses protocols based on the Internet Protocol (IP). Each physical compute system is connected to a network through one or more host interface devices, called a network interface controller (NIC). Physical switches and routers are the commonly used interconnecting devices. A switch enables different compute systems in the network to communicate with each other. A router is an OSI Layer-3 device that enables different networks to communicate with each other. The commonly used network cables are copper cables and optical fiber cables. The figure on the slide shows a network (LAN or WAN) that provides interconnections among the physical compute systems. It is necessary to ensure that appropriate switches and routers, with adequate bandwidth and ports, are available to provide the required network performance.

LUN (logical unit number)

A LUN (logical unit number) is created by abstracting the identity and internal function of storage systems and appears as physical storage to the compute system. The mapping of virtual to physical storage is performed by the virtualization layer. LUNs are assigned to the compute system to create a file system for storing and managing files. In a shared environment, there is a chance that a LUN can be accessed by an unauthorized compute system. LUN masking is a process that provides data access control by defining which LUNs a compute system can access. This ensures that volume access by compute systems is controlled appropriately, preventing unauthorized or accidental access. In a cloud environment, LUNs are created and assigned to different services based on the requirements. For example, if a consumer requires 500 GB of storage for archival purposes, the service provider creates a 500 GB LUN and assigns it to the consumer. The storage capacity of a LUN can be dynamically expanded or reduced based on the requirements. A LUN can be created from a RAID set (traditional approach) or from a storage pool.
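A minimal sketch of LUN masking, assuming hypothetical LUN and host names: access is granted only if the compute system appears in the masking entry for that LUN.

lun_masking = {
    "LUN_500GB_archive": {"consumer-host-01"},     # the 500 GB LUN from the example
    "LUN_1TB_db": {"db-host-01", "db-host-02"},
}

def can_access(host: str, lun: str) -> bool:
    # Only hosts listed for the LUN are allowed to perform I/O to it.
    return host in lun_masking.get(lun, set())

print(can_access("consumer-host-01", "LUN_500GB_archive"))  # True
print(can_access("rogue-host", "LUN_500GB_archive"))        # False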

Local backup service

Local backup service (managed backup service): This option is suitable when a cloud service provider is already providing some form of cloud services (for example, compute services or SaaS) to the consumers. The service provider may choose to offer backup services to the consumers, helping protect the consumers' data that is being hosted in the cloud. In this approach, the backup operation is completely managed by the service provider.

Modern Applications described

Loosely coupled for easier updates
Use a dynamic, modern infrastructure platform
Services delivered in hours or days, not weeks
Scales horizontally and instantly, and is fault tolerant
Long-term technology commitments reduced because it is easier to replace particular modules

Multi Cloud

Many organizations have started adopting a multi-cloud approach to meet business demands, since no single cloud model can suit the varied requirements and workloads across an organization. Some application workloads run better on one cloud platform, while other workloads achieve higher performance and lower cost on another platform. By adopting a multi-cloud strategy, organizations are able to choose services from different cloud service providers to create the best possible solution for their business. Avoiding vendor lock-in is also a factor in multi-cloud adoption. In addition, some organizations adopt multi-cloud strategies for data control reasons. Certain compliance, regulatory, and governance policies require an organization's data to reside in particular locations. A multi-cloud strategy can help organizations meet those requirements, because they can select different cloud models from various cloud service providers.

A computing infrastructure can be classified as cloud only if it has these 5 essential characteristics.

Measured Service
Rapid Elasticity
Resource Pooling
On-Demand Self-Service
Broad Network Access

Media-based restore:

Media-based restore: If a large amount of data needs to be restored and sufficient bandwidth is not available, then the consumer may request the service provider for data restoration using backup media such as DVDs or disk drives. In this option, the service provider gathers the data to restore, stores the data on a set of backup media, and ships it to the consumer for a fee.

Service Catalog

A menu of services that lists the services, their attributes, service level commitments, terms and conditions for service provisioning, and the prices of services.

Microservice Architecture described

Microservice architecture, or simply microservices, is a distinctive method of developing software systems that has grown in popularity in recent years. In this architecture, the application is decomposed into small, loosely coupled, and independently operating services. A microservice runs in its own process and communicates with other services via REST APIs. Every microservice can be deployed, upgraded, scaled, and restarted independently of the other services in the application. When managed by an automated system, teams can frequently update live applications without negatively impacting users.
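The sketch below shows one microservice running in its own process and exposing a REST endpoint, using only the Python standard library; the service name, port, and route are hypothetical, and other services would call it over HTTP.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CatalogService(BaseHTTPRequestHandler):
    # An independently deployable "catalog" service with its own endpoint.
    def do_GET(self):
        if self.path.startswith("/products/"):
            product_id = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"id": product_id, "name": "sample product"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The service can be deployed, scaled, and restarted independently of others.
    HTTPServer(("localhost", 5001), CatalogService).serve_forever()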

Architecture: (Modern Apps)

Modern application architectures adopt certain traits that enable them to be flexible and make the best use of resources. Their architectures tend to be more component-based so they can perform in a distributed fashion.

Collaboration: (Modern Apps)

Modern application collaboration distributes and scales development efforts across both internal and external audiences, regardless of physical location. The collaboration creates global communities of active peer-review participants that accelerate the production of quality code in modern applications and decrease time to value. Social development fosters innovation because the tools facilitate communication and collaboration between developers to more rapidly and reliably build applications. Distributed source code management provides developers with controls that enable multiple developers to work on code simultaneously. It also provides developers with the flexibility to select which new features to incorporate into production applications.

Scalability: (Modern Apps)

Modern applications must scale software and hardware resources so that the user does not detect any degradation in the application. Modern applications are elastic, handling both user load and failure recovery. Legacy applications have added capacity by scaling up. Modern applications add capacity by scaling either up or out.

Multiplatform: (Modern Apps)

Modern applications recognize the proliferation and constant evolution of platform frontends that need access to the same backend. The access demands of an increasingly global and mobile audience further dictate a backend separation to enable it to scale independently of the many connecting platforms. Modern applications need to cater to both hardware-specific and cross-platform functionality, addressing both web standards such as HTML, CSS, and JavaScript and proprietary standards such as SDKs. Handling this deluge and variety of functionality, standards, and big data often requires developers to use hybrid frameworks and platforms that address these requirements.

Resilience: (Modern Apps)

Modern applications respond to failures and overcome them to maintain operation. In this case, failure is the inability of one part of the application to communicate with another, for example because the communication channel is unavailable, such as during a network outage. Modern applications continue to operate despite such failures. This resilience to failure is necessary for mobile applications that cannot always depend on network or service access. Resilient modern applications assume that software components fail at some point and that it is not enough for the application to identify and log failure. The system must adapt and recover from failure in real time. Components must handle the application load in an elastic fashion to maintain SLA levels. Lastly, architectures are API-enabled so they can connect into multiple ecosystems dynamically to drive services.

Traditional Application vs Modern Application

Monolith -> Distributed
Common programming language -> Multiple programming languages
Closed source -> Open source
Resiliency/scale = infrastructure -> Resiliency/scale = application
Infrastructure is application specific -> Infrastructure is application agnostic
PC-based devices -> Large variety of devices (BYOD)
Separate build/test/run -> Continuous development and deployment
Example: CRM, ERP, email (Outlook) -> Facebook, Uber, Netflix

Monolithic Application

Monolithic applications are developed as tightly coupled code, installable as a single package. Monolithic software is designed as self-contained. The components of the program are interconnected and interdependent rather than loosely coupled as is the case with modular software programs. In a tightly coupled architecture, each component and its associated components must be present to execute or compile the code.

Simplified infrastructure management

Moreover, when an organization uses cloud services, their infrastructure management tasks are reduced to managing only those resources that are required to access the cloud services. The cloud infrastructure is managed by the cloud provider and tasks such as software updates and renewals are handled by the provider.

Multifactor Authentication (MFA)

Multifactor Authentication (MFA) uses more than one factor to authenticate a user. A commonly implemented 2-factor authentication process requires the user to supply both something the user knows, such as a password, and something the user has, such as a device. The second factor might be a password generated by a physical device known as a token, which is in the user's possession. The password generated by the token is valid for a predefined time; after that time is over, the token generates another password. Extra factors may also be considered to further enhance the authentication process; examples include a unique ID number and the user's past activity. A multifactor authentication technique may be deployed using any combination of these factors. A user is granted access to the environment only when all the required factors are validated.
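The sketch below illustrates the second factor with a simplified time-based one-time code derived from a shared secret; it is not any vendor's actual token algorithm, and the secret and time step are made up.

import hashlib
import hmac
import struct
import time

SECRET = b"secret-provisioned-on-the-token"   # hypothetical shared secret
STEP = 60                                     # code changes every 60 seconds

def one_time_code(secret: bytes, at: float) -> str:
    # Derive a 6-digit code from the current time window and the shared secret.
    counter = int(at // STEP)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    return f"{int.from_bytes(digest[-4:], 'big') % 1_000_000:06d}"

def authenticate(password_ok: bool, submitted_code: str) -> bool:
    # Both factors must validate: something known (password) and something owned (token).
    return password_ok and submitted_code == one_time_code(SECRET, time.time())

print(authenticate(True, one_time_code(SECRET, time.time())))  # True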

Multipathing

Multipathing enables service providers to meet aggressive availability and performance service levels. It enables a compute system to use multiple paths for transferring data to a LUN on a storage system. Multipathing enables automated path failover that eliminates the possibility of disrupting an application or service due to the failure of an adapter, cable, port, and so on. In the event of a path failover, all outstanding and subsequent I/O requests are automatically directed to alternative paths.
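A rough sketch of automated path failover, with hypothetical path names: I/O is spread across healthy paths, and a failed path is skipped without disrupting the application.

import itertools

paths = {"hba1->port_a": True, "hba2->port_b": True}   # path -> healthy?
rotation = itertools.cycle(paths)

def send_io(request: str) -> str:
    for _ in range(len(paths)):
        path = next(rotation)
        if paths[path]:                        # use the next healthy path
            return f"{request} sent via {path}"
    raise RuntimeError("no healthy path to the LUN")

print(send_io("write block 17"))
paths["hba1->port_a"] = False                  # simulate an adapter or cable failure
print(send_io("write block 18"))               # automatically redirected to the other path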

Network virtualization

Network virtualization is the process of combining multiple networks into one or more logical networks, or dividing a single network into multiple logical networks, to achieve security. Network virtualization can be implemented in two ways:
•Virtual Local Area Network (VLAN): a virtual network created on a physical LAN. VLAN technology can divide a large LAN into smaller virtual LANs or combine separate LANs into one or more virtual LANs. A VLAN enables communication among a group of nodes based on the functional requirements of the group, independent of each node's location in the network.
•Virtual Storage Area Network (VSAN): a virtual network created on a physical SAN. VSANs provide the capability to build larger consolidated fabrics while still maintaining the required security and isolation between them.
In a cloud environment, VLANs and VSANs ensure security by providing isolation over the shared infrastructure. Each consumer may be provided VLANs and VSANs to ensure that their data is separated from that of other consumers.

Network virtualization

Network virtualization is the technique of abstracting physical network resources to create virtual network resources. Network virtualization software is either built into the operating environment of a network device, installed on an independent compute system or available as hypervisor's capability. Network virtualization software has the ability to abstract the physical network resources such as switches and routers to create virtual resources such as virtual switches. It also has the ability to divide a physical network into multiple virtual networks, such as virtual LANs and virtual SANs. Network virtualization available as a hypervisor's capability can emulate the network connectivity between VMs on a physical compute system. It also enables creating virtual switches that appear to the VMs as physical switches.

Operating System Hardening

The operating system manages all the underlying cloud infrastructure and hides the complexities required to run applications on that infrastructure. It also provides portability and interoperability across cloud environments. Hackers might attack the OS to gain control of the entire environment, so operating system hardening becomes important. Operating system hardening typically includes deleting unused files and programs, installing current operating system updates and patches, and configuring system and network components following a hardening checklist. These hardening checklists are typically provided by operating system vendors or standards organizations.

Orchestrated service delivery

Orchestrated service delivery involves the automated arrangement, coordination, and management of various systems or components in an IT infrastructure. A user makes a service request using the self-service portal. The service request is then passed to the orchestration and automation software, which interacts with components across the infrastructure to invoke provisioning. Appropriate IT resources are provisioned to satisfy the requirements of the service, and the service is then made available to the user through the cloud service component.

Web application hosting

Organizations may host mission-critical applications on a private cloud, while less critical applications are hosted on a public cloud. By deploying less critical applications in the public cloud, an organization can leverage the scalability and cost benefits of the public cloud. For example, e-commerce applications such as online retail stores are often three-tier applications that use public-facing web assets outside the firewall and business-critical assets onsite. These applications can be hosted in the public cloud. Also, such applications typically have dynamic and unpredictable resource requirements, which can be difficult to plan for when hosting them in an organization's private cloud. As mentioned in the cloud bursting use case, such applications can get additional capacity on demand from the public cloud for a limited time period.

Performance monitoring

Performance monitoring evaluates how efficiently different services and infrastructure components are performing. It examines performance metrics such as response time, throughput, I/O wait time, and processor utilization of services and infrastructure components. It helps to identify performance bottlenecks. Performance monitoring also helps in analyzing the performance trends of infrastructure components and finding potential performance degradation or failure of a service. The example illustrates the importance of monitoring the performance of a service. In this context, the service comprises the elements below:
•2 web servers
•1 application server
•1 database server
•2 load balancers distributing client connections across the web servers
•1 IP router enabling network address translation (NAT)
•1 firewall filtering client traffic from the Internet to the web servers
•VLANs 10, 20, 30, and 40 interconnecting the above elements as shown in the figure

Physical security

Physical security is the foundational and critical layer of any IT security strategy. It is important for the cloud service provider to enforce strict policies, processes, and procedures for successful physical security, as securing the cloud environment comes down to having a secure data center. To secure the infrastructure, the cloud service provider must deploy the following physical security measures:
1. 24/7/365 onsite security at the entrance of the data center to guard the data center facility
2. Biometric or security badge-based authentication to grant access to the data center
3. Video surveillance cameras (CCTV) at every vital area of the data center to monitor activity
4. Redundant utilities for Heating, Ventilation, and Air Conditioning (HVAC) systems to ensure that they continue to operate if there is a wide power outage
5. Sensors and alarms to detect unusual activities and fire
6. Restricting visitors from carrying mobile devices, laptops, or USB devices inside the data center, and using metal detection to screen visitors

Preventive control:

Preventive control: The goal is to avoid a vulnerability being exploited by strengthening the security mechanisms against incidents within the cloud. A subset of preventive control is deterrent control, which aims to reduce the likelihood of a vulnerability being exploited in a cloud environment by warning attackers that there are adverse consequences if they proceed. Examples: physical security for the data center, firewalls, hardening, and authentication mechanisms.

Private Virtual LAN (PVLAN):

Private Virtual LAN (PVLAN): A private VLAN (PVLAN) is an extension of the VLAN standard and further segregates the nodes within a VLAN into secondary VLANs. A PVLAN is made up of a primary VLAN and one or more secondary or private VLANs. The primary VLAN is the original VLAN that is being segregated into smaller groups. Each secondary PVLAN exists only inside the primary VLAN. It has a unique VLAN ID and isolates the OSI Layer 2 traffic from the other PVLANs.

ROBO Backup

ROBO backup (remote office/branch office backup): Today, businesses have remote or branch offices spread over multiple locations. Typically, these remote offices have their own local IT infrastructure. This infrastructure includes file, print, web, or email servers, workstations, and desktops, and might also house some applications and databases. Too often, business-critical data at remote offices is inadequately protected, exposing the business to the risk of lost data and productivity. As a result, protecting the data of an organization's branch and remote offices across multiple locations is critical for the business. Traditionally, remote-office data backup was done manually using tapes, which were transported to offsite locations for DR support. Some of the challenges with this approach were the lack of skilled onsite technical resources to manage backups and the risk of sending tapes to offsite locations, which could result in loss or theft of sensitive data. Also, branch offices have less IT infrastructure to manage backup copies, and huge volumes of redundant data exist across remote offices. Managing these ROBO backup environments is costly.

RSA Archer eGRCsolutions

RSA Archer eGRC solutions allow an organization to build an efficient, collaborative enterprise governance, risk and compliance (eGRC) program across IT, finance, operations, and legal domains. With RSA Archer eGRC, an organization can manage risks, demonstrate compliance, automate business processes, and gain visibility into corporate risk and security controls. RSA delivers several core enterprise governance, risk, and compliance solutions, built on the RSA Archer eGRC Platform. Business users have the freedom to tailor the solutions and integrate with multiple data sources through code-free configuration.

RSA Data Protection Manager

RSA Data Protection Manager is an easy-to-use security management tool for managing encryption keys at the database, file server, and storage layers. It is designed to lower the total cost of ownership and simplify the deployment of encryption throughout the enterprise. The key manager is part of RSA Data Protection Manager, which, combined with application encryption and tokenization, forms a comprehensive platform for enforcing and managing the security of sensitive data. Key management is an important part of meeting compliance standards, and with traditional approaches it is difficult for administrators to ensure that keys are handled in a proper manner. Data Protection Manager helps to simplify the implementation of encryption by assisting with the integration of enterprise key management across a diverse information infrastructure.

RSA NetWitness Suite

RSA NetWitness Suite provides universal visibility across a modern IT infrastructure, enabling better and faster detection, investigation, and response to security incidents. The suite includes:
•RSA NetWitness Logs and Packets: captures real-time data from logs and network packets and applies deep analytics and machine learning to detect and recognize threats before the attacker can cause the intended damage.
•RSA NetWitness Endpoint: provides visibility into IT endpoints at the user and kernel level and determines and blocks malicious processes.
•RSA NetWitness SecOps Manager: integrates with RSA Archer eGRC to prioritize, investigate, and respond to security incidents.

RSA SecurID

RSA SecurID is a 2-factor authentication mechanism used to protect network resources. It authenticates the user based on two factors:
•Password/PIN
•Code
A password/PIN is a string of characters set by the user, and the code is a unique number displayed on the SecurID device. RSA SecurID generates a unique code every 60 seconds. The combination of a password and a code is called a passcode. To access a protected resource, the user combines the PIN that the user has set with the code that appears on the device at that given time. The RSA SecurID system then decides to allow or deny access. RSA SecurID is as simple to use as entering a password, but much more secure.

4 approaches to application transformation are

Refactor Revise Retain Retire

Remote backup service:

Remote backup service: In this option, consumers do not perform any backup at their local site. Instead, their data is transferred over a network to a backup infrastructure managed by the cloud service provider. To perform backup to the cloud, cloud backup agent software is typically installed on the servers that need to be backed up. After installation, this software establishes a connection between the server and the cloud where the data will be stored. The backup data transferred between the server and the cloud is typically encrypted to make the data unreadable to an unauthorized person or system. Deduplication can also be implemented to reduce the amount of data sent over the network (bandwidth reduction) and reduce the cost of backup storage.

Replicated backup service:

Replicated backup service: This option is for a consumer that performs backup at its local site but does not want to own, manage, or incur the expense of a remote site for disaster recovery purposes. Such consumers choose a replicated backup service, where the backup data at their site is replicated to the cloud (the remote disaster recovery site).

Role Based Access Control (RBAC)

Role Based Access Control (RBAC) is a secure method of restricting access to the user based on their respective roles. A role may represent a job function, for example, a storage administrator. The only privileges assigned to the role are the privileges required to perform the tasks associated with that role. This method provides a greater degree of control over cloud resources. Also, clear separation of duties must be ensured so that no single individual can both specify an action and carry it out. For example, the person who authorizes the creation of administrative accounts should not be the person who uses those accounts.
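A minimal RBAC sketch in Python, with hypothetical roles and privileges: privileges attach to roles, users are assigned roles, and an action is allowed only if the user's role grants it.

roles = {
    "storage_admin": {"create_lun", "expand_lun"},
    "account_admin": {"create_account"},
}
user_roles = {"alice": "storage_admin", "bob": "account_admin"}

def is_allowed(user: str, action: str) -> bool:
    # A user holds only the privileges granted to their assigned role.
    return action in roles.get(user_roles.get(user, ""), set())

print(is_allowed("alice", "create_lun"))   # True
print(is_allowed("bob", "create_lun"))     # False (role does not grant the privilege)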

Role-based access

Role-based access defines who can use the platform and how. Cloud-native platform uses role-based access control (RBAC), with each role granting permissions to a specific environment the user is targeting.

Sandboxing

Sandboxing is another mechanism used for application security. Customers run many of their applications on cloud infrastructure, and cloud providers themselves offer many applications in the SaaS model. It becomes vital to isolate trusted applications and data from other applications running on the same infrastructure, which might contain malware functionality. The sandboxing mechanism provides this isolation capability by packaging the application and data with the infrastructure that it runs on, along with the security policies. It restricts the resources and privileges that the application can access, isolating the execution of one application from other applications. When creating a sandbox environment, an administrator defines the resources that an application can access while being tested. Sandboxing is typically used for testing and verifying unproven or untrusted applications. It replicates the production environment to test and verify the untrusted application so that it does not affect the other, trusted applications. It can also be used to analyze threats in the environment. Sandboxing is also used during the software development lifecycle, where a uniform environment can be created for the development, testing, and production teams to avoid discrepancies between environments.

Security monitoring

Security monitoring tracks unauthorized access and configuration changes to the cloud infrastructure and services. It detects all service-related operations and data movement that deviate from the security policies implemented and included in the service contract by the provider. Security monitoring also detects unavailability of services to authorized consumers due to security breaches. Physical security within the cloud service provider's premises should also be continuously monitored using appropriate technologies such as badge readers, biometric scans, and video cameras.

Security-as-a-Service (SECaaS)

Security-as-a-Service (SECaaS) is a service that delivers various security mechanisms through the cloud. It offers mechanisms like IAM, IDPS, encryption, and anti-virus delivered over the Internet. Customers can use this service from the cloud service provider for security management, which reduces the security management burden on the customers and enables them to focus on their core competencies. It provides greater security expertise than the expertise available within the organization, and it saves time and money for customers by outsourcing the administrative tasks related to security. The challenge associated with SECaaS is that, as in every SaaS deployment, consumers do not have complete visibility and control over the service. The consumer is responsible for setting the security policies, but the cloud service provider manages the service. The key to cloud security is therefore selecting a cloud service provider with strong security services and policies.

Server-centric backup:

Server-centric backup: This is the traditional data protection solution, which is focused on backing up data from the servers onto backup storage such as disk or tape. After a period of time, backups are sent or copied to an offsite location. A separate backup infrastructure is used to control all backup operations. It is a one-size-fits-all solution that works in the same way irrespective of the capability of the underlying infrastructure of the servers. The limitation of this solution is that it may not scale well with the rapid growth of data and servers in a data center. Depending upon the backup storage media, data recovery may be slow and may require recall of offsite tapes.

Service Automation

Service automation overcomes the challenges of service provisioning and service management to achieve business agility. Once the organizations have infrastructure, the next step is to deploy the tools that enable the users to request services using self-service portal and catalog. Also, it is critical to automate IT operations and service delivery. The automation functions include the automated operations management and orchestrated service delivery processes. These service deliveries are used to provision the appropriate resources and trigger the tasks that make up a service. Any manual process that must be done as a predictable, repeatable step must be eliminated via management, automation, and orchestration tools across the IT stack.

Service automation provides several benefits.

Service automation provides several benefits. The key benefits are: reduced service provisioning time, elimination of manual errors, reduced operating expenses, and simplified cloud infrastructure management.

The key functions of service catalog management are described below:

•The service catalog management team is responsible for the design and implementation of the service catalog. It updates the service catalog to incorporate new service offerings or changes to the existing service offerings. Changes to the service offerings are communicated to service catalog management through an orchestrated workflow that first routes change decisions through a change management process. Following affirmative change decisions, the service catalog management team updates the service catalog to include the new services and/or changes to the existing service offerings. Change management is described later in this module.
•The service catalog management team ensures that the information in the service catalog is up to date. It emphasizes clarity, completeness, and usefulness when describing service offerings in the service catalog. It also ensures that the features, intended use, comparisons, prices, order process, and levels of services are unambiguous and valuable to the consumers.

Service management

The service management function specifies the adoption of activities related to service portfolio management and service operation management. Adoption of these activities enables an organization to align the creation and delivery of cloud services to meet their business objectives and to meet the expectations of cloud service consumers.

Service operation management

Service operation management ensures that cloud services are delivered effectively and efficiently and meet consumer requirements. It involves on-going management activities to maintain the cloud infrastructure and deployed services. All of these activities have the goal of ensuring that services and service levels are delivered as committed. The slide lists the common service operation management activities.

Service operation management:

Service operation management: It maintains cloud infrastructure and deployed services, ensuring that services and service levels are delivered as committed.

Service orchestrator software enables

Service orchestrator software enables the automated arrangement, coordination, and management of various system or component functions in a cloud infrastructure to provide and manage cloud services. Cloud service providers typically deploy purpose-designed orchestration software, or an orchestrator, that orchestrates the execution of various system functions. The orchestrator programmatically integrates and sequences various system functions into automated workflows for executing higher-level service provisioning and management functions. The orchestration workflows are not only meant for fulfilling requests from consumers; they also help in administering the cloud infrastructure, such as adding resources to a resource pool, billing, reporting, and so on.
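A simplified sketch of such a workflow, with hypothetical step names: the orchestrator sequences individual system functions into one automated flow that fulfils a provisioning request.

def allocate_compute(req): print("allocate VM:", req["vm_size"])
def configure_network(req): print("attach to VLAN:", req["vlan"])
def provision_storage(req): print("assign LUN of", req["storage_gb"], "GB")
def update_billing(req): print("start metering for", req["consumer"])

provisioning_workflow = [allocate_compute, configure_network,
                         provision_storage, update_billing]

request = {"consumer": "org-42", "vm_size": "medium", "vlan": 20, "storage_gb": 500}
for step in provisioning_workflow:   # each system function runs in the defined order
    step(request)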

Service portfolio management:

Service portfolio management:It defines the suite of service offerings, called service portfolio, aligning it to the provider's strategic business goals.

Cloud Service LifeCycle - Service Termination (phase 4)

Service termination deals with the end of the relationship between the cloud service provider(CSP) and cloud service consumer(CSC). The reasons for the service termination by the CSP and CSC are listed on the slide.

Single Sign-on (SSO)authentication

The Single Sign-on (SSO) authentication feature enables the user to use one set of credentials to access multiple applications residing in the cloud and in their data center. This mechanism saves time for users, as they do not have to enter their credentials each time they access a different application. They also do not have to remember many usernames and passwords. Example: in a hybrid cloud, users can enter the same credentials to access multiple applications across the on-premise data center and the public cloud.

Software-defined compute (SDC)

Software-defined compute (SDC) is an approach to provisioning compute resources using compute virtualization technology enabled by the hypervisor. The hypervisor decouples the application and the operating system (OS) from the hardware of the physical compute system and encapsulates them in an isolated virtual container called a virtual machine (VM). It controls the allocation of hardware resources such as processing power and memory space to the VMs based on the configured policies. This means the hardware configuration of a VM is defined and maintained using software. This is in contrast to a traditional data center, where a compute system is defined by its physical hardware devices.

Software-defined infrastructure

Software-defined infrastructure is an approach that abstracts, pools, and automates all resources in a data center environment to achieve IT-as-a-service (ITaaS). Intelligent, policy-driven software controls and manages SDI. It enables organizations to evolve beyond outdated, hardware-centric architectures and create an automated, easily managed infrastructure. It supports both second platform and third platform applications for fast deployment across data centers.

Software-defined networking (SDN)

Software-defined networking (SDN) is a networking approach in which SDN software, or a controller, controls the switching and routing of network traffic. The SDN controller abstracts the physical details of the network components and separates the control plane functions from the data plane functions. It provides instructions for the data plane to handle network traffic based on policies configured on the SDN controller. Like the SDS controller, the SDN controller also provides a CLI and GUI for administrators to manage the network infrastructure. It also allows them to configure policies and APIs for external management tools and applications to interact with the SDN controller. Commonly known SDN controllers include the OpenDaylight open-source SDN controller, the OpenContrail SDN controller, VMware NSX, the Floodlight open SDN controller, and the FlowVisor OpenFlow controller.

Software-defined:

Software-defined on commodity hardware is a cost-efficient way to support modern applications. Further, a software-defined approach allows organizations to automate the configuration of IT resources and the delivery of policy-based IT resources through automation. A software-defined approach enables faster provisioning of IT resources, ensuring that IT is no longer the bottleneck for application development cycles.

Software-defined storage (SDS)

Software-defined storage (SDS) is an approach to provisioning storage resources in which software, the SDS controller, controls storage-related operations independent of the underlying physical storage infrastructure. The physical storage infrastructure may include multiple storage types, including commodity storage resources, and may support various data access methods such as block, file, and object. The SDS controller abstracts the physical details of storage, such as drive characteristics, formats, location, and low-level hardware configuration. It delivers virtual storage resources such as a block storage volume and a share drive. The SDS controller controls the creation of virtual storage from physical storage systems and allocates storage capacity based on policies configured on the SDS controller. The most commonly known SDS controllers are OpenSDS, CoprHD, and the OpenStack Swift storage controller.

performance management

The goals of performance management are to monitor, measure, analyze, and maintain or improve the performance of cloud infrastructure and services. The key functions of performance management are described below:

Monitoring

Some key benefits of monitoring are described below:
•Monitoring provides information on the availability and performance of various services and the infrastructure components or assets on which they are built. Cloud administrators use this information to track the health of infrastructure components and services.
•Monitoring helps to analyze the utilization and consumption of cloud resources by service instances. This analysis facilitates capacity planning, forecasting, and optimal use of these resources.
•Monitoring events helps to trigger automated routines or recovery procedures. Examples of events are a change in performance, a change in the availability state of a component or a service, and so on. Such procedures can reduce downtime due to known infrastructure errors and reduce the level of manual intervention needed to recover from them.
•Monitoring is the foundation of metering, reporting, and alerting. Monitoring helps in metering resource consumption by service instances. It helps in generating reports for billing and trends. It also helps to trigger alerts when thresholds are reached, security policies are violated, and service performance deviates from the SLA. Alerting and reporting are detailed later in this lesson.
•Additionally, monitored information may be made available to the consumers and presented as metrics of the cloud services.
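As a small illustration of the alerting that monitoring enables, the sketch below raises an alert whenever a sampled metric crosses its threshold; the metrics and limits are invented.

thresholds = {"response_time_ms": 200, "cpu_utilization_pct": 85}
samples = {"response_time_ms": 240, "cpu_utilization_pct": 70}

def check(samples: dict, thresholds: dict) -> list:
    # Return an alert message for every metric that breaches its threshold.
    return [f"ALERT: {metric}={value} exceeds {thresholds[metric]}"
            for metric, value in samples.items() if value > thresholds[metric]]

for alert in check(samples, thresholds):
    print(alert)   # ALERT: response_time_ms=240 exceeds 200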

Retain:

Some of the traditional application workloads are useful to the business but not suitable for the cloud. These workloads continue in their existing environments and benefit from the ongoing improvements in data center infrastructures. For example, the native applications with proprietary architectures and legacy code are difficult to move and run in the cloud. And these applications are difficult to refactor or revise.

Source-based Data Deduplication:

Source-based Data Deduplication: It eliminates redundant data at the source (backup client) before transmission to the backup device. The deduplication software or agent on the client checks each file or block for duplicate content. Source-based deduplication reduces the amount of data that is transmitted over a network from the source to the backup device, thus requiring less network bandwidth. There is also a substantial reduction in the capacity required to store the backup data. However, a deduplication agent running on the client may impact backup performance, especially when a large amount of data needs to be backed up. When image-level backup is implemented, the backup workload is moved to a proxy server. The deduplication agent is installed on the proxy server to perform deduplication without impacting the VMs running applications. A service provider can implement source-based deduplication when performing backup (backup as a service) from the consumer's location to the provider's location.
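The sketch below shows the core idea: the backup agent fingerprints each chunk and transmits only chunks the backup device has not already stored. The chunk size and data are made up for illustration.

import hashlib

stored_chunks = set()   # fingerprints already held by the backup device

def backup(data: bytes, chunk_size: int = 8) -> int:
    # Return the number of bytes actually sent over the network for this backup.
    sent = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in stored_chunks:   # only new, unique chunks travel
            stored_chunks.add(fingerprint)
            sent += len(chunk)
    return sent

print(backup(b"ABCDEFGH" * 4))   # first backup: 8 bytes sent (one unique chunk)
print(backup(b"ABCDEFGH" * 4))   # repeat backup: 0 bytes sent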

Risk Management

Step 1 - Risk identification: Points to the various sources of threats that cause risk. After identifying risks in a cloud, document the risks and their sources, and classify them into meaningful severity levels.
Step 2 - Risk assessment: Determines the extent of the potential threat and the risks associated with cloud resources. The output of this process helps the cloud service provider to identify appropriate controls for reducing or eliminating risk during the risk mitigation process.
Step 3 - Risk mitigation: Involves planning and deploying various security mechanisms that can either mitigate the risks or minimize their impact.
Step 4 - Monitoring: Involves continuous observation of existing risks and security mechanisms to ensure their proper control. This step also watches for new risks that may arise. If a new risk is identified, the entire process is repeated.

Compute-to-storage network:

Storage may be connected directly to a compute system or over a storage area network (SAN). A SAN enables the compute systems to access and share storage systems. Sharing improves the utilization of the storage systems. Using a SAN facilitates centralizing storage management, which in turn simplifies and potentially standardizes the management effort. SANs are classified based on the protocols they support. Common SAN deployment types are Fibre Channel SAN (FC SAN), Internet Protocol SAN (IP SAN), and Fibre Channel over Ethernet SAN (FCoE SAN). Connectivity and communication between compute and storage are enabled through physical components and interface protocols. The physical components that connect compute to storage are the host interface device, port, and cable.

Storage resiliency

Storage resiliency can also be achieved by using a storage virtualization appliance. A virtualization layer created at the SAN using a virtualization appliance abstracts the identity of physical storage devices and creates a storage pool from heterogeneous storage systems. A virtual volume is created from the storage pool and assigned to the compute system. Instead of being directed to the LUNs on the individual storage systems, the compute systems are directed to the virtual volume in the virtualization layer.

Storage virtualization

Storage virtualization is the technique of abstracting physical storage resources to create virtual storage resources. Storage virtualization software has the ability to pool and abstract physical storage resources, and present them as a logical storage resource, such as virtual volumes, virtual disk files, and virtual storage systems. Storage virtualization software is either built into the operating environment of a storage system, installed on an independent compute system, or available as hypervisor's capability.

Synchronous replication

A storage-based remote replication solution can avoid downtime by enabling business operations at remote sites. Storage-based synchronous remote replication provides a near-zero RPO, where the target is identical to the source at all times. In synchronous replication, writes must be committed to the source and the remote target prior to acknowledging "write complete" to the production compute system. Additional writes on the source cannot occur until each preceding write has been completed and acknowledged. This ensures that data is identical on the source and the target at all times. Further, writes are transmitted to the remote site exactly in the order in which they are received at the source. Therefore, write ordering is maintained, ensuring transactional consistency when the applications are restarted at the remote location. As a result, the remote images are always restartable copies. The figure on the slide illustrates an example of synchronous remote replication. If the source site is unavailable due to a disaster, the service can be restarted immediately at the remote site to meet the required SLA.
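A toy sketch of the write path, purely illustrative: the acknowledgement is returned only after both copies commit, so the source and target never diverge.

source_volume, target_volume = [], []

def synchronous_write(block: str) -> str:
    source_volume.append(block)              # commit to the source
    target_volume.append(block)              # commit to the remote target, in order
    assert source_volume == target_volume    # near-zero RPO: copies stay identical
    return "write complete"                  # only now is the ack sent to the host

print(synchronous_write("block-1"))
print(synchronous_write("block-2"))
print(target_volume)                         # ['block-1', 'block-2'], a restartable copy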

Key Attributes for IT to transform?

Strategic business partner
Service-centric
Manage supply and demand of services
Build products & services that support business objectives
Cultivate line of business relationships

System and Application Vulnerabilities

System and application vulnerabilities are exploitable bugs in programs that hackers can use to gain access and compromise the security of the system. Vulnerabilities within a system or an application could be due to program errors or intended features. Vulnerabilities indicate that the system or application is open to attacks. As cloud computing supports multitenancy and resource pooling, it creates an attack surface for hackers. This type of threat can be controlled by installing security patches or upgrades, regular vulnerability scanning, and preventing applications from gaining access to all files on the disk or blocking access to other applications on the disk. The cost of having processes in place to discover and rectify system or application vulnerabilities is small compared to the damage that they can cause to the business.

Target-based Data Deduplication:

Target-based Data Deduplication: It occurs at the backup device, which offloads the deduplication process and its performance impact from the backup client. In target-based deduplication, the backup application sends data to the target backup device, where the data is deduplicated either immediately (inline) or at a scheduled time (post-process). With inline data deduplication, the incoming backup stream is divided into small chunks and then compared to data that has already been deduplicated. The inline deduplication method requires less storage space than the post-process approach. However, inline deduplication may slow down the overall data backup process. Some vendors' inline deduplication systems leverage the continued advancement of CPU technology to increase the performance of inline deduplication by minimizing the disk accesses required to deduplicate data. Such inline deduplication systems identify duplicate data segments in memory, which minimizes the disk usage.

Technical mechanisms

Technical mechanisms are implemented through tools or devices deployed on the cloud infrastructure. To protect cloud operations, the technical security mechanisms are further classified into two types:
•Mechanisms deployed at the application level: Application security is a critical component of any IT security strategy. Applications running on cloud infrastructure are frequently accessed via the network, which means they become vulnerable to a variety of threats. Various security mechanisms must be deployed at the application level by cloud providers and consumers to provide a secure environment for application users. These mechanisms include Identity and Access Management (IAM), role-based access control, application hardening, and sandboxing.
•Mechanisms deployed at the infrastructure level: It is equally important to secure the cloud infrastructure that runs the cloud provider's services. Apart from securing the infrastructure physically, various technical mechanisms must be deployed at the compute, network, and storage level to protect sensitive data. These mechanisms include firewalls, Intrusion Detection and Prevention Systems (IDPS), network segmentation, Virtual Private Networks (VPN), encryption, and data shredding.

Agile Model

The Agile model is more adaptive, offering an iterative approach with limited features added per iterative cycle. The features are estimated, while resources and time are fixed. This approach ensures that key features are built first, and proven viable before the project proceeds.

Lean practices described

The goal of lean practices is to create a high-quality software product in the shortest time period at the lowest cost.

The VMware vCenterOperations Management Suite

The VMware vCenter Operations Management Suite includes a set of tools that automates performance, capacity, and configuration management, and provides an integrated approach to service management. It enables IT organizations to ensure service levels, optimum resource usage, and configuration compliance in virtualized and cloud environments. The vCenter Operations Management Suite includes four components, described below:
•vCenter Operations Manager provides operations dashboards to gain visibility into the cloud infrastructure. It identifies potential performance bottlenecks automatically and helps remediate them before consumers notice problems. Further, it enables optimizing usage of capacity and performs capacity trend analysis.
•vCenter Configuration Manager automates configuration management tasks such as configuration data collection, configuration change execution, configuration reporting, change auditing, and compliance assessment. This automation enables organizations to maintain configuration compliance and to enforce IT policies, regulatory requirements, and security hardening guidelines.
•vCenter Hyperic monitors hardware resources, operating systems, middleware, and applications. It provides immediate notification if application performance degrades or the application becomes unavailable. The notification enables administrators to ensure the availability and reliability of business applications.
•vCenter Infrastructure Navigator automatically discovers application services running on the VMs and maps their dependency on IT infrastructure components.

Waterfall model

The Waterfall model is typically used in traditional application development and can be described as a linear, sequential model of software design. Requirements are fixed at the beginning, and resources and time are estimated, but often need to be adjusted later at greater cost.

Scale out:

The ability to scale to the ever-increasing levels of web-connected users and the load they produce is a core characteristic of modern applications. So, the infrastructure provides the capability to support the elastic scaling of resources to meet modern application requirements. The systems architecture of modern IT infrastructure uses the concept of auto scaling to enable scaling on demand. Resource instances must be created quickly in these fast-paced distributed environments. Such resources are modular and have a minimal footprint, which enables them to scale more easily. For example, converged and hyper-converged infrastructure enables IT organizations to start small and easily scale capacity and performance nondisruptively to support modern applications.
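A minimal sketch of an auto scaling decision, with made-up numbers: instances are added when the load exceeds what the current count can handle at a target load per instance, and removed when demand drops.

TARGET_LOAD_PER_INSTANCE = 100   # e.g. requests/second one instance handles comfortably

def desired_instances(current_load: int) -> int:
    # Ceiling division: enough instances so that none exceeds the target load.
    return max(1, -(-current_load // TARGET_LOAD_PER_INSTANCE))

print(desired_instances(250))   # 3 -> scale out
print(desired_instances(90))    # 1 -> scale back in when demand drops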

Revise:

The application is enriched by adding new modules and features using the cloud, while making minimal changes to the legacy code. The application becomes a hybrid of old and new modules, cooperating through integration and messaging patterns using HTTP, REST, AMQP, and other protocols of information exchange.

Cloud Reference Architecture

The cloud computing reference architecture is an abstract model that characterizes and standardizes the functions of a cloud computing environment. The figure on the slide is the reference architecture depicting the core components of the cloud architecture and some of the key functions that specify various activities, tasks, and processes that are required to offer reliable and secure cloud services to the consumers.

Supplier Management

The goal of supplier management is to ensure that all contracts with the suppliers of cloud products, technologies, and supporting services meet the business requirements, and that the suppliers adhere to their contractual commitments. Cloud service providers usually obtain IT products and services from multiple suppliers to build and maintain cloud infrastructure and provide cloud services to their consumers. Examples of suppliers to cloud service providers are: •Hardware and software vendors •Network and telecom service providers •Cloud service brokers, and so on

The financial management team

The financial management team, in collaboration with the provider's finance department, plans for investments. Financial management helps to provide cloud services and determines the IT budget for cloud infrastructure and operations for the lifecycle of services. The financial management team is responsible for providing any necessary business cases for investments. The finance department may help with cost analysis, budget adjustment, and accounting. The business case usually includes financial justification for a service-related initiative. It also includes the demand forecast of services, service stakeholder inputs, sources of initial and long-term funding, and the value proposition for the business. The business case provides visibility into the financials and helps communicate the initiatives to the top executives.

Detective control:

The goal is to detect an attack that has occurred in the cloud environment and alert the monitoring team. Detective controls are used when preventive controls have failed. Examples: audit trails and logs.

The goal of availability management is

The goal of availability management is to ensure that the stated availability commitments are consistently met. The availability management process optimizes the capability of cloud infrastructure, services, and the service management team to deliver a cost-effective and sustained level of service that meets SLA requirements. The activities of the availability management team are described below: •Gathers information on the availability requirements for upgraded and new services. Different types of cloud services may be subject to different availability commitments and recovery objectives. A provider may also decide to offer different availability levels for the same type of service, creating tiered services. •Proactively monitors whether the availability of existing cloud services and infrastructure components is maintained within acceptable and agreed levels. The monitoring tools identify differences between the committed availability and the achieved availability of services and notify administrators through alerts. •Interacts with incident and problem management teams, assisting them in resolving availability-related incidents and problems. Through this interaction, incident and problem management teams provide key input to the availability management team regarding the causes of service failures. Incident and problem management also provide information about errors or faults in the infrastructure components that may cause future service unavailability. With this information, the availability management team can quickly identify new availability requirements and areas where availability must be improved. •Analyzes, plans, designs, and manages the procedures and technical features required to meet current and future availability needs of services at a justifiable cost. Based on the SLA requirements of enhanced and new services, and the areas found for improvement, the team provides inputs. The inputs may suggest changes to existing business continuity (BC) solutions or architecting new solutions that provide more tolerance and resilience against service failures. Some examples of BC solutions are clustering of compute systems and replicating databases and file systems.
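
Achieved availability is commonly computed as uptime divided by total time over the measurement period and compared against the committed level; the figures in the sketch below are examples only.

    # Compare achieved availability against a committed level (illustrative values).
    def availability(uptime_hours, downtime_hours):
        total = uptime_hours + downtime_hours
        return uptime_hours / total if total else 0.0

    committed = 0.999                                                 # example: "three nines"
    achieved = availability(uptime_hours=729.0, downtime_hours=1.0)   # a 730-hour month
    print(f"achieved={achieved:.4%}, meets commitment: {achieved >= committed}")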

The goal of capacity management is

The goal of capacity management is to ensure that a cloud infrastructure is able to meet the required capacity demands, and that the capacity demands for cloud services are met in a cost-effective and timely manner. Capacity management ensures that peak demands from consumers can be met without compromising SLAs, and at the same time optimizes capacity utilization by minimizing spare and stranded capacity. The key functions of capacity management are described below: •The capacity management team determines the optimal amount of resources required to meet the needs of a service, regardless of dynamic resource consumption and seasonal spikes in resource demand. With too few resources, consumers may have to wait for resources or their requests may be rejected until more resources are available. With too many resources, the service cost may rise due to maintenance of many unused, spare resources. Effective capacity planning maximizes the utilization of available capacity without impacting service levels. Common methods to maximize capacity utilization are: •Resource pooling •Automated VM load balancing across hypervisors •Dynamic scheduling of virtual processors across processing cores •Thin provisioning •Automated storage tiering •Dynamic VM load balancing across storage volumes •Converged network •WAN optimization •Automatic reclamation of capacity when a service is terminated
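
A simple way to track whether spare capacity is shrinking toward the point where expansion is needed is sketched below; the pool size, allocation, and headroom figure are placeholders.

    # Report utilization and spare capacity of a resource pool (placeholder figures).
    def capacity_report(total_capacity, allocated, reserved_headroom=0.10):
        spare = total_capacity - allocated
        utilization = allocated / total_capacity
        # Flag the pool when spare capacity drops below the reserved headroom.
        needs_expansion = spare < total_capacity * reserved_headroom
        return {"utilization": utilization, "spare": spare, "expand": needs_expansion}

    print(capacity_report(total_capacity=1000, allocated=920))   # e.g., a 1000 GB pool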

The goal of change management

The goal of change management is to standardize change-related procedures in a cloud infrastructure for prompt handling of all changes with minimal impact on service quality. A good change management process enables the cloud service provider to respond to changing business requirements in an agile way. Change management greatly minimizes risk to the organization and its services. Examples of changes are: •Introduction of a new service offering •Modification of an existing service's attributes •Retirement of a service •Hardware configuration change •Expansion of a resource pool •Software upgrade •Change in process or procedural documentation

The goal of incident management is

The goal of incident management is to restore cloud services to a normal operational state as quickly as possible. An incident may not always cause a service failure. For example, the failure of a disk from a mirrored set of RAID-protected storage does not cause data unavailability. However, if not addressed, recurring incidents may lead to service interruption in the future. The key functions of incident management are described below:

The goal of information security management is

The goal of information security management is to prevent the occurrence of incidents or activities adversely affecting the confidentiality, integrity, and availability of information and processes. It protects corporate and consumer data to the extent required to meet regulatory or compliance concerns, both internal and external, and at reasonable/acceptable costs. The interests of all stakeholders of a cloud service, including consumers who rely on information and the IT infrastructure, are considered. The key functions of information security management are described below: •The information security management team implements the cloud service provider's security requirements. It develops information security policies that govern the provider's approach towards information security management. These policies may be specific to a cloud service, an external service provider, or an organizational unit, or they can be uniformly applicable. Top executive management approves the information security policies. These security assurances are often detailed in SLAs and contracts. Information security management requires periodic reviews and, as necessary, revision of these policies. •The information security management team establishes a security management framework aligned with the security policies. The framework specifies the security architecture, processes, mechanisms, tools, responsibilities for both consumers and cloud administrators, and standards needed to ensure information security in a cost-effective manner. The security architecture describes the following: •The structure and behavior of security processes •Methods of integrating security mechanisms with the existing IT infrastructure •Service availability zones •Locations to store data •Security management roles.

hybrid cloud archiving

The hybrid cloud has become the choice for many organizations. In the hybrid archiving option, an organization archives critical data to the on-premise archiving infrastructure and archives non-critical data to the public cloud archiving infrastructure. The hybrid cloud allows organizations to distribute the archiving workload and also to make use of the public cloud for rapid resource provisioning. The figure on the slide illustrates an organization's critical data being archived to the private cloud and non-critical data being archived to the public cloud.

The key benefits of data deduplication are:

The key benefits of data deduplication are: Reduces infrastructure costs: By eliminating redundant data from the backup, the infrastructure requirement is minimized. Data deduplication directly results in reduced storage capacity needed to hold backup images. Smaller capacity requirements mean lower acquisition costs as well as reduced power and cooling costs. Enables longer retention periods: As data deduplication reduces the amount of content in the daily backup, users can extend their retention policies. This can be a significant benefit to users who require longer retention. Reduces the backup window: Data deduplication eliminates redundant content in the backup data, which results in backing up less data and a reduced backup window. Reduces backup bandwidth requirements: By performing data deduplication at the client, redundant data is removed before the data is transferred over the network. This considerably reduces the network bandwidth required for sending backup data to a remote site for DR purposes.
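
A minimal sketch of how deduplication identifies redundant content: data is split into fixed-size chunks, each chunk is fingerprinted with a hash, and only chunks with unseen fingerprints are stored. The chunk size and the in-memory index are simplifying assumptions; products typically use variable-size chunking and persistent indexes.

    # Fixed-size chunk deduplication sketch (in-memory index; illustrative only).
    import hashlib

    CHUNK_SIZE = 4096        # assumed chunk size
    stored_chunks = {}       # fingerprint -> chunk data (stands in for the backup store)

    def backup(data: bytes):
        recipe = []          # ordered fingerprints needed to rebuild the backup image
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in stored_chunks:      # store only previously unseen chunks
                stored_chunks[fp] = chunk
            recipe.append(fp)
        return recipe

    recipe = backup(b"A" * 10000 + b"B" * 10000)
    print(len(recipe), "chunks referenced,", len(stored_chunks), "chunks stored")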

Cross Functional Team

The organization must create cross-functional teams to achieve a common goal. A cross-functional team consists of people from different business units with expertise in different areas working together toward a common goal. People from various departments such as operations, finance, development, production, stakeholders, and IT may be grouped to form a cross-functional team. Assigning tasks to the cross-functional team increases the level of creativity, where each person often brings a different perspective to the problem and a better solution to complete the task. The cross-functional team is often called a self-directed team because it obtains the input and expertise of various departments for a specific task. Members of a cross-functional team must be able to multitask, since they work simultaneously for the cross-functional team and on their own department's work.

Physical Infrastructure

The physical infrastructure forms the foundation of a cloud infrastructure. It includes equipment such as compute systems, storage systems, and networking devices along with the operating systems, system software, protocols, and tools that enable the physical equipment to perform their functions. A key function of physical infrastructure is to execute the requests generated by the virtual and software-defined infrastructure, such as storing data on the storage devices, performing compute-to-compute communication, executing programs on compute systems, and creating backup copies of data.

The problem management process

The problem management process detects problems and ensures that the underlying root cause that creates a problem is identified. Incident and problem management, although separate service management processes, require automated interaction between them and use integrated incident and problem management tools. •The problem management team minimizes the adverse impact of incidents and problems causing errors in the cloud infrastructure, and initiates actions to prevent recurrence of incidents related to those errors. Problem handling activities may occur both reactively and proactively. •Reactive problem management: It involves a review of all incidents and their history for problem identification. It prioritizes problems based on their impact to business and consumers. It identifies and investigates the root cause that creates a problem and initiates the most appropriate solution and/or preventive remediation for the problem. If a complete resolution is not available, problem management provides solutions to reduce or eliminate the impact of a problem. •Proactive problem management: It helps prevent problems. Proactive analysis of errors and alerts helps the problem management team identify and solve errors before a problem occurs. •Problem management is responsible for creating the known error database. After problem resolution, the issue is analyzed and a determination is made whether to add it to the known error database. Inclusion of resolved problems in the known error database provides an opportunity to learn and better handle future incidents and problems.

Cloud Service LifeCycle - Service Creation (phase 2)

In the second phase of the cloud service lifecycle, providers aim at defining services in the service catalog and creating workflows for service orchestration. Some of the common activities during service creation are defining the service template, creating the orchestration workflow, defining the service offering, and creating the service contract.

The service catalog design and implementation process consists of a sequence of steps.

The service catalog design and implementation process consists of a sequence of steps. 1. Create service definition: Creating a definition for each service offering is the first step in designing and implementing the service catalog. A service definition comprises service attributes such as service name, service description, features and options, provisioning time, and price. The cloud portal software provides a standard user interface to create service definitions. The interface commonly provides text boxes, check boxes, radio buttons, and drop-downs to make entries for the service attributes. 2. Define service request: Once the service definition is created, define the web form used to request the service. The portal software includes a form designer for creating the service request form that consumers use to request the service. 3. Define fulfillment process: Once the service request form is defined, the next step is to define the process that fulfills delivery of the service. Once the process is modeled, approved, and validated, it is implemented using workflows in the orchestrator. 4. Publish service: The final step is to publish the service catalog to the consumers. Before publishing, it is a good practice to perform usability and performance testing. After the service is published, it becomes available to consumers on the cloud portal.
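
The sketch below shows what a service definition from step 1 might look like if captured as structured data; the attribute names mirror the ones listed above, and all values are invented for illustration.

    # A service definition captured as structured data (all values are invented).
    service_definition = {
        "service_name": "Standard Virtual Machine",
        "service_description": "General-purpose VM with a choice of OS image",
        "features_and_options": {
            "vcpus": [2, 4, 8],
            "memory_gb": [4, 8, 16],
            "os_images": ["Linux", "Windows"],
        },
        "provisioning_time": "30 minutes",
        "price_per_hour": 0.10,
    }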

Cloud Service LifeCycle - Service Operation (phase 3)

The service operation phase of the cloud service lifecycle comprises ongoing management operations by cloud administrators to maintain cloud infrastructure and deployed services, meeting or exceeding SLA commitments. Common activities in this phase are listed in the slide.

The software-defined infrastructure (SDI) includes four distinct planes

The software-defined infrastructure (SDI) includes four distinct planes: the data plane, control plane, management plane, and service plane.

software-defined infrastructure

The software-defined infrastructure component is deployed either on the virtual or on the physical infrastructure. In the software-defined approach, all infrastructure components are virtualized and aggregated into pools. This abstracts all underlying resources from applications. The software-defined approach enables ITaaS, in which consumers provision all infrastructure components as services. It centralizes and automates the management and delivery of heterogeneous resources based on policies.

orchestration

The key function of orchestration is to provide workflows for executing automated tasks to accomplish a desired outcome. Workflow refers to a series of inter-related tasks that perform a business operation. The orchestration software enables this automated arrangement, coordination, and management of the tasks. This helps to group and sequence tasks with dependencies among them into a single, automated workflow.

2 variants of private cloud

There are two variants of private cloud: on-premise and externally-hosted, as shown in the slide. The on-premise private cloud is deployed by an organization in its data center within its own premises. In the externally-hosted private cloud (or off-premise private cloud) model, an organization outsources the implementation of the private cloud to an external cloud service provider. The cloud infrastructure is hosted on the premises of the provider and may be shared by multiple tenants. However, the organization's private cloud resources are securely separated from other cloud tenants by access policies implemented by the provider.

People Transformation

With the increasing adoption of a digital lifestyle, every organization needs to change the way it operates and interacts with customers. Long-term strategies alone do not provide value in today's business; change is required. The key factor for digital transformation is cultural change in the organization. Culture is essentially the operating style of the organization. Business leaders must understand their organization's present culture, prepare a solution for culture change, and implement it. The existing workforce has expertise in traditional IT infrastructure, so these people need to be upskilled to adopt cloud services. They should understand the needs of IT users, customers, and business users. This understanding helps to design, build, and manage the required cloud services that meet the business or consumer goals. Plan and provide different training programs to the staff to enhance their skills. Cloud adoption creates new roles in the organization to create and provide cloud services. Further, cross-functional teams must be created to meet business goals, where people from different business units with expertise in different areas work together. Other skills to focus on are collaboration and communication, which help to find innovative ideas to meet business goals.

What application-driven business innovations have in common

They focus on compelling user experiences and responsive design. They make strategic use of big data to improve service and experience through deep instrumentation. They bring new services to market quicker and more often than entrenched competitors. And they use cloud-based infrastructure to develop and deploy their applications to meet business demand.

Thin LUN:

Thin LUNs do not require physical storage to be allocated to them at the time they are created and presented to a compute system. From the operating system's perspective, a thin LUN appears as a traditional LUN. Thin LUNs consume storage as needed from the underlying storage pool in increments called thin LUN extents. The thin LUN extent defines the minimum amount of physical storage that is consumed from a storage pool at a time by a thin LUN. When a thin LUN is destroyed, its allocated capacity is reclaimed to the pool. Thin LUNs are appropriate for applications that can tolerate performance variations. Thin LUNs provide the best storage space efficiency and are suitable for applications where space consumption is difficult to forecast. Using thin LUNs, cloud service providers can reduce storage costs and simplify their storage management.
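
A toy model of thin LUN behavior is sketched below: physical extents are drawn from a shared pool only when a block range is first written. The extent size and the pool bookkeeping are simplified assumptions.

    # Simplified thin LUN: extents are consumed from a shared pool on first write.
    EXTENT_SIZE = 8 * 1024 * 1024          # assumed 8 MB thin LUN extent

    class ThinLUN:
        def __init__(self, pool):
            self.pool = pool               # shared pool state (free extent count)
            self.extents = set()           # extent indexes already backed by storage

        def write(self, offset):
            idx = offset // EXTENT_SIZE
            if idx not in self.extents:    # first write to this extent
                if self.pool["free_extents"] <= 0:
                    raise RuntimeError("storage pool exhausted")
                self.pool["free_extents"] -= 1
                self.extents.add(idx)      # physical capacity is consumed only now

    pool = {"free_extents": 4}
    lun = ThinLUN(pool)
    lun.write(0)
    lun.write(10 * 1024 * 1024)            # lands in a second extent
    print("extents consumed:", len(lun.extents), "pool free:", pool["free_extents"])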

Relationship Among Security Concepts

Threat agent (attacker), threat, vulnerability, risk, asset

Traditional Application: Challenges

Tight coupling of components; complex traditional software development processes; inability to quickly respond to new business opportunities; linear software performance growth; competitive solutions gaining market share

Traditional IT management:

Traditional IT management: Traditionally, IT management is specific to an infrastructure element or asset, such as compute, storage, or a business application. The management tools from IT asset vendors only enable monitoring and management of specific assets. A large environment composed of many multivendor IT assets residing in worldwide locations raises management complexity and asset interoperability issues. Further, traditional management processes and tools may not support a service-oriented infrastructure, especially if the requirement is to meet on-demand service provisioning, rapid elasticity, workflow orchestration, and sustained service levels. Even if traditional management processes enable service management, they are implemented separately from service planning and design, which may lead to gaps in monitoring and management.

Transient unavailability:

Transient unavailability: It occurs once for a short time and then disappears. For example, an online transaction times out but works fine when a user retries the operation.

Name 4 influences of Digital Transformation

Uber has disrupted the transportation industry without owning any vehicles. Facebook has changed the rules for the media industry without owning any content. Alibaba has rewritten the rules of e-commerce without owning any inventory. Similarly, Airbnb is threatening established hotel chains without owning any real estate.

VENOM

VENOM (Virtualized Environment Neglected Operations Manipulation) is a security vulnerability that existed in the virtual floppy disk controller of the open source virtualization package Quick Emulator (QEMU). QEMU is used in other virtualization infrastructures such as the Xen hypervisor and Kernel-based Virtual Machine (KVM). QEMU and other affected vendors have created and distributed patches for this bug. Note: VENOM is a security vulnerability affecting virtualization platforms. It allows an attacker to escape a guest virtual machine and access the host system, along with the other VMs running on the system, in order to steal sensitive data on those VMs.

VMware FT

VMware FT provides continuous availability of applications running on VMs in the event of failure of the host compute system. It creates a live shadow copy of a VM that is always up-to-date with the primary VM. It enables automatic failover between the two VM instances in the event of a host compute system outage. By allowing instantaneous failover, FT eliminates even the smallest chance of data loss or disruption.

VMware HA

VMware HA provides high availability for applications running on VMs. In the event of a host compute system failure, the affected VMs are automatically restarted on other compute systems. VMware HA minimizes unplanned downtime and IT service disruption while eliminating the need for dedicated standby hardware and installation of additional software.

VMware vCloud Air Disaster Recovery

VMware vCloud Air Disaster Recovery is a DRaaS offering owned and operated by VMware, built on vSphere Replication and vCloud Air, a hybrid cloud platform for infrastructure-as-a-service (IaaS). Disaster Recovery leverages vSphere Replication to provide robust, asynchronous replication capabilities at the hypervisor layer. This approach to replication helps in easy configuration of virtual machines in vSphere for disaster recovery, without depending on the underlying infrastructure hardware or data center mirroring. Per-virtual-machine replication and restore granularity further provide the ability to meet dynamic recovery objectives without overshooting the actual business requirements for disaster recovery as they change.

VMware vRealize Operations

VMware vRealize Operations, integrated with vRealize Log Insight and vRealize Business for Cloud, provides unified monitoring of SDDC and multicloud environments.

Velocity-of-attack

Velocity-of-attack refers to a situation where an existing security threat in a cloud may spread rapidly and have a large impact. A cloud infrastructure typically has many compute, storage, and network components spanning geographic boundaries. The typical cloud environment also features homogeneity and standardization in the platforms and components the service provider has employed, such as hypervisors, virtual machine file formats, and guest operating systems. These factors can amplify security threats and allow them to spread quickly. Mitigating velocity-of-attack is difficult in a cloud environment. Due to the potentially high velocity-of-attack, providers must employ strong and robust security enforcement strategies such as defense-in-depth.

Virtual Private Network (VPN):

Virtual Private Network (VPN): In the cloud environment, a VPN can be used to provide a consumer a secure connection to the cloud resources. VPN is a technology used to create a secure network connection between two locations using a public network such as the Internet. A VPN establishes a point-to-point connection between two networks over which encrypted data is transferred. VPN enables consumers to apply the same security and management policies to the data transferred over the VPN connection as are applied to the data transferred over the consumer's internal network. A VPN connection can be established using two methods: •In remote-site VPN, a remote customer initiates a remote VPN connection request using VPN client software installed on their system. The VPN client encrypts the traffic before sending it over the Internet. The VPN gateway at the provider's site decrypts the traffic and sends the information to the desired host in the provider's data center. •In site-to-site VPN, the IPsec protocol creates an encrypted tunnel from the provider's site to the customer's site. It allows customer organizations at multiple different locations to establish a secure connection with each other over the Internet.

Virtual SAN (VSAN):

Virtual SAN (VSAN): A virtual SAN (VSAN) or virtual fabric is a logical fabric created on a physical FC or FCoE SAN. A VSAN enables communication between a group of nodes with a common set of requirements, independent of their physical location in the fabric. A VSAN functions conceptually in the same way as a VLAN. Each VSAN behaves and is managed as an independent fabric. Each VSAN has its own fabric services, configuration, and set of FC addresses.

Virtual Machine Hardening

Virtual machines provided by the cloud service provider run many critical business applications of the customers, so these VMs need to be secured to protect the applications and the sensitive data. Virtual machine hardening is a key security mechanism used to protect virtual machines from various attacks. Typically, a VM is created with several default virtual components and configurations. The VM hardening process includes changing the default configuration of the VM to achieve greater security, as the default configuration may be exploited by an attacker to carry out an attack. Also, disable and disconnect the virtual components that are not required to support the application running on the VM. Ensure that the security mechanisms deployed to protect the VM are enabled and kept up-to-date by installing the appropriate patches or upgrades. Isolate the VM network using VLANs so that the risks of one VM do not affect the VMs in another network. Hardening is highly recommended when creating virtual machine templates so that the virtual machines created from the template start from a known security baseline.

Virtual Extensible LAN (VXLAN):

Virtual Extensible LAN (VXLAN): A VXLAN is an OSI Layer 2 overlay network built on an OSI Layer 3 network. An overlay network is a virtual network that is built on top of an existing network. VXLANs, unlike stretched VLANs, are based on LAN technology. VXLANs use the MAC Address-in-User Datagram Protocol (MAC-in-UDP) encapsulation technique. VXLANs make it easier for administrators to scale a cloud infrastructure while logically isolating the applications and resources of multiple consumers from each other. VXLANs also enable VM migration across sites and over long distances.
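
For reference, the 8-byte VXLAN header defined in RFC 7348 (a flags byte, a 24-bit VXLAN Network Identifier, and reserved fields) can be illustrated with a short packing sketch; the outer Ethernet/IP/UDP layers and the inner frame are omitted, and the VNI value is arbitrary.

    # Build the 8-byte VXLAN header (RFC 7348) that precedes the encapsulated frame.
    import struct

    def vxlan_header(vni: int) -> bytes:
        flags = 0x08                               # "I" flag set: VNI field is valid
        # Layout: flags(8 bits) | reserved(24) | VNI(24) | reserved(8)
        return struct.pack("!II", flags << 24, (vni & 0xFFFFFF) << 8)

    header = vxlan_header(vni=5001)                # example VXLAN Network Identifier
    print(header.hex())                            # -> 0800000000138900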

VLAN

Virtual LAN (VLAN): A virtual LAN (VLAN) is a virtual network consisting of virtual and/or physical switches, which divide a LAN into smaller logical segments. A VLAN groups the nodes with a common set of functional requirements, independent of the physical location of the nodes. In a multitenant cloud environment, the provider typically creates and assigns a separate VLAN to each consumer. It provides a private network and IP address space to a consumer, and ensures isolation from the network traffic of other consumers.

Virtualization

Virtualization is the process of abstracting physical resources such as compute, storage, and network, and making them appear as logical resources.

Virtualization offers several benefits when deployed to build a cloud infrastructure

Virtualization offers several benefits when deployed to build a cloud infrastructure. It enables consolidation of IT resources, which helps service providers optimize their infrastructure resource utilization. Improving the utilization of IT assets can help service providers reduce the costs associated with purchasing new hardware. It also reduces the space and energy costs associated with maintaining the resources. Moreover, fewer people are required to administer these resources, which further lowers the cost. Virtual resources are created using software, which enables service providers to deploy infrastructure faster compared to deploying physical resources. Virtualization increases flexibility by allowing providers to create and reclaim logical resources based on business requirements.

Virtualization is

Virtualization is the process of abstracting physical resources, such as compute, storage, and network, and creating virtual resources from them. Virtualization is achieved through the use of virtualization software that is deployed on compute systems, storage systems, and network devices. Virtualization software aggregates physical resources into resource pools from which it creates virtual resources. A resource pool is an aggregation of computing resources, such as processing power, memory, storage, and network bandwidth. For example, storage virtualization software pools the capacity of multiple storage systems to create a single large storage capacity. Similarly, compute virtualization software pools the processing power and memory capacity of a physical compute system to create an aggregation of the power of all processors (in megahertz) and all memory (in megabytes).
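
As a simple illustration of pooling, the sketch below aggregates the processing power and memory of a few hypothetical compute systems into a single pool from which virtual resources could be carved; the figures are invented.

    # Aggregate physical compute resources into a resource pool (invented figures).
    compute_systems = [
        {"cpu_mhz": 2 * 8 * 2400, "memory_mb": 65536},   # 2 sockets x 8 cores x 2.4 GHz
        {"cpu_mhz": 2 * 8 * 2400, "memory_mb": 65536},
        {"cpu_mhz": 4 * 8 * 2100, "memory_mb": 131072},
    ]

    resource_pool = {
        "cpu_mhz": sum(c["cpu_mhz"] for c in compute_systems),
        "memory_mb": sum(c["memory_mb"] for c in compute_systems),
    }
    print(resource_pool)   # total megahertz and megabytes available to virtual resources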

Web-based restore:

Web-based restore: The requested data is gathered and sent to the server running the cloud backup agent. The received data is in an encrypted form. The agent software on the server decrypts the files and restores them on the server. This method is suitable if sufficient bandwidth is available to download large amounts of data or if the restore data is small in size.

A hypervisor has two key components

A hypervisor has two key components: the kernel and the Virtual Machine Manager (VMM). The hypervisor kernel provides the same functionality as the kernel of any OS, including process management, file system management, and memory management. It is designed and optimized to run multiple VMs concurrently. It receives requests for resources through the VMM and presents the requests to the physical hardware. Each virtual machine is assigned a VMM that gets a share of the processor, memory, I/O devices, and storage from the physical compute system to successfully run the VM. The VMM abstracts the physical hardware and appears as a physical compute system with processor, memory, I/O devices, and other components that are essential for an OS and applications to run. The VMM receives resource requests from the VM, which it passes to the kernel, and presents the virtual hardware to the VM. There are two types of hypervisors: bare-metal hypervisors and hosted hypervisors. •Bare-metal hypervisor: Also called a Type 1 hypervisor, it is installed directly on top of the system hardware without any underlying operating system or other software. It is designed mainly for enterprise data centers. A few examples of bare-metal hypervisors are Oracle OVM for SPARC, ESXi, Hyper-V, and KVM. •Hosted hypervisor: Also called a Type 2 hypervisor, it is installed as an application or software on an operating system. In this approach, the hypervisor does not have direct access to the hardware. All requests must pass through the operating system running on the physical compute system. A few examples of hosted hypervisors are VMware Fusion, Oracle VirtualBox, Solaris Zones, and VMware Workstation.

asynchronous remote replication

In asynchronous remote replication, a write from a production compute system is committed to the source and immediately acknowledged to the compute system. Asynchronous replication mitigates the impact on the application's response time because the writes are acknowledged immediately to the compute system. This enables replicating data over distances of up to several thousand kilometers between the source site and the secondary site (remote location). In this replication, the required bandwidth can be provisioned equal to or greater than the average write workload. In asynchronous replication, compute system writes are collected into a buffer (delta set) at the source. This delta set is transferred to the remote site at regular intervals. Therefore, adequate buffer capacity should be provisioned to perform asynchronous replication. Some storage vendors offer a feature called delta set extension, which allows offloading the delta set from the buffer (cache) to specially configured drives. This feature makes asynchronous replication resilient to a temporary increase in write workload or loss of the network link. In asynchronous replication, RPO depends on the size of the buffer, the available network bandwidth, and the write workload to the source. This replication can take advantage of locality of reference (repeated writes to the same location). If the same location is written multiple times in the buffer prior to transmission to the remote site, only the final version of the data is transmitted. This feature conserves link bandwidth.
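
A minimal sketch of the buffering described above: writes are acknowledged immediately and collected into a delta set, repeated writes to the same location keep only the final version (locality of reference), and the set is shipped to the remote site at an interval. The class and method names are hypothetical.

    # Delta-set buffering for asynchronous replication (illustrative sketch).
    class AsyncReplicator:
        def __init__(self):
            self.delta_set = {}                    # address -> latest data for that address

        def write(self, address, data):
            # Acknowledge immediately to the compute system; buffer for later transfer.
            self.delta_set[address] = data         # repeated writes keep only the final version

        def flush_to_remote(self, send):
            # Called at a regular interval; ships the coalesced delta set to the remote site.
            for address, data in self.delta_set.items():
                send(address, data)
            self.delta_set.clear()

    rep = AsyncReplicator()
    rep.write(100, b"v1")
    rep.write(100, b"v2")                          # overwrites v1 in the buffer
    rep.flush_to_remote(lambda addr, d: print("replicating", addr, d))  # only v2 is sent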

The three main types of virtualization are:

compute virtualization, storage virtualization, and network virtualization.

Dell EMC MyService360

MyService360 is a new cloud-based, service-centric application with personalized 360-degree data visualizations. It gives deep visibility into the health and wellness of the global Dell EMC environment. It is powered by the secure Dell EMC Data Lake. This new feature in Online Support takes the guesswork out of monitoring the IT environment and empowers customers to take control of the service experience. MyService360 modernizes and simplifies the customer service experience. The advanced visualizations deliver: •Improved IT health and risk management with proactive analytics •Transparency and actionable intelligence across your global install base •Increased efficiencies from a simplified, holistic services experience

What is Cloud Service Brokerage?

An entity that manages the use, performance, and delivery of cloud services, and negotiates relationships between cloud providers and cloud consumers. A cloud consumer may request cloud services from a cloud broker instead of contacting a cloud provider directly. The cloud broker acts as an intermediary between cloud consumers and providers, helps consumers through the complexity of cloud service offerings, and may also create value-added cloud services.

Dell EMC Cloud for Microsoft Azure Stack

is a hybrid cloud platform for delivering infrastructure and platform as a service with a consistent Microsoft Azure experience on-premises and in the public cloud. Organizations can now build modern applications across hybrid cloud environments, balancing the right amount of flexibility and control. Dell EMC Cloud for Microsoft Azure Stack provides a fast and simple path to digital and IT transformation. It is engineered with best-in-class hyperconverged infrastructure, networking, backup, and encryption from Dell EMC, along with application development tools from Microsoft.

VMware vRealize Automation

is a part of the VMware vRealize Suite and empowers IT to accelerate the provisioning and delivery of IT services across infrastructure, containers, applications, and custom services. Leveraging the extensible framework provided by vRealize Automation, you can streamline and automate the lifecycle management of IT resources from initial service model design, through Day One provisioning and Day Two operations. Some of the key benefits of the vRealize Automation software are agility, extensibility, control, choice for developers, and complete lifecycle management.

Bring Your Own Device (BYOD)

is a policy at workplaces where employees bring their own mobile devices and connect them to the corporate network. This policy has created considerations for the security and privacy of data. These mobile devices may have varied mobile service providers and varied operating systems, and employees access their email and other important data related to the organization using their devices. If these devices are lost or if the sensitive information leaks out, it would create potential threats to the organization. These concerns give rise to mobile device management. Mobile Device Management (MDM) is a security solution for an IT department to monitor, manage, and secure employees' mobile devices like smartphones, laptops, and other devices that are being used in the workplace.

Secure multitenancy

is achieved by mechanisms that prevent any tenant from accessing another tenant's information. It requires mechanisms and rigid security policies that prevent one tenant's process from affecting another tenant's process. The diagram depicts that the isolation in multitenancy can be physical, where a VM created on a particular server is dedicated to a particular tenant. It can also be virtual, where a VM created on a server is shared between multiple tenants. Both cloud service providers and consumers need to understand and address the security implications of multitenancy for the layers and components that they are responsible for.

VMware vRealize Orchestrator

is a drag-and-drop workflow software that simplifies the automation of complex IT tasks. This orchestration software helps automate provisioning and operational functions in a cloud infrastructure. It comes with a built-in library of predefined workflows and a drag-and-drop feature for linking actions together to create customized workflows. These workflows can be launched from the VMware vSphere client, from VMware vCloud Automation Center, or through various triggering mechanisms. vRealize Orchestrator can execute hundreds or thousands of workflows concurrently. vRealize Orchestrator can be installed as a virtual appliance or on Windows, Linux, or Mac OS. The vRealize Orchestrator virtual appliance significantly reduces the time and skill required to deploy it. vRealize Orchestrator also provides a low-cost alternative to the traditional Windows-based installation.

Payment Card Industry Data Security Standards (PCI DSS)

is an information security standard for organizations that handle credit card data. This standard was introduced to reduce credit card fraud. Validation of compliance is provided by a Self-Assessment Questionnaire (SAQ). A cloud provider's systems can remain compliant when they do not ever store or transmit PCI-related data and access is restricted; payment data is redirected directly from the user's browser to the payment card provider.

Pivotal Cloud Foundry

is an open source Platform as a Service project. Cloud Foundry is written primarily in the Ruby language and its source is available under Apache License 2.0. It allows developers to develop and deploy applications without being concerned about issues related to configuring and managing the underlying cloud infrastructure. It supports multiple programming languages and frameworks including Java, Ruby, Node.js, and Scala. It also supports multiple database systems including MySQL, MongoDB, and Redis.

malicious insider

A malicious insider could be an organization's or cloud service provider's current or former employee, contractor, or other business partner who has or had authorized access to a cloud service provider's compute systems, network, or storage. These malicious insiders may intentionally misuse that access in ways that negatively impact the confidentiality, integrity, or availability of the cloud service provider's information or resources. This threat can be controlled by having strict access control policies, disabling employee accounts immediately after separation from the company, security audits, encryption, and segregation-of-duties policies. Also, a background investigation of a candidate before hiring is another key measure that can reduce the risk due to malicious insiders.

DELL EMC VPLEX

provides a solution for block-level storage virtualization and data migration both within and across data centers. It forms a pool of distributed block storage resources and enables creating virtual storage volumes from the pool. These virtual volumes are then allocated to the compute systems. VPLEX provides non-disruptive data mobility among storage systems to balance the application workload and enable both local and remote data access. VPLEX also provides the capability to mirror data of a virtual volume both within and across locations. It uses a unique clustering architecture and advanced data caching techniques that enable multiple compute systems located across two locations to access a single copy of data. VPLEX Virtual Edition (VPLEX/VE) is deployed as a set of virtual appliances that implement VPLEX technology on VMware ESXi infrastructure. VPLEX/VE stretches ESXi infrastructure over distance, allowing an ESXi cluster to span two physical sites.

What does cloud computing enable?

Unlimited and dynamic IT resources, reduced costs, and rapid business change.

Weak Identity

Weak identity results from the use of weak passwords, failure to use multifactor authentication, or a lack of cryptographic key management. Identity and access management systems must scale to handle the identities and access permissions of millions of users throughout the service lifecycle in a cloud environment. Credentials and cryptographic keys must be rotated periodically as per the organization's policies. Strong passwords must be used, and multifactor authentication is recommended for both providers and consumers as it protects against password theft.
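
A small sketch of one of these controls, periodic credential rotation, is shown below; the 90-day period is an assumed organizational policy, not a prescribed value.

    # Flag credentials that exceed the rotation period set by policy (assumed 90 days).
    from datetime import date, timedelta

    ROTATION_PERIOD = timedelta(days=90)           # assumed organizational policy

    def needs_rotation(last_rotated: date, today: date) -> bool:
        return today - last_rotated > ROTATION_PERIOD

    print(needs_rotation(date(2024, 1, 1), today=date(2024, 6, 1)))   # True: rotate the key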

Block Storage:

•A block-based storage system provides compute systems with block-level access to the storage volumes. In this environment, the file system is created on the compute systems and data is accessed on a network at the block level. The block-based storage system consists of one or more controllers and storage. •In a cloud environment, cloud block storage (CBS) service provides a logical volume to the consumers based on their requirements.

File Storage:

•A file-based storage system, also known as Network-Attached Storage (NAS), is a dedicated, high-performance file server having either integrated storage or connected to external storage. NAS enables clients to share files over an IP network. NAS supports NFS and CIFS protocols to give both UNIX and Windows clients the ability to share files using appropriate access and locking mechanisms. NAS systems have integrated hardware and software components, including a processor, memory, NICs, ports to connect and manage physical disk resources, an OS optimized for file serving, and file sharing protocols. A NAS system consolidates distributed data into a large, centralized data pool accessible to, and shared by, heterogeneous clients and application servers across the network. Consolidating data from numerous and dispersed general-purpose servers onto NAS results in more efficient management and improved storage utilization. Consolidation also offers lower operating and maintenance costs. •In a cloud environment, Cloud File Storage (CFS) service is used. This service is delivered over the internet and is billed on a pay-per-use basis.

The components of physical infrastructure are:

•Compute System: A compute system is a computing device (combination of hardware, firmware, and system software) that runs business applications. Examples of compute systems include physical servers, desktops, laptops, and mobile devices. The term compute system refers to physical servers and hosts on which platform software, management software, and business applications of an organization are deployed. •Storage System: Data created by individuals, businesses, and applications need to be persistently stored so that it can be retrieved when required for processing or analysis. A storage system is the repository for saving and retrieving data and is integral to any cloud infrastructure. A storage system has devices, called storage devices (or storage) that enable the persistent storage and the retrieval of data. Storage capacity is typically offered to consumers along with compute systems. Apart from providing storage along with compute systems, a provider may also offer storage capacity as a service (Storage as a Service), which enables consumers to store their data on the provider's storage systems in the cloud. •Network System: It establishes communication paths between the devices in an IT infrastructure. Devices that are networked together are typically called nodes. A network enables information exchange and resource sharing among many nodes spread across different locations. A network may also be connected to other networks to enable data transfer between nodes. Cloud providers typically leverage different types of networks supporting different network protocols and transporting different classes of network traffic.

The cloud service component has three key functions which are as follows:

•Enables defining services in a service catalog: Cloud service providers should ensure that consumers are able to view the available services, service level options, and service cost. This service definition helps consumers make the right choice of services effectively. Cloud services are defined in a service catalog, which is a menu of service offerings from a service provider. The catalog provides a central source of information on the service offerings delivered to the consumers by the provider. •Enables on-demand, self-provisioning of services: A service catalog also allows a consumer to request or order a service from the catalog. The service catalog matches the consumer's need without manual interaction with a service provider. While placing a service request, a consumer commonly submits service demands, such as required resources, needed configurations, and location of data. Once the provider approves the service request, appropriate resources are provisioned for the requested service. •Presents cloud interfaces to consume services: Cloud interfaces are the functional interfaces and the management interfaces of the deployed service instances. Using these interfaces, consumers perform computing activities, such as executing a transaction, and administer their use of rented service instances, for example modifying, scaling, stopping, or restarting a service instance.

The core attributes of modern infrastructure architecture are:

•Flash: Data dense, highly performing flash storage reduces the cost of delivering consistent performance while reducing the number of drives required. Flash delivers the low-latency performance for next-generation applications and increased performance for traditional applications with better economics than disk drives. With flash drive deployment, organizations can significantly reduce the floor space, power, and cooling requirements needed to deliver storage services. •Scale-Out: The scale-out architecture pools multiple nodes together in a cluster. It provides the capability to scale its resources by simply adding nodes to the cluster. By designing these systems to scale as a single managed scale-out system, IT can efficiently manage massive capacities with few resources. •Software Defined: Software-defined on commodity hardware is a cost-efficient way to support massive data volumes. Further, a software-defined approach allows organizations to automate the configuration and deployment of IT services. The automated configuration reduces total cost of ownership, increases business agility, and provides a programmable approach to manage IT services. •Cloud Enabled: Provides the capability to deploy and manage applications and services beyond an organization's data center. Specifically, cloud-enabled infrastructure enables back and forth mobility of applications and data between a public cloud and an organization's data center as required. It also helps in deploying private and hybrid clouds in the organization's own data center. Cloud extensibility increases business agility and reduces the burden of procuring and reserving capacity for peak hours and associated cost and management complexity.

Infrastructure-centric data protection:

•Infrastructure-centric data protection: Infrastructure-centric data protection solutions are not based on data backup alone, but on data protection as a whole that also includes data replication and data archiving. In addition, the data protection features embedded in various IT infrastructure components are leveraged. The IT infrastructure components such as storage systems and OS protect data themselves, instead of requiring a separate protection infrastructure. As this wave of change takes advantage of the component-specific intelligent features for data protection, the recovery of data is much faster compared to the server-centric backup. It also enables centralized management for protecting data across multiple data centers.

Blade Compute System:

•It is also known as a blade server. It is an electronic circuit board containing only core processing components. The components are processors, memory, integrated network controllers, storage drives, and essential I/O cards and ports. Each blade server is a self-contained compute system and is typically dedicated to a single application. •Examples: HP ProLiant BL460c G7 and Dell PowerEdge M series. •Advantages: •Simplifies cabling and reduces power consumption by pooling, sharing, and optimizing the requirements across all the blade servers. •Increases resiliency, efficiency, dynamic load handling, and scalability. •Disadvantages: •High density of the system. •Not suitable for applications that require fewer than five to 10 servers.

Tower Compute System

•It is hardware that is built in an upright cabinet that stands alone. The cabinet is also known as a tower. These tower servers work simultaneously to perform different tasks and processes. Since tower servers are standalone, as many can be added as the business requires. •Examples: Standard tower PCs such as the Dell Precision T7500. •Advantages: •The density of the components is low, hence it is easier to cool down the system. •It is highly scalable, due to its stand-alone nature. •The maintenance factor is low. •The data are stored in a single tower and not across various devices, which makes identification easier across the network and the physical component. •Disadvantages: •A set of tower servers is bulky and heavy. •Cabling for a large set of tower servers is complicated. •A group of tower servers together is noisy, because each server requires a dedicated fan for cooling.

Rack-Mounted Compute System:

•It is hardware that is placed horizontally in a rack. The rack contains multiple mounting slots called bays. Each bay holds a hardware unit in the rack. These types of servers collectively host, execute, and manage applications. Typically, a console with a video screen, keyboard, and mouse is mounted on a rack to enable administrators to manage the servers in the rack. •Examples: Dell PowerEdge R320, R420, and R520. •Advantages: •Simplifies network cabling. •It saves physical floor space and other server resources. •The horizontal rack chassis can simultaneously hold multiple servers placed above each other. •Disadvantages: •Since the rack-mount server is in horizontal form, each server requires its own dedicated processor, motherboard, storage, and other input and output resources. •Each rack-mount server can work independently, but it requires power, cooling, and mounting support from the underlying chassis.

Storage Device Types

•Magnetic Tape Drive: It is a storage device that uses magnetic tape as the storage medium. The tape is a thin, long strip of plastic film that is coated with a magnetizable material and is packed in plastic cassettes and cartridges. It provides linear, sequential read and write data access. Organizations use this device to store large amounts of data, data backups, offsite archives, and disaster recovery copies. •Magnetic Disk Drive: It is a primary storage device that uses a magnetization process to write, read, and access data. It is covered with a magnetic coating and stores data in the form of tracks and sectors. Tracks are the circular divisions of the disk and are further divided into sectors that contain blocks of data. All read and write operations on the magnetic disk are performed on the sectors. The hard disk drive is a common example of a magnetic disk. It consists of a rotating magnetic surface and a mechanical arm that moves over the disk. The mechanical arm is used to read data from the disk and to write data to the disk. •Solid-State Drive (SSD): It uses semiconductor-based memory, such as NAND and NOR chips, to store data. SSDs, also known as "flash drives", deliver the ultrahigh performance required by performance-sensitive applications. These devices, unlike conventional mechanical disk drives, contain no moving parts and therefore do not exhibit the latencies associated with read/write head movement and disk rotation. Compared to other available storage devices, SSDs deliver a relatively higher number of input/output operations per second (IOPS) with low response times. They also consume less power and typically have a longer lifetime as compared to mechanical drives. However, flash drives have the highest cost per gigabyte ($/GB). •Optical Disk Drive: It is a storage device that uses optical storage techniques to read and write data. It stores data digitally by using laser beams, transmitted from a laser head mounted on the optical disk drive, to read and write data. It is used as a portable and secondary storage device. Common examples of optical disks are compact disks (CD), digital versatile/video disks (DVD), and Blu-ray disks.

Object Storage:

•Object-based storage is a way to store file data in the form of objects. It is based on the content and other attributes of the data rather than the name and location of the file. An object contains user data, related metadata, and user defined attributes of data. The additional metadata or attributes enable optimized search, retention, and deletion of objects. For example, an MRI scan of a patient is stored as a file in a NAS system. The metadata is basic and may include information such as file name, date of creation, owner, and file type. The metadata component of the object may include additional information such as patient name, ID, attending physician's name, and so on, apart from the basic metadata. •A unique identifier known as object ID identifies the object stored in the object-based storage system. The object ID allows easy access to objects without having to specify the storage location. The object ID is generated using specialized algorithms such as a hash function on the data. It guarantees that every object is uniquely identified. Any changes in the object, like user-based edits to the file, results in a new object ID. It makes object-based storage a preferred option for long-term data archiving to meet regulatory or compliance requirements. The object-based storage system uses a flat, nonhierarchical address space to store data, providing the flexibility to scale massively. Cloud service providers use object-based storage systems to offer Storage as a Service because of its inherent security, scalability, and automated data management capabilities. Object-based storage systems support web service access via REST and SOAP.
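
The sketch below illustrates content-derived object IDs: the ID is a hash of the object's data, so any edit produces a different ID. SHA-256 is used here only as an example; real systems use their own algorithms and may also incorporate metadata.

    # Content-derived object IDs: editing the data produces a different ID.
    import hashlib

    def object_id(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()    # example hash; products vary

    original = b"MRI scan, patient 1234"
    edited   = b"MRI scan, patient 1234 (annotated)"

    print(object_id(original))
    print(object_id(edited))                       # differs: the edit created a new object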

The major components of a compute system are:

•Processor: It is also known as a Central Processing Unit (CPU). It is an integrated circuit (IC) that executes the instructions of a software program by performing fundamental arithmetical, logical, and input and output operations. A common processor and instruction set architecture is the x86 architecture with 32-bit and 64-bit processing capabilities. Modern processors have multiple cores, each capable of functioning as an individual processor. •Random Access Memory (RAM): It is also called main memory. It is volatile data storage that holds frequently used software program instructions. It allows data items to be read or written in almost the same amount of time, thereby increasing the speed of the system. •Read Only Memory (ROM): It is a type of semiconductor memory and a nonvolatile memory. It contains the boot firmware, power management firmware, and other device-specific firmware. •Motherboard: It is a printed circuit board (PCB) to which all compute system components are connected. It holds the major components, like the processor and memory, to carry out the computing operations. A motherboard consists of integrated components, such as a graphics processing unit (GPU), a network interface card (NIC), and adapters to connect to external storage devices. •Operating System (OS): It is system software that manages the system's hardware and software resources. It also controls the execution of the application programs and internal programs that run on it. All computer programs, except firmware, require an operating system to function. It also acts as an interface between the user and the computer.

Workflow Modeling

•Start: It is the starting point of a workflow. A workflow can have only one start element. •Action: It is an activity that is executed by calling an API function. It takes one or more input parameters and returns a value. Multiple activities can be executed in sequence or simultaneously. •Manual interaction: It prompts a user for input. An architect can set a timeout period within which a user can answer. •Condition: It consists of conditional branches. It takes one or more input parameters and returns either true or false. It allows a workflow to branch into different directions, depending on the input parameters. •Waiting time: Wait for a given time period or until a certain date/time has passed, at which point the workflow resumes running. •Waiting event: Wait for a specific event to resume workflow running. •Child workflow: A workflow can be hierarchical and can contain a child workflow. •End: The end point of a workflow.
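
A toy rendering of a few of these elements in code, with a start, a condition that branches, an action, and an end; the task names and the approval rule are hypothetical.

    # Toy workflow built from the elements above (task names are hypothetical).
    def provision_vm(request):            # action: executed by calling an API function
        return {"vm": f"vm-{request['name']}"}

    def needs_approval(request):          # condition: returns true or false
        return request["vcpus"] > 8

    def run_workflow(request):            # start
        if needs_approval(request):       # condition branches the workflow
            print("manual interaction required: waiting for approval")
            return None                   # a real engine would use a waiting event here
        result = provision_vm(request)    # action
        print("provisioned:", result["vm"])
        return result                     # end

    run_workflow({"name": "web01", "vcpus": 4})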

•Stretched VLAN:

•Stretched VLAN: A stretched VLAN is a VLAN that spans multiple sites over a WAN connection. It extends a VLAN across the sites. It enables nodes in two different sites to communicate over a WAN as if they are connected to the same network. Stretched VLANs also allow the movement of VMs between sites without having to change their network configurations. This enables the creation of high-availability clusters, VM migration, and application and workload mobility across sites.

