Cloud Concepts and Knowledge


Google Kubernetes Engine

Managed environment for running containerized applications

What is "EUCALYPTUS" in the context of cloud computing?

"EUCALYPTUS" stands for "Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems". It is an open source cloud computing infrastructure used for deploying cloud clusters. With EUCALYPTUS you can build public, private and hybrid cloud platforms, and even run your own data center in the cloud, harnessing that functionality for your organization.

Google Technical Deep Dive Assessment for Cloud Migration

1. Assess your current cloud maturity
2. Ask yourself how far you want to go
3. Plan your cloud adoption program
4. Find the right workloads

Describe Google Cloud Adoption Framework

1. Built on a rubric with a People, Process, Technology structure (Upskilling, External Experience, Communication, Behaviors, Resource Management, Identity and Access, Data Management, Sponsorship, Teamwork).
2. Four themes:
• Learn - upskill technical teams, augment with experienced partners.
• Lead - executive management support; collaborative, cross-functional, self-motivated teams.
• Scale - services, workloads, application updates, templates.
• Secure - controls in place, technologies used, and strategies governing the whole; move from fear of the public internet (but trust in the private network) to trusting only the right people and devices, with central identity and a hybrid network.
3. Three phases:
• Tactical - individual workloads in place, quick ROI; focus is on reducing the cost of discrete systems; quick wins but no provision to scale or strategize.
• Strategic - a broader vision governs workloads designed with an eye to scale and meet future needs; the people and process portions of the equation are now involved.
• Transformational - CI/CD and Infrastructure as Code; existing data is transparently shared; new data is collected and analyzed; the predictive and prescriptive analytics of machine learning are applied. Your people and processes are being transformed, which further supports the technological changes. IT is no longer a cost center but a partner to the business.

Cloud Migration Steps

1. It is not a single giant step; it is a journey.
2. Think about what business value you want to derive and put together a strategy/business case (it is not just about getting into the cloud; think about what you want to do once you get there).
3. Discovery and assessment: identify the right workloads, and map them on a quadrant with ease/difficulty on the x axis and value returned on the y axis.
4. Cloud migration: the heavy lifting - migrate individual workloads, modernize applications, transform architecture and infrastructure.
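The discovery-and-assessment quadrant in the steps above can be sketched in code. This is a hypothetical illustration - the workload names, scores, and thresholds are invented, not from any real assessment:

```python
# Hypothetical sketch: ranking candidate workloads for migration on the
# ease-vs-value quadrant described above. Names and scores are made up.

def migration_candidates(workloads, ease_threshold=5, value_threshold=5):
    """Return workloads in the high-ease, high-value quadrant,
    best candidates first."""
    quadrant = [w for w in workloads
                if w["ease"] >= ease_threshold and w["value"] >= value_threshold]
    return sorted(quadrant, key=lambda w: (w["value"], w["ease"]), reverse=True)

workloads = [
    {"name": "marketing-site", "ease": 9, "value": 7},
    {"name": "core-banking",   "ease": 2, "value": 9},  # valuable but hard
    {"name": "batch-reports",  "ease": 8, "value": 6},
    {"name": "legacy-crm",     "ease": 3, "value": 4},  # hard and low value
]

for w in migration_candidates(workloads):
    print(w["name"])
```

The workloads that survive the filter are the "quick win" candidates; the rest stay for later migration waves or modernization.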

GCS - countries, edge nodes, AZ, regions

200 countries, 144 edge nodes, 73 zones, 24 regions

AWS Well Architected Framework

• A component is the code, configuration and AWS resources that together deliver against a requirement. A component is often the unit of technical ownership, and is decoupled from other components.
• We use the term workload to identify a set of components that together deliver business value. The workload is usually the level of detail that business and technology leaders communicate about.
• Milestones mark key changes in your architecture as it evolves throughout the product lifecycle (design, testing, go live, and in production).
• We think about architecture as being how components work together in a workload. How components communicate and interact is often the focus of architecture diagrams.
• Within an organization, the technology portfolio is the collection of workloads that are required for the business to operate.

Tags

A tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value, both of which you define. Consider a simple use case: three EC2 instances are running - how will you identify them? Tag them payment-app, email-server, etc. Not all AWS services support tagging. The maximum number of tags per resource is 50. Tag keys and values are case-sensitive. Tags can be integrated with billing, which lets customers understand their AWS bill in a granular way. Tags can also be used with IAM to control access to resources; for example, allow Alice to delete only resources that have the tag env=development. It is important to have a consistent and effective tagging strategy. Example: Alice creates a resource with a tag key of env and value of production, while Matthew creates a resource with a tag key of Env and value of Production. Because tags are case-sensitive, overall access control now becomes difficult to manage. Follow the AWS tagging strategy document: tags for access control, cost allocation, and so on.
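The Alice/Matthew case-sensitivity pitfall above can be shown in a few lines. This is a toy simulation, not the AWS API - resource names and tags are invented:

```python
# Sketch of the tagging pitfall described above: tag keys and values are
# case-sensitive, so "env=production" and "Env=Production" are different tags.

def resources_with_tag(resources, key, value):
    """Return names of resources whose tags contain exactly key=value."""
    return [r["name"] for r in resources if r["tags"].get(key) == value]

resources = [
    {"name": "payment-app",  "tags": {"env": "production"}},  # Alice's tag
    {"name": "email-server", "tags": {"Env": "Production"}},  # Matthew's tag
]

# A policy or cost report matching env=production sees only Alice's resource:
print(resources_with_tag(resources, "env", "production"))  # ['payment-app']
```

Both resources are "production" to a human reader, but any exact-match filter splits them in two - which is why a written tagging convention matters.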

Autoscaling

The ability to change capacity depending on need: infrastructure should scale to support changing traffic patterns. For example: if average CPU utilization > 60%, add two more instances; if average CPU utilization < 30%, remove two instances. Types of scaling: scheduled scaling, dynamic scaling, predictive scaling.

Google Cloud Next OnAir

Accelerate your digital transformation through a modern infrastructure; global response to the pandemic, forcing organizations to think differently; remote work/services availability; elasticity/scalability; Hierarchical Firewalls (top- and folder-level controls); Confidential VMs (encrypt data in processing, in addition to at rest and in transit); public-sector claims & phone calls (NY Commissioner); app-centric approach to connecting to services vs. an infrastructure approach; Cloud CDN - supports serving content from on-prem and other clouds; RAMP - Rapid Assessment and Migration Program; efficient, cost-optimized VMs; connectivity - expanded Cisco/Google hybrid partnership; Filestore High Scale - IOPS, throughput, storage capacity

Connecting to a newly launched server in Cloud

After you have launched a server in the cloud, you need to connect to it to perform administrative tasks. The tools you use depend on the server's OS: for a Linux server you will need an SSH client; for a Windows server you will need an RDP client.

Explain Cloud Computing and benefits

Cloud computing allows on-demand network access to shared computing resources, and lets you manage, store and process data online via the internet. Benefits: scalability (scale up and down as needed), elasticity, pay only for what you use, no server space required, no in-house experts required for hardware and software maintenance, better data security, disaster recovery, high flexibility, automatic software updates, collaboration across widespread locations, data that can be shared and accessed anywhere over the internet, and rapid implementation.

Cloud Providers

Amazon AWS, Microsoft Azure, Google Cloud Platform, VMware, IBM Cloud, DigitalOcean. OVH is also a popular server hosting company.

Cloud Governance - How to establish a Cloud Governance Program #5

An organization will typically be using cloud services already, and looking for new ones, before it launches a formal cloud governance program. The impetus to establish the program is often a response to problems related to operating or procuring services. One can organize the project scope, workstreams and timelines for a governance program launch as follows, to meet the active needs of the enterprise and prepare for a future state:
• Identify and engage the roles that will participate in the governance program.
• Set the scope of the program charter broadly, to address future needs. Define project scope tightly; deliver program elements incrementally.
• Organize separate project workstreams to assess what is in use and what is in play. Harmonize after immediate business needs are met.
• Set immediate, minimum standards for monitoring and reporting. Evolve toward a comprehensive view of compliance status and alerts across the organization.
• Establish a cycle of communication from the nascent governing body to management and the organization while the program is being formed.
• Coordinate deliverables with deadlines for audits or compliance, especially if cloud governance is needed in response to known deficiencies.
It is also essential to the success of the program that:
• Resources are officially allocated to establish and sustain the program. Effective governance is not established or sustained in the margins.
• Ownership and accountability for running and sustaining the program reside within the organization, not with third parties.
• Communications and reports can be efficiently consumed and used. The work of the governance program must be recognized and trusted as correct, current, and authoritative.

Kubernetes - Container Orchestration with Kubernetes

As companies began embracing containers - often as part of modern, cloud-native architectures - the simplicity of the individual container began colliding with the complexity of managing hundreds (even thousands) of containers across a distributed system. To address this challenge, container orchestration emerged as a way of managing large volumes of containers throughout their lifecycle, including: provisioning; redundancy; health monitoring; resource allocation; scaling and load balancing; and moving between physical hosts. While many container orchestration platforms (such as Apache Mesos, Nomad, and Docker Swarm) were created to help address these challenges, Kubernetes, an open source project introduced by Google in 2014, quickly became the most popular container orchestration platform, and it is the one the majority of the industry has standardized on. Kubernetes enables developers and operators to declare the desired state of their overall container environment through YAML files; Kubernetes then does all the hard work of establishing and maintaining that state, with activities that include deploying a specified number of instances of a given application or workload, restarting an application if it fails, load balancing, auto-scaling, zero-downtime deployments and more. Kubernetes is now governed by the Cloud Native Computing Foundation (CNCF), a vendor-agnostic industry group under the auspices of the Linux Foundation. As containers continue to gain momentum as a popular way to package and run applications, the ecosystem of tools and projects designed to harden and expand production use cases continues to grow. Beyond Kubernetes, two of the most popular projects in the container ecosystem are Istio and Knative.

Availability Zones and Regions

Availability Zones (AZ): data centers are organized into AZs. Each AZ is located at a lower-risk location and is designed as an independent failure zone, so AZs are physically separated from one another. AZs are inter-connected with high-speed private links. Regions: each region contains two or more AZs, and regions are separated geographically.

What are the things to consider before you make a move to the cloud?

Before moving any business to the cloud, consider these aspects: you should not lose any of your data; you should be able to resume business within the shortest possible interval; uptime should be high; and your data should integrate with the cloud.

What is CI/CD?

Behind every great CD pipeline is a well-oiled continuous integration (CI) pipeline. While the specific CI process can vary slightly based on a team's preference, the essential steps are: build code, send code to the repository, test code, send back errors, fix code, then repeat the send and test steps. A CI/CD (Continuous Integration/Continuous Deployment) pipeline implementation is the backbone of the modern DevOps environment: it bridges the gap between development and operations teams by automating the building, testing, and deployment of applications. Continuous integration / continuous delivery / continuous deployment, or CI/CD, is a practice that enables application development teams to release incremental code changes to production quickly and regularly. Simply put, CI is the process of integrating code into a mainline code base; CD covers the processes that have to happen after code is integrated for app changes to be delivered to users - testing, staging and deploying code.
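The build/test/fix loop above can be sketched as a toy driver. This is an illustration only - the commits and the `test_passes` check are stand-ins, not a real CI system:

```python
# Toy sketch of the CI cycle described above: each commit is pushed and
# tested; failures go back to the developer, and the loop repeats until
# a commit passes and is ready for the CD stages (staging, deploy).

def run_ci(commits, test_passes):
    """Walk commits through the CI loop; return the first commit that
    passes tests, or None if every commit fails."""
    for commit in commits:          # send code to repository
        if test_passes(commit):     # test stage
            return commit           # green build: hand off to CD
        # red build: errors sent back, developer fixes, next commit
    return None

# Pretend commits "a" and "b" fail tests and "c" passes:
print(run_ci(["a", "b", "c"], test_passes=lambda c: c == "c"))  # c
```

A real pipeline replaces the lambda with actual build and test jobs, but the control flow - integrate, verify, repeat - is the same.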

Explain what Google Cloud is to a customer

Cloud computing is the term describing delivery of computing as a service rather than a product. Shared resources, software and information are provided to computers and other devices as a metered service over a network (typically the internet). Computing clouds provide computation, software, data access and storage resources without requiring cloud users to know the location and other details of the computing infrastructure itself. In fact, if you are looking to disconnect your business from infrastructure concerns, the cloud may be a good way for you to achieve this objective. End users access cloud-based applications or services through a web browser or a lightweight desktop or mobile app (often referred to as a Client) while the business software and data are stored on servers at a remote location (off-site or off-premise). Cloud application providers strive to give the same or better service and performance as if the software programs were installed locally on end-user computers. At the foundation of cloud computing is the broader concept of infrastructure convergence (or Converged Infrastructure) and Shared Services. This type of environment allows businesses to get their applications and other services (phone, storage, content management, disaster recovery, etc.) up and running faster, with significantly reduced up-front costs, easier manageability and less maintenance. Cloud services enable IT to more rapidly adjust IT resources (such as applications, servers, storage and networking) to meet fluctuating and unpredictable business demand. Simply put, cloud computing is the provision of computer services (servers, databases, storage, software, network equipment, analysis, etc.) via the Internet. They usually charge for cloud computing services based on usage in the same way that they charge for water or electricity in a home. 
Here are some things you can do with the cloud: creation of new applications and services; data storage, backup, and recovery; hosting of sites and blogs; audio and video streaming; software delivery on demand; data analysis to look for patterns and forecasts. The most important benefit of cloud computing is that it is very different from the traditional view of the role of IT resources: cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the internet with pay-as-you-go pricing. Whether you are running applications that share photos with millions of mobile users or supporting the critical operations of your business, a cloud services platform provides rapid access to flexible and low-cost IT resources. With cloud computing, you don't need to make large upfront investments in hardware or spend a lot of time on the heavy lifting of managing that hardware. Instead, you can provision exactly the right type and size of computing resources you need to power your newest bright idea or operate your IT department. You can access as many resources as you need, almost instantly, and only pay for what you use. How does cloud computing work? Cloud computing provides a simple way to access servers, storage, databases and a broad set of application services over the internet. A cloud services platform such as Amazon Web Services owns and maintains the network-connected hardware required for these application services, while you provision and use what you need via a web application. In the simplest terms, cloud technology means storing and accessing data and applications over the internet instead of your computer's hard drive; the cloud is just a metaphor for the internet. Hosting your applications and data in the cloud has many benefits.
The figure below explains how users can focus more on their actual business related work and have the underlying components be managed by others who are experts in that field.

Google Cloud SDK

Command line tools and libraries for Google cloud

Google Preemptible VM's

Compute instances for batch jobs and fault tolerant workloads

Containers

Containers are an executable unit of software in which application code is packaged, along with its libraries and dependencies, in common ways so that it can be run anywhere, whether on desktop, traditional IT, or the cloud. To do this, containers take advantage of a form of operating system (OS) virtualization in which features of the OS (in the case of the Linux kernel, namely the namespaces and cgroups primitives) are leveraged both to isolate processes and to control the amount of CPU, memory, and disk that those processes have access to. Containers are small, fast, and portable because, unlike a virtual machine, containers do not need to include a guest OS in every instance and can instead simply leverage the features and resources of the host OS. Containers first appeared decades ago with versions like FreeBSD Jails and AIX Workload Partitions, but most modern developers remember 2013 as the start of the modern container era with the introduction of Docker. Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs or the cloud. Docker is the biggest and most popular container system: a way to containerize an application, put it in an image, and run it on any computer that also has Docker on it. This eliminates the classic problem of an application that runs on my computer but not on yours. Containers are a way to package software in a format that can run isolated on a shared operating system. Unlike VMs, containers do not bundle a full operating system - only the libraries and settings required to make the software work. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it is deployed. Docker does not take much space to run, and containers can share bins and libraries.
Dockerfile describes the build process for an image, can be run to automatically create an image, and contains all the commands necessary to build the image and run your application.
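As a concrete illustration of the point above, a minimal Dockerfile for a hypothetical Python web app might look like the sketch below. The base image, file names, and start command are assumptions for illustration, not taken from any real project:

```dockerfile
# Build an image for a hypothetical Python application
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define how the container runs it
COPY . .
CMD ["python", "app.py"]
```

Running `docker build -t myapp .` in the directory containing this file would produce an image, and `docker run myapp` would start the application in a container.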

Use Cases for Containers

Containers are becoming increasingly prominent, especially in cloud environments. Many organizations are even considering containers as a replacement of VMs as the general-purpose compute platform for their applications and workloads. But within that very broad scope, there are key use cases where containers are especially relevant. Microservices: Containers are small and lightweight, which makes them a good match for microservice architectures where applications are constructed of many, loosely coupled and independently deployable smaller services. DevOps: The combination of microservices as an architecture and containers as a platform is a common foundation for many teams that embrace DevOps as the way they build, ship and run software. Hybrid, multi-cloud: Because containers can run consistently anywhere, across laptop, on-premises and cloud environments, they are an ideal underlying architecture for hybrid cloud and multi-cloud scenarios where organizations find themselves operating across a mix of multiple public clouds in combination with their own data center. Application modernizing and migration: One of the most common approaches to modernizing applications starts by containerizing them so that they can be migrated to the cloud.

Google Cloud CDN

Content Delivery Network for delivering web and video. A content delivery network, or content distribution network, is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and performance by distributing the service spatially relative to end users. The main objective of CDN is to deliver content at top speed to users in different geographic locations and this is done by a process of replication. CDNs provide web content services by duplicating content from other servers and directing it to users from the nearest data center.

What workloads are suited for public cloud?

Workloads handling core private data that are constant over time are better suited to stay in house. With that model established, specific workload types follow: the most obvious candidate for the public cloud is a customer-facing marketing website.

Google Big Query

Data warehouse for business agility and insights. BigQuery is a fully managed, serverless data warehouse that enables scalable analysis over petabytes of data. It is Software as a Service that supports querying using ANSI SQL, and it has built-in machine learning capabilities. BigQuery provides a REST API for easy programmatic access and application integration.

Container Orchestration

Deploying and scaling containers. Platforms: Azure Container Service, Amazon ECS, Marathon, Kubernetes, Google Kubernetes Engine (GKE). Kubernetes is an open-source system for automating deployment, scaling and management of containerized applications. Kubernetes does many things, such as automatically restarting pods if something fails and performing rolling updates (take down version 1 only when version 2 is fully functional, so there is no downtime).
• Node: runs the kubelet, communicates with the master, runs pods.
• Pod: runs one or more containers; exists on a node.
• Service: handles requests; usually a load balancer.
• Deployment: defines the desired state - Kubernetes handles the rest.
Deployments can be written in JSON or YAML; YAML is more user-friendly.
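The "defines the desired state - Kubernetes handles the rest" idea above is a reconciliation loop. The sketch below is an illustration of that concept in plain Python, not Kubernetes code - the deployment names and counts are invented:

```python
# Illustrative sketch (not Kubernetes code) of desired-state reconciliation:
# a controller compares desired vs. actual replica counts and emits the
# actions needed to converge, just as Kubernetes does for Deployments.

def reconcile(desired, actual):
    """Return the actions a controller would take to make `actual` match
    `desired` (both map deployment name -> replica count)."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if want > have:
            actions.append(f"start {want - have} pod(s) for {name}")
        elif want < have:
            actions.append(f"stop {have - want} pod(s) for {name}")
    return actions

desired = {"web": 3, "worker": 2}   # what the YAML declares
actual = {"web": 1, "worker": 3}    # what is currently running
print(reconcile(desired, actual))
```

Kubernetes runs loops like this continuously, so a crashed pod (actual drops below desired) is replaced automatically without operator intervention.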

Cloud Compute Resource Configuration - AWS example

EC2 stands for Elastic Compute Cloud - in short, the name for a server that you launch in AWS. The name differs by cloud provider; in DigitalOcean, for example, it is called a Droplet. Important configuration to know when launching a new EC2 instance in AWS: CPU and memory size, operating system (Linux, Windows), storage capacity, authentication key, and security group. Creating an EC2 instance: 1. Choose an Amazon Machine Image (AMI), i.e. the operating system. 2. Choose an instance type. 3. Configure instance details. 4. Add storage. 5. Add tags. 6. Configure the security group. Also set up MFA for the root account - very important - and never log in as root unless needed.

Google Cloud Functions

Event-driven compute platform for cloud services and applications that respond to cloud events. Google Cloud Functions is Google's serverless compute solution for creating event-driven applications. For Google Cloud Platform developers, Cloud Functions serves as a connective layer, allowing you to weave logic between Google Cloud Platform (GCP) services by listening for and responding to events. Execution environment and supported runtimes: Google Cloud Functions is a stateless execution environment, which means that functions follow a shared-nothing architecture; each running function is responsible for one and only one request at a time.
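A hedged sketch of what such a function looks like in Python: in the real Cloud Functions runtime an HTTP-triggered function receives a Flask request object, but this example only reads a query parameter, so any object with an `args` mapping works for local testing. The function and parameter names are illustrative:

```python
# Sketch of an HTTP-triggered cloud function: one request in, one
# response out, no state shared between invocations.

def hello_http(request):
    """Respond to an HTTP request with a greeting."""
    name = request.args.get("name", "World")
    return f"Hello, {name}!"

# Local smoke test with a stand-in request object:
class FakeRequest:
    args = {"name": "Cloud"}

print(hello_http(FakeRequest()))  # Hello, Cloud!
```

The shared-nothing point from the text shows up here: the function keeps no state between calls - everything it needs arrives in the request.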

Google Filestore

File storage that is highly secure and scalable

Firewall

A firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. There are both hardware and software firewalls. A firewall allows connections from trusted users and blocks hackers. Common ports: Apache (HTTP) 80, SSH 22, FTP 21, SMTP 25, MySQL 3306. In AWS, firewalls are called Security Groups; on Linux, the built-in firewall is iptables.
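Rule evaluation can be shown with a small simulation. This is a simplified sketch - real firewalls match on protocol, direction, and CIDR ranges, while this toy uses string prefixes and invented rules:

```python
# Simplified sketch of firewall rule evaluation: allow traffic only when
# an explicit rule permits the port/source combination, otherwise deny.

COMMON_PORTS = {"http": 80, "ssh": 22, "ftp": 21, "smtp": 25, "mysql": 3306}

def is_allowed(port, source_ip, rules):
    """rules: list of (port, source-prefix) pairs that are allowed."""
    return any(port == p and source_ip.startswith(prefix)
               for p, prefix in rules)

rules = [
    (COMMON_PORTS["http"], ""),      # HTTP open to everyone
    (COMMON_PORTS["ssh"], "10.0."),  # SSH only from the internal network
]

print(is_allowed(80, "203.0.113.5", rules))  # True
print(is_allowed(22, "203.0.113.5", rules))  # False (SSH blocked externally)
print(is_allowed(22, "10.0.1.9", rules))     # True
```

The default-deny behavior (anything without a matching rule is blocked) mirrors how AWS Security Groups work: only explicitly allowed traffic gets through.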

Flexera Solutions

Flexera offers solutions that optimize IT assets from on-premises to cloud. 1. Technology Insights - Gain visibility. Gain control. Gain an edge. Flexera's IT optimization and management software will shine a light into the corners of your IT ecosystem to illuminate insights that drive better business decisions. 2. Technology Spend Optimization - From software licenses to cloud costs, Flexera helps you put a hard stop on technology overspend, so you can put those savings to better use. 3. Technology Agility - Technology agility is business agility. Flexera helps you spot opportunities and threats, streamline IT operations, and drive the nimble transformation business demands.

Google Cloud Run

Fully managed environment for running containerized applications

Google Cloud GPU's

GPU's for ML, scientific computing and 3D visualization

What is BeyondCorp?

Google's implementation of the zero trust security model. Built upon 8 years of building zero trust networks at Google, combined with ideas and best practices from the community. Shifts access controls from the network perimeter to individual users and devices and therefore allows employees, contractors, and other users to work more securely from virtually any location without the need for a traditional VPN. Zero trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. This added layer of security has been shown to prevent data breaches.

Compare a cloud and a traditional datacenter?

A traditional datacenter and a cloud datacenter differ in several important ways:
• The initial cost is high for a traditional datacenter, whereas for the cloud it is low.
• You can easily scale cloud datacenters up and down; that is not possible with a traditional datacenter.
• Maintenance is a huge cost for traditional datacenters, unlike cloud datacenters.
• The cloud offers excellent uptime, which cannot be said of traditional datacenters.

What is the benefit of API in the cloud domain?

Important benefits of APIs with respect to the cloud domain:
• You don't have to write a complete program.
• You can easily communicate between one application and another.
• You can easily create applications and link them to cloud services.
• APIs seamlessly connect two applications in a secure manner.

Block vs Object Storage

In block storage, data is stored in blocks; a block is normally read or written as a whole in one operation. Every block has a specific address, and an application can access it via a SCSI call to that address. There is no storage-side metadata associated with a block except its address - a block has no description and no owner. On Linux, filesystems such as XFS and ext sit on top of block storage. Object storage is a data storage architecture that manages data as objects rather than as blocks. An object is a data file combined with all of its metadata into a single unit. AWS S3 is a type of object storage, and you can create custom metadata in S3 according to your requirements. Each object is given an ID calculated from its content (the data and metadata), and an application can then retrieve the object by its unique ID. Object storage is accessed via an API. It is not as fast as block storage but has its own advantages in the industry.
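The content-derived object ID mentioned above can be demonstrated with a hash. A minimal sketch - the metadata fields are invented, and real object stores vary in exactly what they hash:

```python
import hashlib

# Sketch of a content-derived object ID: the ID is a hash of the object's
# data plus its metadata, so identical content yields the same ID.

def object_id(data: bytes, metadata: dict) -> str:
    h = hashlib.sha256()
    h.update(data)
    for key in sorted(metadata):           # sort for a stable ID
        h.update(f"{key}={metadata[key]}".encode())
    return h.hexdigest()

a = object_id(b"hello", {"owner": "alice", "type": "text"})
b = object_id(b"hello", {"owner": "alice", "type": "text"})
c = object_id(b"hello", {"owner": "bob",   "type": "text"})

print(a == b)  # True  - same data and metadata give the same ID
print(a == c)  # False - different metadata gives a different ID
```

This is why object storage is naturally API-driven: the caller addresses content by ID, with no notion of block addresses or filesystem paths.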

Cloud Governance - How do you sustain success #7

Industries, compliance regimes, markets, organizations and cloud computing are all dynamic. To keep up with change, there are minimum activities and metrics that should be part of your initial program charter and procedures from the start. Minimum activities:
1. A regular assessment of the cloud governance program. Once a year is usually enough. It is best to time the publication of the review and assessment report to provide insight for such things as contract reviews, meetings of the external board, audits, applications for accreditation and annual reports.
2. An ongoing collection and review of data to compare to benchmarks. A quarterly collection and publication of results is most effective.
3. Frequent, two-way communication on the governance program. Solicit feedback; respond to feedback.
4. Education on how to use governance controls in day-to-day operations.
5. Maintaining a backlog of governance improvement actions to increase the level of maturity and fix issues discovered during assessments.
The metrics, key performance indicators (KPIs) and activities necessary to sustain a governance program should be built in from the start. The kinds of KPIs and metrics that support the value and evolution of the cloud governance program include at minimum:
1. Ratio of planned versus actual cloud services
2. Frequency of exceptions to policies
3. Operational efficiency
4. Average time to onboard
5. Cost reduction
6. Percentage of total and departmental budgets allocated to cloud services
7. Business value alignment - how this is measured depends, in part, on how the organization measures project results (earned vs. planned value, cost variance, etc.)
8. Used vs. idle cloud services
9. Percentage of business service-level requirements met

Application Modernization

Legacy applications are often monolithic. Monolithic applications have two characteristics that make it desirable to modernize them: they are difficult to update, and they are difficult and expensive to scale. Monolithic apps are difficult to update for architectural reasons: because all of an application's components ship together, adding features is difficult and costly given the overhead of complexity and integration challenges. They are challenging and costly to scale for similar reasons: if even one component of an app faces load and performance challenges, it can become necessary to scale up the entire app just to serve that single most-demanding component, which comes with considerable wasted compute. By modernizing an application toward a microservices architecture, components become smaller, loosely coupled, and deployable and scalable independently of one another. The most important way to start any application modernization project is with an application assessment: taking an inventory of what you have is almost always the most obvious way to begin any transformation. Once you have a list, you can plot those applications against an x and y axis of ease/difficulty and potential increase in value if modernized. The applications that fall into the top-right quadrant of this grid - high value, low effort - will be the most obvious, least contentious candidates with which to begin an application modernization project. The most common pattern of application modernization involves refactoring and breaking down a monolithic application into a collection of small, loosely coupled microservices. One approach in this space is known as the "strangler pattern".
Instead of breaking down the monolith all at once, the strangler pattern is about taking the application apart bit by bit, pulling out the easiest and most valuable parts first and as this approach progresses, eventually there is nothing left of the monolith. Often part of refactoring to microservices, replatforming or rehosting applications is almost always part of the modernization process. While it's possible to simply lift and shift applications without doing much of a substantial rewrite, more often, the value is found in restructuring the application to better take advantage of cloud models, often leveraging containers and Kubernetes. Finally another approach to modernization can involve leaving an application in place but securely exposing its functions or data via APIs. This approach, based more heavily on integration than migration, enables, new, cloud native applications to simply take advantage of the capabilities of existing systems and data. Key technologies for application modernization: - Private, hybrid and multi-cloud: while the public cloud is a critical part of any modernization strategy, private, hybrid and multi-cloud strategies are also critically important for security, latency and architectural reasons. For any number of reasons, an organization might not be ready to go straight from the data center to public cloud, and the other cloud models can help solve for all the architectural and policy complexity associated with where certain workloads need to live based on their unique characteristics. - Containers and Kubernetes: Containers and Kubernetes have emerged not only as a challenger to VM's as a form of all-purpose compute in the cloud, but as a key enabler of hybrid cloud and application modernization strategies. Containerization enables an application to be packaged in consistent, lightweight ways so that they can run consistently across desktop, cloud, or on-premises environments. 
This type of flexibility is a real benefit to organizations charting their path forward in the cloud. Kubernetes is a orchestration tool allowing you to run container based workloads.
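The value/effort assessment described above can be sketched as a simple scoring routine. All application names, scores, and cutoffs below are hypothetical examples, not part of the source material.

```python
# Sketch of the modernization assessment described above: plot each
# application on an effort (x) vs. value (y) grid and surface the
# "top-right quadrant" (high value, low effort) candidates first.
# Application names and scores are hypothetical examples.

def top_right_quadrant(apps, effort_cutoff=5, value_cutoff=5):
    """Return apps with low modernization effort and high potential value."""
    return [name for name, (effort, value) in apps.items()
            if effort <= effort_cutoff and value >= value_cutoff]

inventory = {
    # name: (effort 1-10, value 1-10)
    "billing-monolith": (9, 8),   # high value but very high effort
    "internal-wiki":    (2, 3),   # easy, but little to gain
    "order-api":        (3, 9),   # top-right quadrant candidate
    "reporting-batch":  (4, 7),   # top-right quadrant candidate
}

candidates = top_right_quadrant(inventory)
```

In practice the scores would come from the application inventory and stakeholder interviews; the cutoffs are a judgment call per organization.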

Cloud Governance - Benchmark #2

Measure the organization's maturity and readiness for cloud. Maturity models are useful for assessing gaps in process and standards that interfere with establishing and maintaining IT and corporate governance. There are several maturity models to choose from:
• The CMMI Institute's Capability Maturity Model Integration (CMMI) has evolved from a model specific to software engineering into a model for assessing business maturity according to process and metrics [5].
• The COBIT maturity models encompass IT and IT governance, and are less complete in their consideration of business governance.
• The Open Data Center Alliance (ODCA) offers a cloud maturity model, CMM 3.0, inclusive of two key perspectives: business capabilities and technology capabilities [6].

Google Operations

Monitoring, logging and application performance suite

Difference between MySQL and NoSQL

MySQL is a relational database based on tabular design, whereas NoSQL databases are non-relational with a document-based (or other non-tabular) design. MySQL uses a standard query language, SQL, whereas NoSQL databases lack a common standard query language. SQL databases are table based, while NoSQL databases are document, key-value, graph, or wide-column stores. Most SQL databases are vertically scalable: you increase the load a single server can handle by adding components like RAM, SSD, or CPU. In contrast, NoSQL databases are horizontally scalable: they handle increased traffic simply by adding more servers to the database. Example NoSQL databases: MongoDB, HBase, Oracle NoSQL.
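The tabular vs. document contrast can be illustrated in a few lines. This is a sketch only: sqlite3 stands in for a relational database like MySQL, and a list of plain Python dicts stands in for a document store like MongoDB; the table, records, and query are hypothetical.

```python
import sqlite3

# Table-based (SQL): fixed schema, queried with the standard SQL language.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, city TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada', 'London'), (2, 'Lin', 'Taipei')")
sql_result = conn.execute(
    "SELECT name FROM users WHERE city = 'London'").fetchall()

# Document-based (NoSQL style): schemaless records, queried with
# provider-specific filters (plain Python here as a stand-in for
# something like MongoDB's find({"city": "London"})).
documents = [
    {"_id": 1, "name": "Ada", "city": "London", "tags": ["admin"]},
    {"_id": 2, "name": "Lin", "city": "Taipei"},  # fields may differ per doc
]
doc_result = [d["name"] for d in documents if d.get("city") == "London"]
```

Note how the second document simply omits the `tags` field: document stores tolerate that, while the SQL table enforces one schema for every row.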

Google Cloud Storage

Object storage that is secure, durable and scalable

SaaS

Offers on-demand, pay-per-use application software to users. It is platform independent, there is no need to install software on your PC, and a single instance of the software runs for multiple end users, which keeps computing costs low. Computing resources are managed by the vendor, and the service is accessible via a web browser or lightweight client applications. Examples: MS O365, Google Drive, Salesforce. If your business does not want to maintain any IT equipment, then choose SaaS. - Pros: universally accessible from any platform; no need to commute, you can work from anywhere; excellent for collaborative working; vendor provides modest software tools; allows for multi-tenancy - Cons: portability and browser issues; network performance may dictate overall performance; compliance restrictions

Differences between On-Prem, IaaS, PaaS, SaaS(Customer responsibility)

On-Prem: Applications, Data, Runtime, Middleware, OS, Virtualization, Server, Storage, Networking
IaaS: Applications, Data, Runtime, Middleware, OS
PaaS: Applications, Data
SaaS: N/A (the vendor manages the full stack)
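The responsibility split above can be encoded as a simple lookup, since each model is a prefix of the same stack. This is an illustrative sketch; the layer names come from the list above.

```python
# The customer-responsibility split above, expressed as a lookup table.
FULL_STACK = ["Applications", "Data", "Runtime", "Middleware",
              "OS", "Virtualization", "Server", "Storage", "Networking"]

CUSTOMER_MANAGES = {
    "On-Prem": FULL_STACK,        # customer manages everything
    "IaaS":    FULL_STACK[:5],    # Applications through OS
    "PaaS":    FULL_STACK[:2],    # Applications and Data only
    "SaaS":    [],                # everything is the vendor's job
}

def vendor_manages(model):
    """Layers the provider is responsible for under a given model."""
    return [layer for layer in FULL_STACK
            if layer not in CUSTOMER_MANAGES[model]]
```

For example, `vendor_manages("PaaS")` returns everything from Runtime down to Networking, matching the table.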

Cloud Authentication

Password-based authentication - there can be multiple methods for authenticating against a system, and password-based authentication is the simplest form. It is generally considered less secure: many users write down their passwords in notepad files or on sticky notes, and most users will not create a complex password that is difficult to crack. Depending on the cloud service provider, password-based authentication may not be allowed at all. Key-based authentication - in this type of authentication, two related keys are generated: a public key and a private key. If the public key is stored on the server and used as the authentication mechanism, only the holder of the corresponding private key can successfully authenticate. Many Linux-based cloud servers allow only key-based authentication by default.
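On the password side of the discussion above, servers typically avoid storing the password itself. A minimal sketch of salted password hashing using only the standard library (the password string is a hypothetical example; real systems should use a vetted auth library):

```python
import hashlib, hmac, os

# Sketch: store a random salt plus a slow salted hash (PBKDF2), then
# recompute and compare on login. The plaintext password is never stored.
# This illustrates password-based auth only; it is not a substitute for
# key-based (e.g. SSH key) authentication.

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                 # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
```

The slow iteration count (100,000 rounds) is what makes brute-forcing weak passwords expensive, which is exactly the weakness the note above describes.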

AWS Well Architected Framework - Operation Excellence

• Perform operations as code: in the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure) as code and update it with code. You can implement your operations procedures as code and automate their execution by triggering them in response to events. By performing operations as code, you limit human error and enable consistent responses to events.
• Annotate documentation: in an on-premises environment, documentation is created by hand, used by people, and hard to keep in sync with the pace of change. In the cloud, you can automate the creation of annotated documentation after every build (or automatically annotate hand-crafted documentation). Annotated documentation can be used by people and systems. Use annotations as an input to your operations code.
• Make frequent, small, reversible changes: design workloads to allow components to be updated regularly. Make changes in small increments that can be reversed if they fail (without affecting customers when possible).
• Refine operations procedures frequently: as you use operations procedures, look for opportunities to improve them. As you evolve your workload, evolve your procedures appropriately. Set up regular game days to review and validate that all procedures are effective and that teams are familiar with them.
• Anticipate failure: perform "pre-mortem" exercises to identify potential sources of failure so that they can be removed or mitigated. Test your failure scenarios and validate your understanding of their impact. Test your response procedures to ensure that they are effective, and that teams are familiar with their execution. Set up regular game days to test workloads and team responses to simulated events.
• Learn from all operational failures: drive improvement through lessons learned from all operational events and failures. Share what is learned across teams and through the entire organization.
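The "operations as code" principle above can be sketched as event-triggered procedures. The event names and remediation actions below are hypothetical examples, not AWS APIs.

```python
# Sketch of "perform operations as code": procedures are functions, and
# an incoming event dispatches to them automatically instead of a human
# following a written runbook. Event names and actions are hypothetical.

def restart_service(event):
    return f"restarted {event['service']}"

def page_on_call(event):
    return f"paged on-call about {event['service']}"

RUNBOOK = {
    "health_check_failed": [restart_service],
    "disk_full": [page_on_call],
}

def handle_event(event):
    """Run every codified procedure registered for this event type."""
    actions = RUNBOOK.get(event["type"], [page_on_call])  # safe fallback
    return [action(event) for action in actions]
```

Because the runbook is code, it can be version-controlled, reviewed, and tested on game days just like application code.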

What are the various layers in the cloud architecture?

Physical layer: constitutes the physical servers, network, and other hardware aspects.
Infrastructure layer: includes storage, virtualization layers, and so on.
Platform layer: includes the operating system, apps, and other aspects.
Application layer: the layer that the end user directly interacts with.

Google Cloud SQL

Relational Database services for MySQL, PostgreSQL, SQL Server

What is REST API?

Representational State Transfer (REST) is a software architectural style that defines a set of constraints for creating web services. Web services that conform to the REST architectural style, called RESTful web services, provide interoperability between computer systems on the internet. An application implementing a RESTful API defines one or more URL endpoints with a domain, port, path, and/or query string (for example, https://mydomain/user/123?format=json). A PUT request to /user/123 updates user 123 with the body data, while a GET request to /user/123 returns the details of user 123. REST is popular because it is the most logical, efficient, and widespread standard for creating APIs for internet services. In simple terms, REST is any interface between systems that uses HTTP to obtain data and perform operations on those data in any of several formats, such as XML and JSON.
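The GET/PUT semantics on /user/123 described above can be illustrated with a tiny in-memory handler. This is a sketch only: the route shape, user records, and status codes are hypothetical examples, with no web framework involved.

```python
# Minimal in-memory sketch of the RESTful semantics described above:
# GET /user/<id> reads a resource, PUT /user/<id> creates or replaces it.

users = {"123": {"name": "Ada"}}

def handle(method, path, body=None):
    _, resource, user_id = path.split("/")    # e.g. "/user/123"
    if resource != "user":
        return 404, None                      # unknown resource collection
    if method == "GET":
        return (200, users[user_id]) if user_id in users else (404, None)
    if method == "PUT":
        users[user_id] = body                 # idempotent create-or-replace
        return 200, body
    return 405, None                          # method not allowed
```

The key REST property shown here is that the URL identifies the resource and the HTTP verb identifies the operation, so repeating the same PUT leaves the system in the same state.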

Mention what is the difference between elasticity and scalability in cloud computing?

Scalability in cloud is the way in which you increase the ability to service additional workloads either by adding new servers or accommodating it within the existing servers. Elasticity is the process by which you can either add or remove virtual machines depending on the requirement in order to avoid wastage of resources and reduce costs.
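The distinction above can be made concrete with a toy elasticity rule: scalability is the *ability* to add capacity, while elasticity is *automatically* adding and removing it to track demand. The thresholds below are hypothetical.

```python
# Toy elasticity rule: scale out above `high` utilization, scale in
# below `low`, otherwise hold steady. Thresholds are hypothetical.

def desired_instances(current, cpu_percent, low=30, high=70):
    """Decide the next instance count from current CPU utilization."""
    if cpu_percent > high:
        return current + 1               # scale out to absorb load
    if cpu_percent < low and current > 1:
        return current - 1               # scale in to stop paying for idle VMs
    return current                       # within the comfortable band
```

Running this rule periodically against live metrics is what makes a system elastic, not merely scalable: the same infrastructure that can grow also shrinks again, which is how the cost waste mentioned above is avoided.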

Microservices

Separate business logic into distinct functions (e.g. login, authentication): instead of one big program, several smaller applications communicate via well-defined APIs, usually over HTTP. Microservices are in high demand. - Advantages: language independent; each microservice can be owned by its own small team; fast iterations; fault isolation; pairs well with containers; scalable - Disadvantages: complexity (complex networking); overhead (more servers and databases; requires broad knowledge of architecture and containerization)

Google Migrate for Compute Engine

Server and virtual machine migration to Compute Engine

Google App Engine

Serverless application platform for apps and backends

Explain what are the different modes of software as a service (SaaS)?

Simple multi-tenancy: in this mode of SaaS you have your own independent resources that you don't share with anybody. Fine-grain multi-tenancy: in this mode of SaaS deployment the resources are shared between multiple tenants even though the functionality remains the same.

Containerization

Software needs to be designed and packaged differently in order to take advantage of containers—a process commonly referred to as containerization. When containerizing an application, the process includes packaging an application with its relevant environment variables, configuration files, libraries, and software dependencies. The result is a container image that can then be run on a container platform.
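The packaging step described above is usually expressed as a build file. Below is a hypothetical Dockerfile sketch; the base image tag, file names, and environment variable are illustrative assumptions, not from the source.

```dockerfile
# Hypothetical sketch: containerizing a small Python application.
# Base image, file names, and env vars are illustrative only.
FROM python:3.12-slim

WORKDIR /app

# Package the dependency list and install libraries into the image.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Package the application code and its configuration files.
COPY app.py config.yaml ./

# Environment variables travel with the image too.
ENV APP_ENV=production

CMD ["python", "app.py"]
```

Building this file produces the container image the note above describes: application, dependencies, configuration, and environment bundled into one artifact that any container platform can run.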

What are some of the popular open source cloud computing platforms?

Some of the important open source cloud computing platforms are as below OpenStack Cloud Foundry Docker Apache Mesos KVM

Describe the security aspects that the cloud offers?

Some of the important security aspects that the cloud offers are as follows. Access control: lets administrators control which users can access resources within the cloud ecosystem. Identity management: provides authorization for the application services. Authorization and authentication: lets only authenticated and authorized users access the applications and data.

AWS Well Architected Framework - General Principles

• Stop guessing your capacity needs: eliminate guessing about your infrastructure capacity needs. When you make a capacity decision before you deploy a system, you might end up sitting on expensive idle resources or dealing with the performance implications of limited capacity. With cloud computing, these problems can go away: you can use as much or as little capacity as you need, and scale up and down automatically.
• Test systems at production scale: in the cloud, you can create a production-scale test environment on demand, complete your testing, and then decommission the resources. Because you only pay for the test environment when it's running, you can simulate your live environment for a fraction of the cost of testing on premises.
• Automate to make architectural experimentation easier: automation allows you to create and replicate your systems at low cost and avoid the expense of manual effort. You can track changes to your automation, audit the impact, and revert to previous parameters when necessary.
• Allow for evolutionary architectures: in a traditional environment, architectural decisions are often implemented as static, one-time events, with a few major versions of a system during its lifetime. As a business and its context continue to change, these initial decisions might hinder the system's ability to deliver changing business requirements. In the cloud, the capability to automate and test on demand lowers the risk of impact from design changes. This allows systems to evolve over time so that businesses can take advantage of innovations as a standard practice.
• Drive architectures using data: in the cloud you can collect data on how your architectural choices affect the behavior of your workload. This lets you make fact-based decisions on how to improve your workload. Your cloud infrastructure is code, so you can use that data to inform your architecture choices and improvements over time.
• Improve through game days: test how your architecture and processes perform by regularly scheduling game days to simulate events in production. This will help you understand where improvements can be made and can help develop organizational experience in dealing with events.

Cloud Governance - Establishing Governance Measures and Related Metrics # 6

Strategy & Roadmap
• Plan/Adopt: (4) % of budget allocated to IT; (4) % of IT infrastructure resource utilization; (4) % of IT human resource utilization; (6) % variance in schedule; (3) average time to deploy; (3) average time to onboard; (1) ratio of planned vs. actual cloud services; (4) % of cost reduced from moving applications into cloud environments; (4) delta between actual spending and budget; (5) total revenue generated by tech dpt; (2) external customer Net Promoter score; (5) average time to deliver new service; (1) average time to educate and train all staff on new services available; (7) total number of contracts with single cloud provider
• Reference Architecture: (5) % of business service level requirements met; (6) % of service provider exceptions / service provider integrations; (2) % of SOA-based services consumed in Cloud RA; (2) % of existing projects that are not part of Cloud transformation
• Service Reuse: (5) % of services registered / # of services reused; (1) % of service compatibility reviews exercised; (1) % of cloud-ability reviews exercised; (5) % of idle services decreased; (1) % of service provider usage reviews exercised; (5) total revenue generated by tech dpt; (4) % of cloud resources repurposed from existing resources

Application Services Build
• Rehost: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (5) average time to deliver new service; (4) % of cost reduced from moving applications into cloud environments; (5) total revenue generated by tech dpt; (4) delta between actual spending and budget; (1) average time to educate and train all staff on new services available; (2) % of developers that have self-service access to cloud resources
• Migrate: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (5) average time to deliver new service; (4) % of cost reduced from moving applications into cloud environments; (5) total revenue generated by tech dpt; (4) delta between actual spending and budget; (1) average time to educate and train all staff on new services available; (2) % of developers that have self-service access to cloud resources
• Build: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (5) average time to deliver new service; (4) % of cost reduced from moving applications into cloud environments; (5) total revenue generated by tech dpt; (4) delta between actual spending and budget; (4) % of cloud resources repurposed from existing resources; (1) average time to educate and train all staff on new services available; (2) % of developers that have self-service access to cloud resources
• Extend: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (5) average time to deliver new service; (4) % of cost reduced from moving applications into cloud environments; (5) total revenue generated by tech dpt; (4) delta between actual spending and budget; (1) average time to educate and train all staff on new services available; (2) % of developers that have self-service access to cloud resources
• Test: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (2) % of developers that have self-service access to cloud resources; (3) average time to onboard

Software Services Build
• Setup: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (1) average time to educate and train all staff on new services available; (2) % of developers that have self-service access to cloud resources
• Customize: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (5) average time to deliver new service; (4) % of cost reduced from moving applications into cloud environments; (5) total revenue generated by tech dpt; (4) delta between actual spending and budget; (2) % of developers that have self-service access to cloud resources
• Integrate: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (1) average time to educate and train all staff on new services available; (2) % of developers that have self-service access to cloud resources
• Test: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (2) % of developers that have self-service access to cloud resources; (3) average time to onboard

Infrastructure Services Build
• Build: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (5) average time to deliver new service; (4) % of cost reduced from moving applications into cloud environments; (5) total revenue generated by tech dpt; (4) delta between actual spending and budget; (4) % of cloud resources repurposed from existing resources; (1) average time to educate and train all staff on new services available; (2) % of developers that have self-service access to cloud resources
• Host: (4) % of IT human resource utilization; (4) % of budget allocated to IT; (5) average time to deliver new service; (4) % of cost reduced from moving applications into cloud environments; (5) total revenue generated by tech dpt; (4) delta between actual spending and budget; (1) average time to educate and train all staff on new services available; (2) % of developers that have self-service access to cloud resources

Service Deployment
• Subscribe: (6) % of enterprise services subscribed; (2) % of enterprise cloud services subscribed; (7) % of compliance with security policies; (2) % of cloud apps available via mobile device
• Consume: (2) actual against expected consumption; (2) percentage of consumption patterns (IaaS, SaaS, PaaS); (1) frequency of exceptions; (7) severity of exceptions; (5) number of consumer/provider combinations impacted by exceptions; (6) number of SLAs impacted by exceptions
• Unsubscribe: (2) average # of subscribers per service; (2) rate of change to subscriber count; (6) % of unused services; (5) ratio of # of subscriptions / # of un-subscriptions; (5) % of unsubscribed potential customers; (7) number of incidents related to unsubscribe

Management Service
• Operate/Manage/Automate: (5) % of requirements addressed; (2) % utilization of services (IaaS, SaaS, PaaS); (2) % of resources utilized; (2) % of service requests; (3) % of incidents reported; (6) average time-to-resolution; (5) % of cloud processes automated; (1) total number of DR tests per year for all apps; (7) total number of contracts with single cloud provider
• Monitor: (2) % of developers that have self-service access to cloud resources; (6) average cloud-based application response time; (3) spend on over-provisioning of cloud resources; (3) average time of VM in cloud environment
• Retire: (2) frequency of service usage; (6) % of unused services; (6) % of redundant services; (3) average utilization of resources; (3) # of incidents reported; (6) number of service complaints

Google Dataflow

Streaming analytics for stream and batch processing

How important is the platform as a service?

Platform as a Service is important in cloud computing because it provides an application layer on top of a fully virtualized infrastructure layer, letting you work with the platform as if it were a single layer without managing the infrastructure beneath it.

Difference between MySQL and PostgreSQL

The architectural difference between MySQL and PostgreSQL is that MySQL is a relational database management system, whereas PostgreSQL is an object-relational database management system. This means that Postgres includes features like table inheritance and function overloading, which can be important to certain applications. Postgres also adheres more closely to SQL standards. MySQL is a product of Oracle Corporation, while PostgreSQL is a product of the PostgreSQL Global Development Group. MySQL is not extensible, whereas PostgreSQL is highly extensible. MySQL provides temporary tables but does not provide materialized views; PostgreSQL provides both temporary tables and materialized views. Which one is better depends on the need.

Cloud Governance - Establishing a Cloud Governance Model #3

The categories of concern for cloud governance can be broadly summarized as: 1. Masked Complexity associated with the integration of legacy environments, managing new and updated service workflows, microservices and event-driven applications, and managing multiple service providers as part of a single business or technical architecture. 2. Organizational Dynamics as new ways to build, test and release software are introduced into the organization, along with the ease and simplicity with which lines of business may purchase XaaS. 3. Risks: Increase in security and compliance considerations across environments as data is shared or systems are integrated. Complex systems combining offerings from multiple service providers create a potential for operational disruption if there are changes in provider(s). Financial exposure and indemnities for failure to meet contractual obligations also change. 4. Metrics: new dynamics also means coming up with new ways to measure productivity, service levels and quality. 5. New Financial Models: this includes understanding current accounting guidelines and designing financial strategy and policies to make the best decisions about when to capitalize SaaS or other cloud computing costs (in countries where it is allowed) vs. keeping them as an operating expense. It also includes the need to build guidance for when monetization of data is acceptable. 6. Capacity to Adapt: lines of business, operational functions and technology each need guidelines on how to maintain data privacy, observe data residency rules, and protect intellectual property, while adopting new services or technologies. Business lines, in particular, must address how new technologies such as virtualization and artificial intelligence will change operations, and the speed at which these evolutions will take place.

What are system integrators in Cloud Computing?

The cloud can consist of multiple components that can be complex. The system integrator in the cloud is the strategy that provides the process of designing the cloud, integrating the various components for creating a hybrid or a private cloud network among other things.

Containers vs. VMs

The easiest way to understand a container is to understand how it differs from a traditional virtual machine (VM). In traditional virtualization —whether it be on-premises or in the cloud—a hypervisor is leveraged to virtualize physical hardware. Each VM then contains a guest OS, a virtual copy of the hardware that the OS requires to run, along with an application and its associated libraries and dependencies. Instead of virtualizing the underlying hardware, containers virtualize the operating system (typically Linux) so each individual container contains only the application and its libraries and dependencies. The absence of the guest OS is why containers are so lightweight and, thus, fast and portable.

What are Cloud Workloads?

The work function (application or service) processed by a remote server or instance at any given time; it generally has users or applications interacting with it through the internet. Cloud workloads can range from a web server to a database to a container. In computing, the workload is the amount of processing that the computer has been given to do at a given time; it consists of some amount of application programming running on the computer and usually some number of users connected to and interacting with the computer's applications. Workloads are the resources running in the cloud, e.g. virtual machines, databases, applications, etc. In business, this term usually refers to anything run on a cloud by a tenant as a service.

Benefits of Containers

The primary advantage of containers, especially compared to a VM, is a level of abstraction that makes them lightweight and portable. Lightweight: containers share the machine's OS kernel, eliminating the need for a full OS instance per application and making container files small and easy on resources. Their smaller size, especially compared to virtual machines, means they can spin up quickly and better support cloud-native applications that scale horizontally. Portable and platform independent: containers carry all their dependencies with them, meaning that software can be written once and then run without needing to be reconfigured across laptops, cloud, and on-premises computing environments. Supports modern development and architecture: due to a combination of their deployment portability/consistency across platforms and their small size, containers are an ideal fit for modern development and application patterns (such as DevOps, serverless, and microservices) that are built around regular code deployments in small increments. Improves utilization: like VMs before them, containers enable developers and operators to improve CPU and memory utilization of physical machines. Containers go even further: because they also enable microservice architectures, application components can be deployed and scaled more granularly, an attractive alternative to having to scale up an entire monolithic application because a single component is struggling with load.

Cloud Deployment models

There are four deployment models: private cloud (a car), public cloud (a bus), hybrid cloud (renting a private taxi), and multi-cloud (a time-share).

Level of Cloud Adoption

These measures can be tracked to determine the extent to which cloud computing has been adopted across the enterprise:
• % of existing projects that are not part of cloud transformation
• % of service requests
• % of enterprise cloud services subscribed
• Frequency of service usage
• Average # of subscribers per service
• % of SOA-based services consumed in cloud RA
• Actual against expected consumption
• % utilization of services (IaaS, SaaS, PaaS)
• Percentage of consumption patterns (IaaS, SaaS, PaaS)
• % of resources utilized
• Rate of change to subscriber count
• % of cloud apps available via mobile device
• External customer Net Promoter scores
• % of developers that have self-service access to cloud resources

Business Value Alignment

These measures can be used to gauge the extent to which cloud computing adoption is in alignment with the overall business objectives for the enterprise:
• % of idle services decreased
• % of services registered / # of services reused
• % of business service-level requirements met
• % of unsubscribed potential customers
• Ratio of # of subscriptions / # of unsubscriptions
• % of requirements addressed
• Number of consumer/provider combinations impacted by exception
• Total revenue generated by tech dpt.
• % of cloud processes automated
• Average time to deliver new services

Service-Driven Integration

These measures can be used to track the extent to which cloud deployment builds upon the existing services-based ecosystem (as per the service catalog) within the enterprise:
• % of service provider exceptions / service provider integrations
• % of unused services
• % of enterprise services subscribed
• % of redundant services
• Number of SLAs impacted by exceptions
• Number of service complaints
• Average time-to-resolution
• Average cloud-based application response time

Risk Mitigation

These measures can be used to track the preventive measures taken to avoid potential risks arising from cloud adoption:
• % of compliance with security policies
• % variance in schedule
• Number of incidents related to unsubscribe
• Severity of exceptions
• Total number of contracts with single cloud provider

Cost Reduction

These measures can be used to drive the funding of cloud computing transformations across the enterprise:
• % of IT human resource utilization
• % of IT infrastructure resource utilization
• % of budget allocated to IT
• % of cost reduced from moving applications into cloud environments
• Delta between actual spending and budget
• % of cloud resources repurposed from existing resources

Operational Efficiency

These measures can be used to track the parameters that drive operational efficiency for the ongoing sustenance of application and infrastructure components in the cloud:
• % of incidents reported
• Average time to deploy
• Average time to onboard
• Spend on over-provisioning of cloud resources
• Average time of VM in cloud environment
• Average utilization of resources

Level of Cloud Computing Governance

These measures help establish the extent to which cloud computing governance is in place across the enterprise:
• % of cloud-ability reviews exercised
• % of service compatibility reviews exercised
• % of service provider usage reviews exercised
• Ratio of planned versus actual cloud services
• Frequency of exceptions
• Average time to educate and train all staff on new services available
• Total number of DR tests per year for all apps

PaaS

This service provides a programming language execution environment, an operating system, a web server, and a database. It encapsulates an environment where users can build, compile, and run their programs without worrying about the underlying infrastructure. In this model, you manage data and application resources; all other resources are managed by the vendor. Examples: AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App Engine. If your company requires a platform for building software products, pick PaaS. - Pros: cost-effective rapid development; faster time to market for developers; easy deployment of web applications; private or public deployment is possible - Cons: developers are limited to the provider's languages and tools; migration issues, such as the risk of vendor lock-in

IaaS

This service offers computing architecture and infrastructure: all computing resources, but in a virtual environment so that multiple users can access them. Resources include data storage, virtualization, servers, and networking. Most vendors are responsible for managing those four resources; the user is responsible for handling the other resources, such as applications, data, runtime, and middleware. Examples: Amazon EC2, Rackspace, GoGrid. If your business needs a virtual machine, opt for IaaS. - Pros: the cloud provides the infrastructure; enhanced scalability, with support for dynamic workloads; IaaS is flexible - Cons: security issues; network and service delays. AWS services: Amazon RDS, EC2, S3, IoT, Elastic Beanstalk

When you are transferring data how do you ensure it is secure?

To ensure that data being transported is secure, verify that a suitable encryption key and mechanism are implemented and that there is no leak in the data along the way.

VPC - Virtual Private Cloud

Use case: 5 servers are planned to be migrated to the cloud; the location where they can be launched in AWS is a VPC. Analogy: John works as a finance manager and is moving to a new place. He decides to buy a new house and, for additional income, to rent out part of it, so he partitions the house. Among the 5 servers, the planned architecture is to launch 3 servers in the first partition (subnet 1) and 2 servers in the second partition (subnet 2). John adds an extra layer of security to each partition. He builds two entrances so that individuals in partition 1 can go directly out of the house; hypothetically, imagine no doors are built for partition 2. Likewise, the solutions architect creates a route so that servers in subnet 1 can connect to the internet; for subnet 2, no routes are created, so its servers cannot reach the internet. Some subnets hold sensitive data and are therefore kept highly secured and isolated. Inter-communication: there can be a need for servers in two subnets to speak with each other, so you can create a route that allows communication between the two subnets; this is done with routing. In AWS, a default VPC is provided, and additional VPCs can be created by the architect. Each subnet can hold a maximum number of servers; if more servers need to be added, you need to add more capacity. Every EC2 instance we create should be inside a VPC, and an EC2 instance launched in a VPC will be protected or not based on the VPC's configuration. The architecture changes as you go into more technical aspects.
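The "house and partitions" analogy above maps to CIDR address math. A minimal sketch using the standard library, with a hypothetical VPC address range:

```python
import ipaddress

# Carve a hypothetical VPC address range into smaller subnet ranges,
# mirroring the two-partition example above.
vpc = ipaddress.ip_network("10.0.0.0/16")        # the VPC's address space

# Two /24 partitions: subnet 1 (3 public servers), subnet 2 (2 isolated).
subnet_1, subnet_2 = list(vpc.subnets(new_prefix=24))[:2]

# Both partitions live inside the house (the VPC)...
assert subnet_1.subnet_of(vpc) and subnet_2.subnet_of(vpc)
# ...but their address ranges do not overlap, so each holds its own servers.
assert not subnet_1.overlaps(subnet_2)
```

Whether a server in one of these ranges can reach the internet is then decided separately, by the route tables attached to each subnet, just as in the door analogy.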

Utility Computing

Utility computing is a pay-as-you-go, on-demand service model in which the provider manages and operates the computing services, and you choose which services to access, all of which are deployed in the cloud.
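The billing side of pay-as-you-go can be sketched as metered usage multiplied by a per-unit rate. The service names and hourly rates below are made-up numbers for illustration, not real provider pricing.

```python
# Hypothetical metered rates (per hour of use).
RATES_PER_HOUR = {
    "vm.small": 0.05,
    "vm.large": 0.40,
    "storage.gb": 0.0001,
}

def bill(usage: dict) -> float:
    """Charge only for what was consumed: sum of hours x rate per service."""
    return round(sum(RATES_PER_HOUR[svc] * hours for svc, hours in usage.items()), 2)

# Run a small VM for 100 hours and a large VM for 10 hours;
# you pay for that usage and nothing more.
print(bill({"vm.small": 100, "vm.large": 10}))  # 0.05*100 + 0.40*10 = 9.0
```

The key contrast with traditional provisioning is that there is no upfront capacity purchase: the bill is zero when usage is zero.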

VPC Peering

VPC peering is a network connection between two VPCs that enables communication between instances in both VPCs. VPC peering is now possible between regions, and also between multiple accounts. VPC peering does not act like a transit VPC: peering relationships are not transitive. You cannot create a VPC peering connection between VPCs with matching or overlapping IPv4 CIDR blocks. For example, suppose you have a peering connection between VPC A and VPC B (pcx-aaaabbbb) and between VPC A and VPC C (pcx-aaaacccc), but no peering connection between VPC B and VPC C. You cannot route packets directly from VPC B to VPC C through VPC A.
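The overlapping-CIDR restriction can be checked with the standard `ipaddress` module. The VPC ranges below are illustrative stand-ins for VPC A, B, and C.

```python
import ipaddress

# Illustrative CIDR blocks for three VPCs.
vpc_a = ipaddress.ip_network("10.0.0.0/16")
vpc_b = ipaddress.ip_network("10.1.0.0/16")
vpc_c = ipaddress.ip_network("10.1.0.0/17")  # overlaps vpc_b's range

def can_peer(net1: ipaddress.IPv4Network, net2: ipaddress.IPv4Network) -> bool:
    """Peering requires the two VPCs' CIDR blocks to be disjoint."""
    return not net1.overlaps(net2)

print(can_peer(vpc_a, vpc_b))  # True: disjoint ranges, peering allowed
print(can_peer(vpc_b, vpc_c))  # False: overlapping ranges, peering refused
```

This only models the address-overlap rule; the non-transitivity of peering is a routing restriction AWS enforces regardless of addressing.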

Google Compute Engine

Virtual machines running in Google data centers. Google Compute Engine is the Infrastructure as a Service component of Google Cloud Platform, built on the global infrastructure that runs Google's search engine, Gmail, YouTube and other services. It lets users create and run virtual machines on demand on Google's infrastructure, offering the scale, performance, and value to easily launch large compute clusters. There are no upfront investments, and you can run thousands of virtual CPUs on a system that offers quick, consistent performance.

Why do you need the virtualization platform to implement cloud?

Virtualization lets you create virtual versions of storage, operating systems, applications, networks and so on. Used well, virtualization helps you augment your existing infrastructure: you are able to run multiple applications and operating systems on existing servers.

Cloud Governance - Understand #1

What is cloud governance? It is a set of agreed-upon policies and standards, based on risk assessments and an agreed-upon framework. The introduction of cloud computing into an organization affects roles, responsibilities, processes and metrics. Without cloud governance in place to provide guidelines to navigate risk and efficiently procure and operate cloud services, an organization may find itself faced with these common problems:
• Misalignment with enterprise objectives
• Frequent policy exception reviews
• Stalled projects
• Compliance or regulatory penalties or failures
• Budget overruns
• Incomplete risk assessments

Cloud Governance - Alignment #4

While there are consistencies in IT operations and governance models globally, there is much less standardization for corporate governance models and frameworks, which differ by country. In the U.S., corporate governance is largely bound by state law and, for publicly traded companies, the Securities and Exchange Commission (SEC). Internationally, compliance for both IT and corporate governance is often set at the country level. For multi-national organizations, this means that aligning corporate governance to cover cloud governance may require more effort than aligning IT governance.

Google Cloud Anthos

Google Cloud Anthos is a hybrid, cloud-agnostic container environment. It is a software product that enables enterprises to use container clusters instead of cloud virtual machines (VMs) to bridge gaps between legacy software and cloud infrastructure. Anthos was initially launched under the name Google Cloud Services Platform and was rebranded as Google Cloud Anthos in 2019.
