Google Cloud Associate Engineer - 367


App Engine Random Selection splitting

Is useful when you want to distribute workloads evenly.

Resetting a VM

Will restart the VM. The properties of the VM will not change, but any data in memory will be lost.

Virtual Machine Configuration Details

- Name of the VM
- Region and zone where the VM will run
- Machine type, which determines the number of CPUs and the amount of memory in the VM
- Boot disk, which includes the OS the VM will run on, and boot disk type, which can be either Standard Persistent Disk or SSD Persistent Disk; you can also specify the size of the disk
- Identity and API access, where you can specify a service account for the VM and set the scope of API access

Container Registry

A GCP service for storing container images. Once you have created a registry and pushed images to it, you can view the contents of the registry and the image details using Cloud Console, Cloud SDK, and Cloud Shell.

Limitations of Preemptible Virtual Machines

- May terminate at any time; if they terminate within 10 minutes of starting, you will not be charged for that time.
- Will be terminated within 24 hours.
- May not always be available; availability may vary across zones and regions.
- Cannot migrate to a regular VM.
- Cannot be set to automatically restart.
- Are not covered by any service level agreement (SLA).

Guidelines for Planning, Deploying, and Managing VMs

- Choose the machine type with the fewest CPUs and the smallest amount of memory that still meets your requirements, including peak capacity. This will minimize the cost of the VM.
- Use the console for ad hoc administration of VMs. Use scripts with gcloud commands for tasks that will be repeated.
- Use startup scripts to perform software updates and other tasks that should be performed on startup.
- If you make many modifications to a machine image, consider saving it and using it with new instances rather than running the same set of modifications with every instance.
- If you can tolerate unplanned disruptions, use preemptible VMs.
- Use SSH or RDP to access a VM to perform operating system-level tasks.
- Use Cloud Console, Cloud Shell, or Cloud SDK to perform VM-level tasks.

Factors to consider when running VMs in Zones & Regions

- Cost, which can vary by region.
- Data locality regulations, such as keeping data about EU citizens in the EU.
- High availability; if you are running multiple instances, you may want them in different zones and possibly different regions.
- Latency; keeping instances and data geographically close to users can help reduce latency.
- Need for specific hardware platforms, which can vary by region.

Guidelines to consider when choosing a Storage Solution

- Read and Write Patterns: Some applications, such as accounting and retail sales applications, read and write data frequently, with frequent updates. They are best served by a storage solution such as Cloud SQL if the data is structured; however, if you need a global database that supports relational read/write operations, then Cloud Spanner is a better choice. If you are writing data at consistently high rates and in large volumes, consider Bigtable. If you are writing files and then downloading them in their entirety, Cloud Storage is a good option.
- Consistency: If you need strong consistency, meaning reads always return the latest data, then Cloud SQL and Cloud Spanner are good options. Datastore is a good option if your data is unstructured; otherwise, consider one of the relational databases.
- Transaction Support: The relational databases Cloud SQL and Cloud Spanner, as well as Datastore, provide transaction support.
- Cost: The cost of using a particular storage system will depend on the amount of data stored, the amount of data retrieved or scanned, and the per-unit charges of the storage system. If you are using a storage service in which you provision VMs, you will have to account for that cost as well.
- Latency: Bigtable provides consistently low-millisecond operations. Spanner can have longer latencies, but with those longer latencies you get a globally consistent, scalable database.

Guidelines for Managing Virtual Machines

- Use labels and descriptions; these will help you identify the purpose of an instance and also help when filtering lists of instances.
- Use managed instance groups to enable autoscaling and load balancing. These are key to deploying scalable and highly available services.
- Use GPUs for numeric-intensive processing, such as machine learning and high-performance computing.
- Use snapshots to save the state of a disk or to make copies. These can be saved in Cloud Storage and used as backups.
- Use preemptible instances for workloads that can tolerate disruption. This will reduce the cost of the instance by up to 80 percent.

Commands to create topics and subscriptions in Pub/Sub

- gcloud pubsub topics create [TOPIC-NAME]
- gcloud pubsub subscriptions create [SUBSCRIPTION-NAME] --topic [TOPIC-NAME]
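
You can verify the flow end to end by publishing and then pulling a message. A minimal sketch, assuming hypothetical names ace-topic and ace-sub:

# create a topic and a subscription attached to it
gcloud pubsub topics create ace-topic
gcloud pubsub subscriptions create ace-sub --topic ace-topic
# publish a message, then pull and acknowledge it
gcloud pubsub topics publish ace-topic --message "hello"
gcloud pubsub subscriptions pull ace-sub --auto-ack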

Most important commands for working with instances

- --account specifies a GCP account to use, overriding the default account.
- --configuration uses a named configuration file that contains key-value pairs.
- --flatten generates separate key-value records when a key has multiple values.
- --format specifies an output format, such as the default (human-readable) format, CSV, JSON, YAML, text, or other possible options.
- --help displays a detailed help message.
- --project specifies a GCP project to use, overriding the default project.
- --quiet disables interactive prompts and uses defaults.
- --verbosity specifies the level of detail of output messages. Options are debug, info, warning, and error.

Commonly used parameters with create instance command

- --boot-disk-size, the size of the boot disk for a new disk
- --boot-disk-type, the type of the boot disk
- --labels, a list of key-value pairs in the format KEY=VALUE
- --machine-type, the machine type to use; if not specified, n1-standard-1 will be used
- --preemptible, which, if included, specifies that the VM will be preemptible
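
Putting these parameters together, a minimal sketch of a create command, assuming hypothetical names and a zone (ace-instance-1, us-central1-a):

# create a preemptible VM with a 100 GB SSD boot disk and a label
gcloud compute instances create ace-instance-1 \
    --zone=us-central1-a \
    --machine-type=n1-standard-1 \
    --boot-disk-size=100GB \
    --boot-disk-type=pd-ssd \
    --labels=env=dev \
    --preemptible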

App Engine Admin

Grants read, write, and modify permissions to the application and configuration settings. The role name used in gcloud commands is roles/appengine.appAdmin.
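
To grant this role, you bind a member to it at the project level. A minimal sketch, assuming a hypothetical project and user:

# grant the App Engine Admin role to a user
gcloud projects add-iam-policy-binding ace-exam-project \
    --member='user:dev1@example.com' \
    --role='roles/appengine.appAdmin'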

Infrastructure as a Service (IaaS)

A computing service where customers can create and manage VMs themselves. This model gives the cloud user the greatest control of all the computing services. Users can choose the OS to run, which packages to install, and when to back up and perform other maintenance operations. Compute Engine is GCP's IaaS product.

Cloud Firestore

A GCP-managed NoSQL database service designed as a backend for highly scalable web and mobile applications. One advantage of Cloud Firestore is that it is designed for storing, synchronizing, and querying data across distributed applications, like mobile apps. Apps can be automatically updated in close to real time when data is changed on the backend. Cloud Firestore supports transactions and provides multiregional replication.

Kubernetes Cluster Architecture

A Kubernetes cluster includes a cluster master node and one or more worker nodes. The master node manages the cluster. Cluster services, such as the Kubernetes API server, resource controllers, and schedulers, run on the master. The Kubernetes API server is the coordinator for all communications to the cluster. The master determines what containers and workloads are run on each node. The master can be replicated and distributed for high availability and fault tolerance. When a Kubernetes cluster is created from either Google Cloud Console or a command line, a number of nodes are created as well. These are Compute Engine VMs. The default VM type is n1-standard-1, but you can specify a different machine type when creating the cluster. Kubernetes deploys containers in groups called pods. Containers within a single pod share storage and network resources; they share an IP address and port space. A pod is a logically single unit for providing a service. Containers are deployed and scaled as a unit.
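
A minimal sketch of creating a cluster from the command line, assuming hypothetical names and a zone (ace-cluster, us-central1-a):

# create a three-node cluster; each node is a Compute Engine VM
gcloud container clusters create ace-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --machine-type=n1-standard-2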

Cloud Datastore

A NoSQL document database. This database uses the concept of a document, or collection of key-value pairs, as a basic building block. Documents allow for flexible schemas. For example, a document about a book may have key-value pairs listing author, title, and date of publication. Cloud Datastore is accessed via a REST API that can be used from applications running in Compute Engine, Kubernetes Engine, or App Engine. This database will scale automatically based on load. It will also shard, or partition, data as needed to maintain performance. Datastore is a managed service that takes care of replication, backups, and other database administration tasks. Cloud Datastore is well suited to applications that demand high scalability and structured data and do not always need strong consistency when reading data. Datastore is used for nonanalytic, nonrelational storage needs. Product catalogs, user profiles, and user navigation history are examples of the kinds of applications that use Cloud Datastore.

Kubernetes Replica Set

A Replica Set is a controller used by a deployment that ensures the correct number of identical pods are running. For example, if a pod is determined to be unhealthy, a controller will terminate that pod. The Replica Set will detect that not enough pods for that application or workload are running and will create another. Replica Sets are also used to update and delete pods.

Kubernetes Services

A Service in Kubernetes is an object that provides API endpoints with a stable IP address, allowing applications to discover pods running a particular application. Services update when changes are made to pods, so they maintain an up-to-date list of pods running an application.

Virtual Machines

A basic unit of computing resources. GCP offers preconfigured VMs with varying numbers of vCPUs and amounts of memory. You can create a custom configuration if the preconfigured offerings don't meet your needs. You can create multiple VMs running different OSs and applications. VMs are abstractions of physical servers; they are essentially programs that emulate physical servers and provide CPU, memory, storage, and other services.

Kubernetes Cluster

A collection of nodes (Compute Engine VMs). A Kubernetes Cluster includes one master node and one or more worker nodes.

Kubernetes Container Orchestration

A collection of tasks within Kubernetes Engine that involves running containers on a cluster of VMs, determining where to run containers, monitoring the health of containers, and managing the full lifecycle of VM instances.

Cloud SDK

A command-line interface for managing GCP resources, including VMs, disk storage, network firewalls, and virtually any other resource you might deploy in GCP. Cloud SDK has client libraries for Java, Python, Node.js, Ruby, Go, .NET, and PHP. The Cloud SDK is also available as a Docker image, which is an easy and clean way to work with it.

Kubernetes Pod Template

A definition of how to run a pod.

Cloud DNS

A domain name service provided in GCP. Cloud DNS is a high-availability, low-latency service for mapping domain names, such as example.com, to IP addresses. It is designed to automatically scale so that customers can have thousands or millions of addresses without concern for scaling the underlying infrastructure. It also provides private zones that allow you to create custom names for your VMs if you need them.

Cloud Functions Receiving Events from Pub/Sub

A function can be executed each time a message is written to a Pub/Sub topic. You can use Cloud Console or gcloud commands to deploy functions triggered by a Cloud Pub/Sub event. Message data is base64-encoded so that binary data can be stored in a message in a text format.
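
A minimal sketch of deploying such a function, assuming a hypothetical function pubsub_handler and topic ace-exam-topic:

# deploy a Python function invoked whenever a message is published to the topic
gcloud functions deploy pubsub_handler \
    --runtime python37 \
    --trigger-topic ace-exam-topic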

Region

A geographical location, such as asia-east1, europe-west2, and us-east4. The zones within a region are linked by low-latency, high-bandwidth network connections.

Nearline Storage

A good option for when data needs to be kept for extended periods of time but is rarely accessed. It costs less than regional or multiregional storage and is optimized for infrequent access. There are costs associated with retrieving data stored in Nearline Storage. Nearline storage is designed for use cases in which you expect to access files less than once per month.
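
A minimal sketch of creating a Nearline bucket with gsutil, assuming a hypothetical bucket name:

# create a bucket with the nearline storage class in us-central1
gsutil mb -c nearline -l us-central1 gs://ace-exam-nearline-bucket/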

Kubernetes Job

A job is an abstraction of a workload. Jobs create pods and run them until the application completes a workload. Jobs are specified in a configuration file that includes specifications about the container to use and what command to run.

Coldline Storage

A low-cost archival storage class designed for high durability and infrequent access. Suitable for data that is accessed less than once per year; stored data is subject to a 90-day minimum storage duration. There are costs associated with retrieving data stored in Coldline Storage.

Regional Managed Instance group

A managed instance group that has deployed instances across a region. Regional managed instance groups are recommended because that configuration spreads the workload across zones, increasing resiliency.

Zone Managed Instance group

A managed instance group that has deployed instances in a single zone.

Apigee API Platform

A management service for GCP customers providing API access to their applications. Allows developers to deploy, monitor, and secure their APIs. It also generates API proxies based on the OpenAPI specification. It is difficult to predict load on an API, and sometimes spikes can occur. For those times, the Apigee API platform provides routing and rate-limiting based on policies customers can define. APIs can be authenticated using either OAuth 2.0 or SAML. Data is encrypted both in transit and at rest in the Apigee API platform.

Cloud Armor

A network security service that gives you the ability to allow or restrict access based on IP address, predefined rules to counter cross-site scripting attacks, the ability to counter SQL injection attacks, and the ability to define rules at both layer 3 (network) and layer 7 (application). It also allows and restricts access based on the geolocation of incoming traffic.

App Engine Code Viewer

Grants read-only access to all application configurations, settings, and deployed source code. The role name used in gcloud commands is roles/appengine.codeViewer.

Cloud Functions Events

A particular action that happens in Google Cloud, such as a file being uploaded to Cloud Storage or a message being written to a Pub/Sub topic. There are different kinds of actions associated with events. Currently, GCP supports events in five categories:
- Cloud Storage: Events include uploading, deleting, or archiving a file.
- Cloud Pub/Sub: Events include the publishing of a message.
- HTTP: Allows developers to invoke a function by making an HTTP request using POST, GET, PUT, DELETE, and OPTIONS calls.
- Firebase: Events are actions taken in the Firebase database, such as database triggers, remote configuration triggers, and authentication triggers.
- Stackdriver Logging: You can set up a function to respond to a change in Stackdriver Logging by forwarding log entries to a Pub/Sub topic and triggering a response from there.

Cloud Functions

A platform for running code in response to an event such as uploading a file to Cloud Storage or adding a message to the message queue. Cloud Functions work well when you need to respond to an event by running a short process coded in a function or by calling a longer-running application that might be running on a VM, managed cluster, or App Engine. This computing service is not designed to execute long-running code. Cloud Functions will automatically scale as load increases. Cloud Functions is often used to call other services, such as a third-party API or other GCP services, like natural language translation.

Kubernetes Pods

A pod is a logically single unit for providing a service. Containers are deployed and scaled as a unit. Pods contain at least one container; they usually run a single container but can run multiple containers. Multiple containers are used when two or more containers must share resources. Pods also use shared networking and storage across containers. Each pod gets a unique IP address and set of ports. Containers connect to a port. Multiple containers in a pod connect to different ports and can talk to each other on localhost. This structure is designed to support running one instance of an application within the cluster as a pod. A pod allows its containers to behave as if they are running on an isolated VM, sharing common storage, one IP address, and a set of ports. By doing this, you can deploy multiple instances of the same application, or different instances of different applications, on the same node or different nodes, without having to change their configuration. Pods treat containers as a single entity for management purposes. Pods are generally created in groups, called replicas. Pods support autoscaling and are considered ephemeral; they are expected to terminate.

Stackdriver Notification Channels

A policy can have one or more notification channels. Channels include email notifications as well as Slack, Google Cloud Console (mobile), and popular DevOps tools such as PagerDuty, HipChat, and Campfire. The documentation parameter is optional but recommended. The documentation will be included in notifications, which can help DevOps engineers understand the problem and provide information on how to resolve the issue.

Creating Stackdriver Policies to Monitor Metrics

A policy consists of conditions that determine when to issue an alert or notification, for example, when CPU utilization is greater than 80 percent for more than 5 minutes. Policies also include notification channels and optional documentation. The condition will check the CPU utilization status. It will be applied to VMs that match the filter criteria, for example, any VM with a label included in the filter. The filter criteria include VM features such as zone, region, project ID, instance ID, and labels. The Group By parameter allows you to group time series, or data that is produced at regular intervals and has a fixed format, for example, by zone, and aggregate the values so there are fewer time series to display. This is especially helpful, for example, if you want to have a group of VMs in a cluster appear as a single time series. Agents send data from monitored resources to Stackdriver in streams. To perform checks on the streamed data, the data points need to be aggregated at specific time intervals. For example, data points may be received every 20 seconds but for monitoring purposes, we check the average CPU utilization per minute. Consider a stream of CPU utilization metrics that come to Stackdriver over a 1-minute period. In addition to aligning time series, when you aggregate, you can specify a reducer, which is a function for combining values in a group of time series to produce a single value. The reducers include common statistics, such as sum, min, max, and count. You need to specify when a condition should trigger. This could be anytime you see a value that exceeds the specified threshold for an extended period of time. For example, you may only want to trigger an alert on CPU utilization if it is above a threshold for more than five minutes.

Roles

A role is a collection of permissions. Roles are granted to users by binding a user to a role. Permissions cannot be assigned directly to users; permissions are assigned to roles, and roles are granted to users. There are three types of roles in GCP:
- Primitive Roles
- Predefined Roles
- Custom Roles

Compute Engine

A service that allows users to create VMs, attach persistent storage to those VMs, and make use of other GCP services, like Cloud Storage. Discounts are applied when a VM runs for more than 25% of the month.

Stackdriver

A service that collects metrics, logs, and event data from applications and infrastructure and integrates the data so DevOps engineers can monitor, assess, and diagnose operational problems.

Cloud Interconnect

A set of GCP services for connecting your existing networks to the Google network. Cloud Interconnect offers two types of connections: Interconnect and Peering

Instance Group Template

A specification of a VM configuration, including machine type, boot disk image, zone, labels, and other properties of an instance. To create an instance group, you must first create an instance group template. You can specify an existing VM as the source of the instance template; GCP will use the n1-standard-1 machine type by default. Instance group templates can be created via gcloud or the console. Instance groups can contain instances in a single zone or across a region.
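
A minimal sketch of creating a template and then a managed instance group from it, assuming hypothetical names:

# create the template, then a managed group of three instances based on it
gcloud compute instance-templates create ace-template --machine-type=n1-standard-1
gcloud compute instance-groups managed create ace-group \
    --zone=us-central1-a \
    --template=ace-template \
    --size=3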

Startup Script in VMs

A startup script can be specified to run when an instance starts. In the console, copy the contents of the startup script into the script text box.
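
From the command line, a startup script can also be attached from a local file. A minimal sketch, assuming a hypothetical script startup.sh:

# run startup.sh when the instance boots
gcloud compute instances create ace-instance \
    --zone=us-central1-a \
    --metadata-from-file=startup-script=startup.sh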

Object Storage

A system that manages the use of storage in terms of objects or blobs. Usually these objects are files that are grouped into buckets. Each object is individually addressable by a URL. Object storage is not limited by the size of disks or SSDs attached to a server; objects can be uploaded without concern for the amount of space available on a disk. Multiple copies of objects are stored to improve availability and durability. In some cases, copies of objects may be stored in different regions to ensure availability even if a region becomes inaccessible. Object storage is serverless; there is no need to create VMs and attach storage to them. GCP's object storage, called Cloud Storage, is accessible from servers running on GCP and other devices with internet access. Access controls can be applied at the object level, which allows users of Cloud Storage to control which users can access and update objects. It takes longer to retrieve data from object storage than it does from block storage.

Cloud Marketplace

A tool for quickly deploying functional software packages on Google Cloud Platform. There's no need to manually configure the software, virtual machine instances, storage, or network settings. GCP updates the base images for these software packages to fix critical issues and vulnerabilities, but it doesn't update the software after it's been deployed. Fortunately, you'll have access to the deployed systems, so you can maintain them.

Cloud Pub/Sub Topic

A topic is a structure where applications can send messages.

Cloud Functions Triggers

A trigger is a way of responding to a Cloud Functions event.

Zone

A zone is a data center that may be composed of one or more closely coupled data centers. Zones are located within regions.

Custom Roles

Allow cloud administrators to create and administer their own roles. Custom roles can only be used at the project or organization levels and not the folder level. Custom roles are assembled using permissions defined in IAM. While you can use most permissions in a custom role, some are not available in custom roles.

Serverless Computing

Allows developers and application administrators to run their code in a computing environment that does not require setting up VMs or Kubernetes Clusters. Serverless Computing options in GCP include App Engine and Cloud Functions.

Cloud CDN

Allows users anywhere to request content from systems distributed in various regions. CDNs enable low-latency response to these requests by caching content on a set of endpoints across the globe. CDNs are especially important for sites with large amounts of static content and a global audience. News sites, for example, could use the Cloud CDN service to ensure fast response to requests from any point in the world.

Manual Scaling App Engine

Allows you to control scaling by specifying the number of instances to run.

Kubernetes Expose

Allows you to expose a service on a port.
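
A minimal sketch, assuming a hypothetical deployment ace-app listening on container port 8080:

# expose the deployment through a load balancer on port 80
kubectl expose deployment ace-app --type=LoadBalancer --port=80 --target-port=8080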

Image Family

Allows you to group images; when a family is specified, the latest nondeprecated image in the family is used.

Peering

Allows you to share data and network access between an on-premises data center and your VPC. There are several types of peering available with your VPC in GCP.

Sole Tenancy tab in VMs

Allows you to specify labels regarding sole tenancy for the server. If you need to ensure that your VMs run on a server with only your other VMs, then you can specify sole tenancy.

Boot Disk tab in VMs

Allows you to specify whether the boot disk should be deleted when the instance is deleted. You can also select how you would like to manage encryption keys for the boot disk. By default, Google manages those keys.

Labels in VMs

Along with descriptions, labels are often used to help manage your VMs and understand how they are being used. It is a best practice to include labels and a description for all VMs.

Cloud Memorystore

An in-memory cache service. A managed Redis service for caching frequently used data in memory. Cloud Memorystore allows users to specify the size of a cache while leaving administration tasks to Google. GCP ensures high availability, patching, and automatic failover.

APIs Explorer

An interactive tool that lets you easily try Google APIs using a browser. With the APIs Explorer, you can:
- Browse quickly through available APIs and versions.
- See methods available for each API and what parameters they support, along with inline documentation.
- Execute requests for any method and see responses in real time.
- Easily make authenticated and authorized API calls.

App Engine Components

App Engine Standard applications consist of four components:
- Application
- Service
- Version
- Instance
Each project can have one App Engine application. All resources associated with an App Engine app are created in the region specified when the app is created. Apps have at least one service, which is code executed in the App Engine environment. Multiple versions of an application's code can exist; this supports versioning of apps. A service can have multiple versions, and these are usually slightly different, with newer versions incorporating new features, bug fixes, and other changes relative to earlier versions. When a version executes, it creates an instance of the app. Services are typically structured to perform a single function, with complex applications made up of multiple services, known as microservices. One microservice may handle API requests for data access, while another microservice performs authentication and a third records data for billing purposes.

App Engine Structure

App Engine applications consist of services. Services provide a specific function, like computing sales tax in a retail web application or updating inventory as products are sold on a site. Services have versions, which allows multiple versions to run at one time. Each version of a service runs on an instance that is managed by App Engine. The number of instances used to provide an application depends on your configuration for the application and the current load on the application. Autoscaling is possible with dynamic instances. Resident instances on App Engine run continually and can be added or removed manually. GCP allows users to set up daily spending limits as well as create budgets and set alarms for costs.

Cloud Pub/Sub Subscription

Applications read messages from Pub/Sub by using a subscription.

Persistent Disk HDDs

Applications that require large amounts of persistent disk storage but can tolerate longer read and write times can use HDDs to meet their storage requirements.

Identities

Are abstractions of users of services, such as a human user. After an identity is authenticated by logging in, the authenticated user can access resources and perform operations based on the privileges granted to that identity.

Internal IP Addresses

Are accessible to only services in your internal GCP network.

Static IP Addresses

Are assigned for extended periods of time.

Ephemeral IP Addresses

Are attached to VMs and released when the VM is stopped.

Shielded Virtual Machines

Are configured to have additional security mechanisms that you can choose to run. They include the following:
- Secure Boot: Ensures that only authenticated OS software runs on the VM. It does this by checking the digital signatures of the software. If a signature check fails, the boot process will halt.
- Virtual Trusted Platform Module (vTPM): A TPM is a specialized computer chip designed to protect security resources, like keys and certificates.
- Integrity Monitoring: Uses a known good baseline of boot measurements to compare to recent boot measurements. If the check fails, then there is some difference between the baseline measurement and the current measurements.

Kubernetes Persistent Volumes

Are durable disks that are managed by Kubernetes and implemented using Compute Engine persistent disks.

Kubernetes Stateful Sets

Are like deployments, but they assign unique identifiers to pods. This enables Kubernetes to track which pod is used by which client and keeps them together. Stateful Sets are used when an application needs a unique network identifier or stable persistent storage.

Persistent Disk SSDs

Are often used for low-latency applications where persistent disk performance is important. SSDs cost more than HDDs.

Stackdriver Workspace

Are resources for monitoring and can support up to 100 monitored projects. Workspaces contain dashboards, alerting policies, group definitions, and notification checks.

Instance Groups

Are sets of VMs that are managed as a single entity and have the same configuration. Any gcloud or console command applied to an instance group is applied to all members of the instance group. Google provides two types of instance groups: managed and unmanaged instance groups.

Graphical Processing Unit (GPU)

Are used for math intensive applications such as visualizations and machine learning. GPUs perform math calculations and allow some work to be off-loaded from the CPU to the GPU.

SSH Keys in VMs

Are used to give users project-wide access to VMs. If you use project-wide SSH keys and do not want all project users to have access to a particular machine, you can block that behavior at the VM level.

Persistent SSDs

Are used when high throughput is important. SSDs provide consistent performance for both random access and sequential access patterns.

Deploying and Managing Bigtable

As a Cloud Engineer, you may need to create a Bigtable cluster (a set of servers running Bigtable services) as well as create tables, add data, and query that data. Much of the work you will do with Bigtable is done at the command line. To get started, open a Cloud Shell browser and install the cbt command-line tool.

Bigtable requires an environment variable called instance to be set by including it in a cbt configuration file called .cbtrc, which is kept in the home directory. For example, to set the instance to ace-exam-bigtable, enter this command at the command-line prompt:

echo instance = ace-exam-bigtable >> ~/.cbtrc

Now cbt commands will operate on that instance. To create a table, issue a command such as this:

cbt createtable ace-exam-bt-table

The ls command lists tables:

cbt ls

Tables contain columns, but Bigtable also has the concept of column families. To create a column family called colfam1, use the following command:

cbt createfamily ace-exam-bt-table colfam1

To set the value of the cell with the column colfam1:col1 in a row called row1, use the following command:

cbt set ace-exam-bt-table row1 colfam1:col1=ace-exam-value

To display the contents of a table, use a read command such as this:

cbt read ace-exam-bt-table

Command for creating a Shared Virtual Private Cloud (VPC)

Before executing commands to create a shared VPC, you will need to assign an org member the Shared VPC Admin role at the organization level or the folder level. To assign the Shared VPC Admin role, which uses the descriptor roles/compute.xpnAdmin, you would issue this command:

gcloud organizations add-iam-policy-binding [ORG_ID] --member='user:[EMAIL_ADDRESS]' --role='roles/compute.xpnAdmin'

Once you have set the Shared VPC Admin role at the organization level, you can issue the shared-vpc command:

gcloud compute shared-vpc enable [HOST_PROJECT_ID]

Configuring BigQuery

BigQuery is a managed analytics service, which provides storage plus query, statistical, and machine learning analysis tools. BigQuery does not require you to configure instances. The first task when using BigQuery is to create a data set to hold data. You do this by clicking Create Dataset to display a form. When creating a dataset, you will have to specify a name and select a region in which to store it. Not all regions support BigQuery.

Importing and Exporting Data to Big Query

BigQuery users can export and import tables using Cloud Console and the command line. Via the console you have two export options: Google Cloud Storage or Data Studio, which is an analysis tool in GCP. Selecting Cloud Storage displays a form where you enter the bucket name to store the export file. You choose a file format; the options are CSV, Avro, and JSON. You also choose a compression type; the options are None or Gzip for CSV, and deflate or snappy for Avro.
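
The same export can be run from the command line with the bq tool. A minimal sketch, assuming a hypothetical dataset, table, and bucket:

# export a table to Cloud Storage as gzipped CSV
bq extract --destination_format=CSV --compression=GZIP \
    'ace_dataset.ace_table' gs://ace-exam-bucket/export.csv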

Cloud Functions Limits

By default, functions will time out after one minute, although you can set the timeout for as long as 9 minutes. Cloud Functions can have between 128MB and 2GB of memory allocated. The default is 256MB.

Kubernetes Node

Kubernetes nodes are Compute Engine VMs that are part of a cluster. The default VM type is n1-standard-1, but you can specify a different machine type when creating the cluster.

Google API Client Libraries

Open source and generated. They support various languages: Java, Python, JavaScript, PHP, .NET, Go, Node.js, Ruby, Objective-C, and Dart.

Managing IP Addresses

CIDR blocks define a range of IP addresses that are available for use in a subnet. If you need to increase the number of addresses available, for example, if you need to expand the size of clusters running in a subnet, you can use the gcloud compute networks subnets expand-ip-range command. For example, to increase the number of addresses in ace-exam-subnet1 to 65,536, you set the prefix length to 16:

gcloud compute networks subnets expand-ip-range ace-exam-subnet1 --prefix-length 16

The expand-ip-range command is used only to increase the number of addresses; you cannot decrease them. To shrink a range, you would have to re-create the subnet with a smaller number of addresses.

Creating a Virtual Private Network (VPC) with Subnets

GCP automatically creates a VPC when you create a project. You can create additional VPCs and modify the VPCs created by GCP. VPCs are global resources, so they are not tied to a specific region or zone. VPCs contain subnetworks, called subnets, which are regional resources. Subnets have a range of IP addresses used to communicate with each other and with Google APIs and services. You can also create a shared VPC within an organization; the shared VPC is hosted in a common project. Users in other projects who have sufficient permissions can create resources in the shared VPC.
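
A minimal sketch of creating a custom-mode VPC and one subnet, assuming hypothetical names and ranges:

# create the VPC, then add a regional subnet with a CIDR range
gcloud compute networks create ace-vpc --subnet-mode=custom
gcloud compute networks subnets create ace-subnet1 \
    --network=ace-vpc \
    --region=us-west2 \
    --range=10.10.0.0/16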

Caches

Caches are in-memory data stores that maintain fast access to data. The latency of in-memory data stores is designed to be submillisecond. Caches are quite helpful when you need to keep read latency to a minimum in your application. Memory in a cache is more expensive than SSD or HDD storage. Caches are volatile; you lose the data stored in the cache when power is lost or the OS is rebooted. A cache should never be used as the only data source for storing data; some form of persistent storage should be used to maintain a data store that always has the latest and most accurate version of the data. Caches can get out of sync with the system of truth; this happens when the system of truth is updated but the new data is not written to the cache.

Autoscalers

Can add/remove VMs from a cluster based on the workload, this is called autoscaling. This helps control costs by not running more VMs than needed and also ensures that sufficient computing capacity is available when workloads increase.

Cloud Storage: Lifecycle Management

Can automatically manage objects based on policies you define. For example, you could define a policy that moves all objects more than 60 days old in a bucket to Nearline Storage, or deletes any object in a Coldline Storage bucket that is older than 5 years. Lifecycle management policies are applied to buckets and affect all objects in the bucket. You can delete an object or change its storage class. Both unversioned and versioned objects can be deleted. If the live version of a file is deleted, then instead of actually being deleted, the object is archived. If an archived version of an object is deleted, the object is permanently deleted. Multiregional and regional storage objects can be changed to Nearline or Coldline. Nearline can be changed only to Coldline.
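
A minimal sketch of applying such a policy with gsutil, assuming a hypothetical bucket; the rule moves objects older than 60 days to Nearline:

# write the policy to a file, then apply it to the bucket
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 60}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://ace-exam-bucket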

Firewall Rules

Can be configured to limit inbound and outbound traffic to the IP address of the application server or load balancer in front of the application cluster.

External IP Addresses

Can be either static or ephemeral. Static addresses are assigned to a device for extended periods of time. Ephemeral external IP addresses are attached to VMs and released when the VM is stopped.

Specialized Services

Can be used as building blocks of applications or as part of a workflow for processing data. Specialized services commonly are serverless, provide a specific function such as translating text or analyzing images, and provide an API to access the functionality of the service. Some of the specialized services in GCP are:
- AutoML, a machine learning service
- Cloud Natural Language, a service for analyzing text
- Cloud Vision, a service for analyzing images
Specialized services encapsulate advanced computing capabilities and make them accessible to developers who are not experts in the domains provided.

Billing Account Creator

Can create new self-service billing accounts.

Importing and Exporting Data for Cloud Bigtable

Cloud Bigtable does not have an Export and Import option in the Cloud Console or in gcloud. You have two options: using a Java application for importing and exporting, or using the HBase interface to execute HBase commands. To export a Bigtable table, you will need to download a JAR file, which is a compiled program for the Java VM. To import data, you can use the same JAR file, but you will need to specify import instead of export in the command.

Configuring Cloud DNS

Cloud DNS is a Google service that provides domain name resolution. At the most basic level, DNS services map domain names, such as example.com, to IP addresses, such as 35.20.24.107. A managed zone contains DNS records associated with a DNS name suffix, such as aceexamdns1.com. DNS records contain specific details about a zone. For example, an A record maps a hostname to IP addresses in IPv4. AAAA records are used in IPv6 to map names to IPv6 addresses. CNAME records hold the canonical name, which is used to define alias names for a domain.
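
A minimal sketch of creating a zone and an A record from the command line, assuming hypothetical names and addresses:

# create a managed zone for the domain
gcloud dns managed-zones create ace-zone \
    --dns-name=aceexamdns1.com. \
    --description="Ace exam zone"
# add an A record inside a record-set transaction
gcloud dns record-sets transaction start --zone=ace-zone
gcloud dns record-sets transaction add --zone=ace-zone \
    --name=www.aceexamdns1.com. --ttl=300 --type=A 35.20.24.107
gcloud dns record-sets transaction execute --zone=ace-zone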

Deploying and Managing Cloud Dataproc

Cloud Dataproc is Google's managed Apache Spark and Apache Hadoop service. Spark supports analysis and machine learning, while Hadoop is well suited to batch, big data applications. To create a cluster, navigate to the Dataproc part of Cloud Console and fill in the Create Cluster form. You will need to specify the name of the cluster and a region and zone. You'll also need to specify the cluster mode, which can be single node, standard, or high availability. Single node is useful for development. Standard has only one master node, so if it fails, the cluster becomes inaccessible. The high availability mode uses three masters. You will also need to specify the machine configuration information for the master nodes and the worker nodes, including CPUs, memory, and disk information. The cluster mode determines the number of master nodes, but you can choose the number of worker nodes. If you expand the list of advanced options, you can indicate that you'd like to use preemptible VMs and specify a number of preemptible VMs to run. In addition to using the console, you can create a cluster using the gcloud dataproc clusters command. Here's an example:

gcloud dataproc clusters create cluster-bc3d --zone us-west2-a

You use the gcloud dataproc jobs command to submit jobs from the command line. Here's an example:

gcloud dataproc jobs submit spark --cluster cluster-bc3d --jar ace_exam_jar.jar

This will submit a job running the ace_exam_jar.jar program on the cluster-bc3d cluster.

Importing and Exporting Data for Cloud Dataproc

Cloud Dataproc is not a database like Cloud SQL or Bigtable; rather, it is a data analysis platform. Cloud Dataproc is not designed to be a persistent store of data. For that you should use Cloud Storage or persistent disks to store the data files you want to analyze. Cloud Dataproc does have Import and Export commands to save and restore cluster configuration data. The command to export a Dataproc cluster configuration is as follows:

gcloud dataproc clusters export [CLUSTER_NAME] --destination=[PATH_TO_EXPORT_FILE]

To import a configuration file, use the import command:

gcloud dataproc clusters import [SOURCE_FILE]

Cloud Debug

Cloud Debug is an application debugger for inspecting the state of a running program. Cloud Debug allows developers to insert log statements or take snapshots of the state of an application. The service is enabled by default on App Engine and can be enabled for Compute Engine and Kubernetes Engine. In the interface, you can click a line of code to have a snapshot taken when that line executes. You can also inject a logpoint, which is a log statement that is written to the log when the statement executes. Cloud Debug is used to take snapshots of the status of a program while it executes, and log points allow developers to inject log messages on the fly without altering source code.

Configuring Cloud Firestore

Cloud Firestore is a managed database service that does not require you to configure instances. You do, however, have to choose a data storage system. The options include using Datastore, using Firestore in Datastore mode (which uses the Datastore storage system), or using Firestore in native mode. New Firestore users should use Firestore in native mode. After selecting the storage system, you will be prompted to select a location for the database.

Cloud Functions Use Cases

Cloud Functions is well suited to short-running, event-based processing. If your workloads upload, modify, or otherwise alter files in Cloud Storage or use message queues to send work between services, then the Cloud Functions service is a good option for running code that starts the next steps in processing. Some application areas that fit this pattern include the following:
- Internet of Things (IoT), in which a sensor or other device can send information about the state of a sensor.
- Mobile applications that, like IoT apps, send data to the cloud for processing.
- Asynchronous workflows in which each step starts at some time after the previous step completes, but there are no assumptions about when the processing steps will complete.

Deploying a Solution Using Cloud Marketplace

Cloud Marketplace is a central repository of applications and data sets that can be deployed to your GCP environment. Working with the Cloud Marketplace is a two-step process: browsing for a solution that fits your needs and then deploying the solution.

Backing Up MySQL in Cloud SQL

Cloud SQL enables both on-demand and automatic backups. To create an on-demand backup, click the name of the instance on the Instances page in the console, then click the Backups tab to display the Create Backup option. You can also create an on-demand backup using the gcloud sql backups command, which has this form:

gcloud sql backups create --async --instance [INSTANCE_NAME]

You can also have Cloud SQL automatically create backups. From the console, navigate to the Cloud SQL Instances page, click the name of the instance, and then click Edit Instance. Open the Enable Auto Backups section and fill in the details of when to create the backups. You must specify a time range for when automatic backups should occur. You can also enable binary logging, which is needed for more advanced features, such as point-in-time recovery. To enable automatic backups from the command line, use the gcloud command:

gcloud sql instances patch [INSTANCE_NAME] --backup-start-time [HH:MM]

Relational Storage in GCP

Cloud SQL, Cloud Spanner, and BigQuery. Relational databases support frequent queries and updates to data. Relational databases, like Cloud SQL and Cloud Spanner, support database transactions. Cloud SQL and Cloud Spanner are used when data is structured and modeled for relational databases. Cloud SQL is used for databases that do not need to scale horizontally; those databases scale vertically, that is, by running on servers with more memory and more CPU. Cloud Spanner is used when you have extremely large volumes of relational data or data that needs to be globally distributed while ensuring consistency and transaction integrity across all servers. Large enterprises often use Cloud Spanner for applications like global supply chains and financial services applications, while Cloud SQL is often used for web applications, business intelligence, and ecommerce applications. BigQuery is a service designed for data warehouse and analytic applications. BigQuery is designed to store petabytes of data. BigQuery works with large numbers of rows and columns of data and is not suitable for transaction-oriented applications, such as ecommerce or support for interactive web applications.

Importing and Exporting Data to Cloud Spanner

Cloud Spanner users can import and export data using Cloud Console. Click Export to show the Export form; you will need to enter a destination bucket, the database to export, and a region to run the job. Note that you need to confirm that there will be charges for running Cloud Dataflow and that there may be data egress charges for data sent between regions. To import data, click the Import tab to display the Import form. You will need to specify a source bucket, a destination database, and a region to run the job. Cloud Spanner does not have a gcloud command to export data, but you can use Dataflow to export data.

Object Storage in GCP

Cloud Storage. Object storage is used when you need to store large volumes of data and do not need fine-grained access to data within an object while it is in the object store. This data model is well suited to archived data, machine learning training data, and old Internet of Things (IoT) data that needs to be saved but is no longer actively analyzed.

Cloud Storage

Cloud Storage is GCP's object storage system. Objects can be any type of file or binary large object. Objects are organized into buckets. It is important to remember that buckets share a global namespace, so each bucket name must be globally unique. Cloud Storage is not a file system; it is a service that receives, stores, and retrieves files or objects from a distributed storage system. Cloud Storage is not part of a VM in the way an attached persistent disk is; it is accessible from a VM or any other network device with appropriate privileges. Each stored object in Cloud Storage is uniquely addressable by a URL. GCP users and others can be granted permission to read and write objects to a bucket. Cloud Storage is useful for storing objects that are treated as single units of data. For example, an image file is a good candidate for object storage. If you write or retrieve an object all at once and you need to store it independently of servers that may or may not be running at any time, then Cloud Storage is a good option. There are different classes of Cloud Storage: Regional, Multiregional, Nearline, and Coldline. Both regional and multiregional storage are used for frequently used data.

Cloud Functions Receiving Events from Cloud Storage

Cloud Storage is GCP's object storage. This service allows you to store files in containers known as buckets. When files are created, deleted, or archived, or their metadata changes, an event can invoke a function. The Cloud Functions API needs to be enabled before a Cloud Function can be created.
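
A minimal sketch of deploying a function triggered by new objects in a bucket, assuming a hypothetical function storage_handler and bucket ace-exam-bucket:

# invoke the function whenever an object is created or overwritten in the bucket
gcloud functions deploy storage_handler \
    --runtime python37 \
    --trigger-bucket ace-exam-bucket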

Cloud Trace

Cloud Trace is a distributed tracing system for collecting latency data from an application. This helps developers understand where applications are spending their time and to identify cases where performance is degrading. From the Cloud Trace console, you can list traces generated by applications running in a project. Traces are generated when developers specifically call Cloud Trace from their applications. Cloud Trace is a distributed tracing application that helps developers and DevOps engineers identify sections of code that are performance bottlenecks.

Container Manager

Coordinates containers running on the same server within Kubernetes Clusters. Ensures isolation between running containers.

Kubernetes Node Pools

Collection of nodes that all have the same configuration. They are instance groups in a Kubernetes Cluster.

Cloud Client Libraries

Community-owned, hand-crafted client libraries.

Use Cases for Compute Engine Virtual Machines

Compute Engine is a good option for when you need maximum control over VM instances. With Compute Engine you can do the following:
- Choose the specific image to run on the instance
- Install software packages or custom libraries
- Have fine-grained control over which users have permissions on the instance
- Have control over SSL certificates and firewall rules for the instance
Compute Engine provides the least amount of management by GCP relative to the other computing services in GCP.

Managed Instance Groups

Consist of groups of identical VMs. They are created using an instance template, which is a specification of a VM configuration including machine type, boot disk image, zone, labels, and other properties of an instance. Managed instance groups can automatically scale the number of instances in a group and can be used with load balancing to distribute workloads across the instance group. If an instance in a group crashes, it will be re-created automatically. Managed instance groups are the preferred type of instance group.

Constraints of Resources

Constraints are restrictions on services. GCP has list constraints and Boolean constraints. List constraints are lists of values that are allowed or disallowed for a resource. For example:
- Allow a specific set of values
- Deny a specific set of values
- Deny a value and all its child values
- Allow all allowed values
- Deny all values
Boolean constraints evaluate to true or false and determine whether the constraint is applied or not. For example, if you want to deny access to serial ports on VMs, you can set constraints/compute.disableSerialPortAccess to True.

Containers

Containers are like lightweight VMs that isolate processes running in one container from processes running in another container on the same server. Containers are good options when you need to run applications that depend on multiple microservices running in your environment. The services are deployed through containers, and GCP takes care of monitoring, networking, and some security management tasks. Containers can start and stop in seconds and use fewer resources than VMs.

Custom Images

Custom Images are especially useful if you have to configure an operating system and install additional software on each instance of a VM that you run. Instead of repeatedly configuring and installing software for each instance, you could configure and install once and then create a custom image from the boot disk of the instance.

Custom Machine Types

Custom machine types can have between 1 and 96 vCPUs and up to 6.5 GB of memory per vCPU. The price of a custom machine type is based on the number of vCPUs and the memory allocated.

NoSQL Storage in GCP

Datastore, Cloud Firestore, and Bigtable. NoSQL databases do not use the relational model and do not require a fixed structure or schema.

Configuring Cloud Datastore

Datastore, like BigQuery, is a managed service that does not require you to specify node configurations. When creating an entity, you specify a namespace, which is a way to group entities, much like schemas group tables in a relational database. You will need to specify a kind, which is analogous to a table in a relational database. Each entity requires a key, which can be an auto-generated numeric key or a custom-defined key. Next, you will add one or more properties that have names, types, and values. Types include string, date and time, Boolean, and other structured types like arrays.

Deleting a VM

Deleting a VM removes it from the Cloud Console and releases resources, like the storage used to keep the VM image when stopped.

Kubernetes Deployment

Deployments are sets of identical pods. The members of the set may change as some pods are terminated and others are started, but they are all running the same application. The pods all run the same application because they are created using the same pod template. Deployments are well suited to stateless applications, which are applications that do not need to keep track of their state. For example, an application that calls an API to perform a calculation on the input values does not need to keep track of previous calls or calculations.
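
A minimal sketch of creating and scaling a deployment with kubectl, assuming a hypothetical image in Container Registry:

# create a deployment from an image, then scale it to five replicas
kubectl create deployment ace-app --image=gcr.io/ace-exam-project/ace-app:v1
kubectl scale deployment ace-app --replicas=5
kubectl get pods   # each replica runs as a pod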

Cloud Bigtable

Designed for petabyte-scale applications that can manage up to billions of rows and thousands of columns. It is based on a NoSQL model known as the wide-column data model. Wide-column databases, as the name implies, store tables that can have a large number of columns. Not all rows need to use all columns, so in that way it is like Datastore; neither requires a fixed schema to structure the data. Bigtable is suited for applications that require low-latency write and read operations. It is designed to support millions of operations per second. Bigtable integrates with other Google Cloud services, such as Cloud Storage, Cloud Pub/Sub, Cloud Dataflow, and Cloud Dataproc. It also supports the HBase API, which is an API for data access in the Hadoop big data ecosystem. Bigtable also integrates with open source tools for data processing, graph analysis, and time-series analysis. Bigtable runs in clusters and scales horizontally. Bigtable is designed for applications with high data volumes and a high-velocity ingest of data. Time series, IoT, and financial applications all fall into this category.

Networking

Each network-accessible device or service in your environment will need an IP address. Devices within GCP can have both internal and external addresses.

Billing Account User

Enables a user to link projects to billing accounts.

Billing Account Viewer

Enables a user to view billing account costs and transactions.

Kubernetes Nodes

Execute the workloads that run on a Kubernetes cluster. Nodes are VMs that run containers configured to run an application. Nodes are primarily controlled by the cluster master, but some commands can be run manually. The nodes run an agent called the kubelet, which is the service that communicates with the cluster master.

Ephemeral Disk

Exist and store data only as long as the VM is running. They store OS files and other files and data that are deleted when the VM is shut down.

External Load Balancers

External load balancers distribute traffic from the Internet. The HTTP(S), SSL Proxy, TCP Proxy, and Network TCP/UDP load balancers are all external.

File Storage

File Storage services provide a hierarchical storage system for files. GCP has a file storage service called Cloud Filestore. File Storage is suitable for applications that require operating system-like file access to files. The file storage system decouples the file system from specific VMs. The file system, its directories, and its files exist independent of VMs or applications.

Creating Firewall Rules for a Virtual Private Cloud (VPC)

Firewall rules are defined at the network level and used to control the flow of network traffic to VMs. Firewall rules allow or deny a kind of traffic on a port. It is important to note that firewall rules are stateful: if traffic is allowed in one direction and a connection established, it is also allowed in the other direction. So if a connection is allowed, like establishing an SSH connection on port 22, then all later traffic matching this rule is permitted as long as the connection is active. An active connection is one with at least one packet exchanged every ten minutes.

Load Balancing and Autoscaling on Instance Groups

GCP offers a number of types of load balancing, all of which require the use of an instance group. Managed instance groups can be configured to autoscale. You can configure an autoscaling policy to trigger adding or removing instances based on CPU utilization, monitoring metrics, load-balancing capacity, or queue-based workloads.

Organization Policies

GCP provides an Organization Policy Service. This service controls access to an organization's resources. The Organization Policy Service lets you specify limits on the ways resources can be used. Organization policies are defined in terms of constraints on a resource.
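As an illustration, assuming a hypothetical organization ID, you can list the organization policies currently in effect with: gcloud resource-manager org-policies list --organization=[ORGANIZATION_ID]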

Structure of Firewall Rules

Firewall rules consist of several components:
- Direction: Either ingress or egress.
- Priority: Highest-priority rules are applied. Priority is specified by an integer from 0 to 65535; 0 is the highest priority, and 65535 is the lowest.
- Action: Either allow or deny. Only one can be chosen.
- Target: An instance to which the rule applies. Targets can be all instances in a network, instances with particular network tags, or instances using a specific service account.
- Source/destination: Source applies to ingress rules and specifies source IP ranges, instances with particular network tags, or instances using a particular service account. You can also use combinations of source IP ranges and network tags, and combinations of source IP ranges and service accounts used by instances. The IP address 0.0.0.0/0 indicates any IP address. The destination parameter uses only IP ranges.
- Protocol and port: A network protocol such as TCP, UDP, or ICMP and a port number. If no protocol is specified, then the rule applies to all protocols.
- Enforcement status: Firewall rules are either enabled or disabled. Disabled rules are not applied even if they match; disabling a rule is sometimes used to troubleshoot problems with traffic getting through when it should not or not getting through when it should.
All VPCs start with two implied rules: one allows egress traffic to all destinations, and one denies all incoming traffic from any source. You cannot delete an implied rule.

Deploying an App Engine Application using Cloud Shell and SDK

First, you will work in a terminal window using Cloud Shell. Make sure gcloud is configured to work with App Engine by using the following command: gcloud components install app-engine-python This will install or update the App Engine Python library as needed. Next, change your working directory to the directory containing the app you are deploying. You can assign a custom domain URL for the app on the App Engine Settings page.
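Once configured, the deployment itself is run from the app's directory; a minimal sketch, assuming an app.yaml in the current directory and a hypothetical project ID: gcloud app deploy app.yaml --project=ace-exam-project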

Configuring Cloud Bigtable

From Cloud Console, navigate to Bigtable and click Create Instance. This will display a form where you will need to provide an instance name and an instance ID. Next, choose between production and development mode. Production clusters have a minimum of three nodes and provide high availability. Development mode uses low-cost instances without replication or high availability. You will also need to choose either SSD or HDD for the persistent disks used by the database. Bigtable can support multiple clusters. For each cluster you will need to specify a cluster ID, a region and zone location, and the number of nodes in the cluster. The cluster can be replicated to improve availability.

Creating and Connecting to a MySQL Instance

From the console, navigate to SQL and click Create Instance. Choose MySQL and select the Second Generation instance type. You will specify an instance ID, a root password, and a location, such as region and zone. After the database is created, you can connect by starting Cloud Shell and using the gcloud sql connect command. It is a good practice not to specify a password on the command line. To connect to the instance called ace-exam-mysql, use the following command: gcloud sql connect ace-exam-mysql --user=root
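For comparison, a minimal sketch of creating a similar instance from the command line, with an assumed tier and region chosen only for illustration: gcloud sql instances create ace-exam-mysql --tier=db-n1-standard-1 --region=us-central1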

Cloud Functions Runtime Environment

Functions run in their own environment. Each time a function is invoked, it is run in a separate instance from all other invocations. There is no way to share information between invocations of functions using only Cloud Functions. If you need to coordinate the updating of data, such as keeping a global count, or need to keep information about the state of functions, such as the name of the last event processed, then you should use a database, such as Cloud Datastore, or a file in Cloud Storage.

Data Analytics

GCP has a number of services designed for analyzing big data in batch and streaming modes:
- BigQuery - A petabyte-scale analytics database service for data warehousing.
- Cloud Dataflow - A framework for defining batch and stream processing pipelines.
- Cloud Dataproc - A managed Hadoop and Spark service.
- Cloud Dataprep - A service that allows analysts to explore and prepare data for analysis.

Types of Storage Systems

GCP has several storage services, including the following: - A managed Redis cache service for caching - Persistent disk storage for use with VMs - Object storage for shared access to files across resources - Archival storage for long-term, infrequent access requirements

Cloud Functions Execution Environment

GCP manages everything that is needed to execute your code in a secure, isolated environment. Compute resources scale as needed to run as many instances of Cloud Functions as needed, without your having to do anything to control the scaling. The execution of one function is independent of all others, and the lifecycles of Cloud Functions are not dependent on each other.

Memorystore

GCP offers Memorystore, a managed Redis service. Caches are usually used with an application that cannot tolerate long latencies when retrieving data. For example, an application that has to read from a hard disk drive might have to wait 80 times longer than if the data were read from an in-memory cache. When you use Memorystore, you create instances that run Redis. The instance is configured with 1GB to 300GB of memory. It can also be configured for high availability, in which case Memorystore creates failover replicas.

Databases

GCP provides several database options: some are relational databases and some are NoSQL databases. Some are serverless and others require users to manage clusters of servers. GCP users must understand their application requirements before choosing a service, and this is especially important when choosing a database.

Identity Management

GCP's IAM service enables customers to define fine-grained access controls on resources in the cloud. IAM uses the concepts of users, roles, and privileges.

Cloud SQL

GCP's managed relational database service that allows users to set up MySQL or PostgreSQL databases on VMs without having to attend to database administration tasks, such as backing up a database or patching database software. Includes management of replication and allows for automatic failover, providing for highly available databases. Relational databases are well suited to applications with relatively consistent data structure requirements, for example, a banking database that tracks account numbers, customer names, addresses, and so on.

Global Load Balancers

Global load balancers are used when an application is globally distributed. There are three global load balancers:
- HTTP(S), which balances HTTP and HTTPS load across a set of backend instances.
- SSL Proxy, which terminates SSL/TLS (secure socket layer) connections. This type is used for non-HTTPS traffic.
- TCP Proxy, which terminates TCP sessions at the load balancer and then forwards traffic to backend servers.

VPC Global Routing

Global routing will enable Google Cloud Routers to learn routes on all subnetworks in the VPC.
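As an illustration, assuming an existing VPC named ace-exam-vpc1, global routing can be enabled with: gcloud compute networks update ace-exam-vpc1 --bgp-routing-mode=global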

Cloud Load Balancing

Google provides global load balancing to distribute workloads across your cloud infrastructure. Using a single anycast IP address, Cloud Load Balancing can distribute the workload within and across regions, adapt to failed or degraded servers, and autoscale your compute resources to accommodate changes in workload. Cloud Load Balancing also supports internal load balancing, so no IP addresses need to be exposed to the Internet to get the advantages of load balancing. Cloud Load Balancing can load-balance HTTP, HTTPS, TCP/SSL, and UDP traffic.

Cloud Spanner

Google's globally distributed relational database that combines the key benefits of relational databases, such as strong consistency and transactions, with the ability to scale horizontally like a NoSQL database. Spanner is a highly available database with a 99.999% availability SLA, making it a good option for enterprise applications that demand scalable, highly available relational database services. Cloud Spanner also has enterprise-grade security, with encryption at rest and encryption in transit, along with identity-based access controls. Cloud Spanner supports ANSI 2011 standard SQL.

App Engine Deployer

Grants read-only access to application configuration and settings and write access to create new versions. Users with only the App Engine Deployer role cannot modify or delete existing versions. The role name used in gcloud commands is roles/appengine.deployer.

App Engine Viewer

Grants read-only access to application configuration and settings. The role name used in gcloud commands is roles/appengine.appViewer.

App Engine Service Admin

Grants read-only access to configuration settings and write access to module-level and version-level settings. The role name used in gcloud commands is roles/appengine.serviceAdmin.

Persistent HDDs

HDDs have longer latencies but cost less, so HDDs are a good option when you are storing large amounts of data and performing batch operations that are less sensitive to disk latency than interactive applications.

Partner Interconnect

If an organization cannot achieve a direct interconnect with a Google facility, it could use Partner Interconnect. This service depends on a third-party network provider to provide connectivity between the company's data center and a Google facility.

Defining Custom IAM Roles

If the set of predefined IAM roles does not meet your needs, you can define a custom role. You can specify a name for the custom role, a description, an identifier, a launch stage, and a set of permissions. The launch stage options are as follows: Alpha, Beta, General Availability, and Disabled. You can click Add Permissions to display a list of permissions. Not all permissions are available for use in a custom role.

Creating Custom Stackdriver Metrics

If there is an application-specific metric you would like to monitor, you can create custom metrics. Custom metrics are like predefined metrics, except you create them. The names of custom metrics start with custom.googleapis.com/, so they are easy to recognize by name. There are two ways to create custom metrics: Using OpenCensus, an open source monitoring library (https://opencensus.io/) or using Stackdriver's Monitoring API. OpenCensus provides a higher-level, monitoring-focused API, while the Stackdriver Monitoring API is lower-level. When you define a custom metric, you will need to specify the following: - A type name that is unique within the project - A project - A display name and description - A metric kind, such as gauge, delta, or cumulative metric. Gauges are measures at a point in time, deltas capture the change over an interval, and cumulative are accumulated values over an interval. - Metric labels - Monitored resource objects to include with time series data points. These provide the context for a measurement. For example, you could include an application instance ID with an application-specific metric.

Splitting Traffic Between App Engine Versions

If you have more than one version of an application running, you can split traffic between the versions. App Engine provides three ways to split traffic: - IP address - HTTP cookie - Random selection.
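A sketch of splitting traffic from the command line, assuming a default service with two hypothetical versions v1 and v2: gcloud app services set-traffic default --splits=v1=0.6,v2=0.4 --split-by=ip The --split-by flag accepts ip, cookie, or random.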

Costs of Virtual Machines

If you want to track costs automatically, you can enable Cloud Platform billing and set up billing export. This will produce daily reports on the usage and cost of VMs. The following are the most important things to remember about VM costs: - VMs are billed in 1-second increments. - The cost is based on machine type; the more CPUs and memory used, the higher the cost. - Google offers discounts for sustained usage. - VMs are charged for a minimum of 1 minute of use. - Preemptible VMs can save you up to 80% of the cost of a VM.

Deployment Manager Template Files

If your deployment configurations are becoming complicated, you can use deployment templates. Templates can be written in Python or Jinja2, a templating language. Google recommends using Python to create template files unless the templates are relatively simple, in which case it is appropriate to use Jinja2.

Images

Images are similar to snapshots in that they are copies of disk contents. The difference is that snapshots are used to make data available on a disk, while images are used to create VMs. Images can be created from the following: - Disk - Snapshot - Cloud Storage file - Another image When you create an image, you specify a name, a description (optional), and labels (optional). Images have an optional attribute called Family, which allows you to group images. When a family is specified, the latest nondeprecated image in the family is used. After you have created an image, you can delete it or deprecate it only if it is a custom image, not a GCP-supplied image. Delete removes the image, while deprecate marks the image as no longer supported.

Importing and Exporting Data for Cloud Datastore

Importing and exporting data from Datastore is done through the command line. Datastore uses a namespace data structure to group entities that are exported. You will need to specify the name of the namespace used by the entities you are exporting.

Persistent Storage

In GCP, persistent disks provide durable block storage. Persistent disks can be attached to VMs in Google Compute Engine (GCE) and Google Kubernetes Engine (GKE). Persistent disks are not directly attached to physical servers hosting your VMs but are network accessible. VMs can have locally attached solid-state drives (SSDs), but the data on those drives is lost when the VM is terminated. The data on persistent disks continues to exist after VMs are shut down and terminated. Persistent disks exist independently of virtual machines.

Estimating the Cost of Queries in BigQuery

In the console, choose BigQuery from the main navigation menu to display the BigQuery query interface. You can then enter a query into the Query Editor. In the lower-right corner, BigQuery provides an estimate of how much data will be scanned. You can also get this estimate from the command line by using the bq command with the --dry_run option: bq --location=[LOCATION] query --use_legacy_sql=false --dry_run [SQL_QUERY] You can use this amount of data scanned with the Pricing Calculator to estimate the cost.

App Engine: Flexible Environment

In the flexible environment, you run Docker containers in the App Engine environment. With Docker files you can specify a base OS image, additional libraries and tools, and custom tools. The flexible environment works well in cases where you have application code but also need libraries or other third-party software installed. It is a good option when you can package your application and services into a small set of containers. These containers can be autoscaled according to load. As the name implies, the flexible environment gives you more options, including the ability to work with background processes and write to local disk. There will always be at least one container running with your service, and you will be charged for that time even if there is no load on the system.

App Engine: Standard Environment

In the standard environment, you run applications in a language-specific sandbox, so your application is isolated from the underlying server's OS as well as from other applications running on that server. The standard environment is well suited to applications that are written in one of the supported languages and do not need OS packages or other compiled software that would have to be installed along with the application code. Supported languages are Java, Python, PHP, Node.js, and Go. There are no running instances when there is no load.

Primitive Roles

Include Owner, Editor, and Viewer. These are the basic privileges that can be applied to most resources. It is a best practice to use predefined roles instead of primitive roles when possible. Primitive roles grant a wide range of permissions that may not always be needed by a user.

Kubernetes Engine

Is designed to allow users to easily run containerized applications on a cluster of servers (VMs). With containers, processes and resources are isolated using features of the host OS. With this approach, there is no need for a hypervisor as the host OS maintains isolation. A Container Manager is used and it coordinates containers running on the server. No additional, or guest OSs run on top of the container manager. Containers make use of the host OS functionality, while the OS and Container Manager ensure isolation between running containers. Kubernetes Engine allows users to describe the compute, storage, and memory resources they'd like to run their services. Kubernetes Engine then provisions the underlying resources. It's easy to add/remove resources from a Kubernetes Cluster using a command-line interface or graphical user interface. With Kubernetes, you can administer the cluster, specify policies such as autoscaling, and monitor cluster health.

Scaling App Engine Applications

Instances are created to execute an application on an App Engine managed server. App Engine can automatically add or remove instances as needed based on load. When instances are scaled based on load, they are called dynamic instances. These dynamic instances help optimize your costs by shutting down when demand is low. Alternatively, you can configure your instances to be resident or running all the time. These are optimized for performance so users will wait less while an instance is started. Your configuration determines whether an instance is resident or dynamic. If you configure auto scaling or basic scaling, then instances will be dynamic. If you configure manual scaling, then your instances will be resident.

Starting and Stopping Instances

Instances can be stopped in the Google Cloud Console or via gcloud with the following command: gcloud compute instances stop ace-instance-1 When an instance is stopped, it is no longer consuming compute resources, so you will not be charged. The instance still exists and can be started again when needed.
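A stopped instance can later be started again with: gcloud compute instances start ace-instance-1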

Virtual Machine Images

Instances run on images, which contain operating systems, libraries, and other code. You may choose to run a public image provided by Google; both Windows and Linux images are available. You can also run public images provided by open source projects or third-party vendors. In the event that a public image does not meet your needs, you can create a custom image from a boot disk or by starting with another image.

App Engine Resident Instances

Instances that are running all the time. These are optimized for performance so users will wait less while an instance is started.

App Engine Dynamic Instances

Instances that are scaled based on load.

Internal Load Balancers

Internal load balancers distribute traffic that originates within GCP. The Internal TCP/UDP load balancer is the only internal load balancer.

App Engine HTTP Cookies splitting

Is useful when you want to assign users to versions. The preferred way to split traffic is with a cookie. When you use a cookie, the HTTP request header for a cookie named GOOGAPPUID contains a hash value between 0 and 999. With cookie splitting, a user will access the same version of the app even if the user's IP address changes. If there is no GOOGAPPUID cookie, then traffic is routed randomly.

Viewing Jobs in BigQuery

Jobs in BigQuery are processes used to load, export, copy, and query data. To view the status of jobs, navigate to the BigQuery console and click Job History in the menu on the left. This will display a list of jobs and their status. You can also view the status of a BigQuery job by using the bq show command. For example: bq --location=US show -j gpace-project:US.bquijob_119adae7_167c373d5c3

Regional Storage

Keeps copies of objects in a single cloud region. Regional storage is well suited for applications that run in the same region and need low latency access to objects in Cloud Storage.

Metadata Tags in VMs

Key-value pairs associated with an instance. Are especially useful if you have a common script you want to run on startup or shut down but want the behavior of the script to vary according to some metadata values.

Kubernetes Engine Use Cases

Kubernetes Engine is a good choice for large-scale applications that require high availability and high reliability. Kubernetes Engine supports the concept of pods and deployment sets, which allow application developers and administrators to manage services as a logical unit. This can help if you have a set of services that support a user interface, another set that implements business logic, and a third set that provides backend services. Each of these different groups of services can have different lifecycles and scalability requirements.

Deploying Kubernetes Clusters

Kubernetes clusters can be deployed using Cloud Console, the command line in Cloud Shell, or your local environment if the Cloud SDK is installed. To use Kubernetes Engine, you will first need to enable the Kubernetes Engine API.
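As a sketch, the API can be enabled from the command line with: gcloud services enable container.googleapis.com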

Kubernetes Containers

Kubernetes deploys containers in pods. Containers within a single pod share storage and network resources. Containers within a pod share an IP address and port space. A pod is a single logical unit for providing a service. Containers are deployed and scaled as a unit.

Kubernetes Functionality

Kubernetes is designed to support clusters that run a variety of applications. Kubernetes provides the following functions: - Load Balancing across Compute Engine VMs that are deployed in a Kubernetes Cluster. - Automatic scaling of nodes (VMs) in the cluster. - Automatic upgrading of cluster software as needed. - Node monitoring and health repair. - Logging - Support for node pools, which are collections of nodes all with the same configuration.

Configuring Load Balancers

Load balancers can distribute load within a single region or across multiple regions. The several load balancers offered by GCP are characterized by three features:
- Global versus regional load balancing
- External versus internal load balancing
- Traffic type, such as HTTP and TCP
HTTP and HTTPS traffic needs to use external global load balancing. TCP traffic can use external global, external regional, or internal regional load balancers. UDP traffic can use either external regional or internal regional load balancing.

Managed Kubernetes Clusters

Managed clusters make use of containers. In a managed cluster you can specify the number of servers you'd like to run and the containers that should run on them. You can also specify autoscaling parameters to optimize the number of containers running. In a managed cluster, the health of containers is monitored for you. If a container fails, GCP will detect it and start another one for you.

Billing Account Administrator

Manages billing accounts but cannot create them.

Configuring Memorystore

Memorystore caches can be used with applications running in Compute Engine, App Engine, and Kubernetes Engine. To configure a Redis cache in Memorystore, you will need to specify an instance ID, a display name, and a Redis version. You can choose to have a replica in a different zone for high availability by selecting the Standard instance tier. The Basic instance tier does not include a replica but costs less. You will need to specify a region and zone along with the amount of memory to dedicate to your cache. The cache can be 1GB to 300GB in size. The Redis instance will be accessible from the default network unless you specify a different network. The advanced options for Memorystore allow you to assign labels and define an IP range from which the IP address will be assigned.

Creating Alerts Based on Resource Metrics

Metrics are defined measurements on a resource collected at regular intervals. Metrics return aggregate values, such as the maximum, minimum, or average value of the item measured, which could be CPU utilization, amount of memory used, or number of bytes written to a network interface. To monitor and collect metrics, you need to install the Stackdriver monitoring agent. Since you are installing the monitoring agent, you can install the logging agent at the same time because you'll need that later. VMs with agents installed collect monitoring and logging data and send it to Stackdriver. Stackdriver needs a Workspace to store the data. If a Workspace does not exist for your project, you will need to create one or add the project to an existing Workspace. Stackdriver will mail daily or weekly reports if you opt to have them emailed to you.

Cloud Storage for Firebase

Mobile app developers may find Cloud Storage for Firebase to be the best combination of cloud object storage and the ability to support uploads and downloads from mobile devices with sometimes unreliable network connections. The Cloud Storage for Firebase API is designed to provide secure transmission as well as robust recovery mechanisms to handle potentially problematic network quality. Once files, like photos or music recordings, are uploaded into Cloud Storage, you can access those files through the Cloud Storage command-line interface and software development kits (SDKs).

Configuring Cloud Spanner

Navigate to Cloud Spanner and select Create Instance. You will need to provide an instance name, instance ID, and number of nodes. You will also have to choose either a regional or multiregional configuration to determine where nodes and data are located. This will determine costs and replication storage location. If you select regional, you will choose from the list of available regions. Cloud Spanner is significantly more expensive than Cloud SQL or other database options.

Deploying Application Pods

Once you have created a cluster, you can deploy an application using pods. When you create a deployment, you can specify the following: - Container image - Environment variables - Initial command - Application name - Labels - Namespace - Cluster to deploy to The output of the deployment is always in YAML format.
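A minimal kubectl sketch of the same idea, assuming a hypothetical image pushed to Container Registry: kubectl create deployment ace-app --image=gcr.io/ace-exam-project/ace-app:v1 You can then set the number of pod replicas with: kubectl scale deployment ace-app --replicas=3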

Kubernetes High Availability

One way Kubernetes maintains cluster health is by shutting down pods that become starved for resources. Kubernetes supports eviction policies that set thresholds for resources. When a resource is consumed beyond the threshold, Kubernetes will start shutting down pods. Another way Kubernetes provides for high reliability is by running multiple identical pods. A group of running identical pods is called a deployment, and the identical pods are referred to as replicas. When deployments are rolled out, they can be in one of three states: - Progressing, which means the deployment is in the process of performing a task. - Completed, which means the rollout of containers is complete and all pods are running the latest version of containers. - Failed, which indicates the deployment process encountered a problem it could not recover from.

Folder

Organizations contain folders and folders contain other folders or other projects. A single folder may contain both folders and projects.

Persistent Disk

Persistent disks are a storage service attached to VMs in Compute Engine or Kubernetes Engine. They provide block storage on SSDs and HDDs. An advantage of persistent disks on GCP is that these disks support multiple readers without a degradation in performance, which allows multiple instances to read a single copy of data. Disks can also be resized as needed while in use, without restarting your VMs. A persistent disk continues to exist and store data even if it is detached from a virtual server or the server to which it is attached shuts down. Persistent disks are used when you want data to exist on a block storage device independent of the lifecycle of the VM, with support for fast OS and file system access.

Features of Persistent Storage

Persistent disks are available in SSD and hard disk drive (HDD) configurations. Persistent disks can be mounted on multiple VMs to provide multireader storage. Snapshots of disks can be created in minutes, so additional copies of data on a disk can be distributed for use by other VMs. If a disk created from a snapshot is mounted to a single VM, it can support both read and write operations. The size of persistent disks can be increased while mounted to a VM. Both SSD and HDD disks can be up to 64 TB. Persistent disks automatically encrypt data on the disk.

Kubernetes Pod Replicas

Pods are generally created in groups called replicas. Replicas are copies of pods and constitute a group of pods that are managed as a unit.

Policy Evaluation

Policies are inherited and cannot be disabled or overridden by objects lower in the hierarchy. Multiple policies can be in effect for a folder or project.

Project

Projects are the basis for enabling and using GCP services, such as managing APIs, enabling billing, adding and removing collaborators, and enabling other Google services. Each project is a separate compartment, and each resource belongs to exactly one project. Anyone with the resourcemanager.projects.create IAM permission can create a project. Your organization will have a quota of projects it can create. The quota varies based on typical use, the customer's usage history, and other factors decided by Google.

Load Balancers

Provide a single access point to a distributed backend. This is useful when you need to have high availability for your application. If one VM in your cluster fails, the workload can be directed to another VM in the cluster.

Predefined Roles

Provide granular access to resources in GCP, and they are specific to GCP products. By using predefined roles, you can grant only the permissions a user needs to perform their function. Predefined roles are grouped by service.

Platform as a Service (PaaS)

Provides a runtime environment to execute applications without the need to manage underlying servers, networks, and storage systems. App Engine and Cloud Functions are GCP's PaaS offerings.

Cloud Filestore

Provides a shared file system for use with Compute Engine and Kubernetes Engine. Filestore can provide high numbers of input-output operations per second (IOPS) as well as variable storage capacity. File system administrators can configure Cloud Filestore to meet their specific IOPS and capacity requirements. Filestore implements the Network File System (NFS) protocol so system administrators can easily mount shared file systems on virtual servers.

Multiregion Storage

Provides for storing replicas of objects in multiple Google Cloud regions, which is important for high availability, durability, and low latency. Allows for faster access to data when users or applications are distributed across regions.

App Engine IP address splitting

Provides some stickiness, so a client is always routed to the same split, at least as long as the IP address does not change. When using IP address splitting, App Engine creates a hash, that is, a number between 0 and 999, from the IP address of each request. This can create problems if users change IP addresses, such as when they start working with the app in an office and then switch to a network in a coffee shop. If state information is maintained in a version, it may not be available after an IP address change.

Source File for Python Cloud Functions

Python functions should be saved in a source file called main.py

Regional Load Balancers

Regional load balancers are used when resources providing an application are in a single region. The regional load balancers are as follows: - Internal TCP/UDP, which balances TCP/UDP traffic on private networks hosting internal VMs - Network TCP/UDP, which enables balancing based on IP protocol, address, and port. This load balancer is used for SSL and TCP traffic not supported by the SSL Proxy and TCP Proxy load balancers, respectively.

VPC Regional Routing

Regional routing will have Google Cloud Routers learn routes within the region.

Regional Persistent Disks

Replicate data blocks across two zones within a region but are more expensive than zonal storage.

Deletion Protection in VMs

Requires extra confirmation when someone tries to delete an instance.
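For example, a sketch of enabling this on an existing instance, assuming the instance name ace-instance-1: gcloud compute instances update ace-instance-1 --deletion-protection Use --no-deletion-protection to turn it off.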

Kubernetes Pod Statuses

- Running: indicates the pod is running.
- Pending: indicates the pod is downloading images.
- Succeeded: indicates the pod terminated successfully.
- Failed: indicates at least one container failed.
- Unknown: indicates the master cannot reach the node and the status cannot be determined.

Service Account Scopes

Scopes are permissions granted to a VM to perform some operation. Scopes authorize access to API methods. The service account assigned to a VM has roles associated with it. To configure access controls for a VM, you will need to configure both IAM roles and scopes. A scope is specified using a URL that starts with https://www.googleapis.com/auth/ and is followed by a permission on a resource. For example, the scope allowing a VM to insert data into BigQuery is as follows: https://www.googleapis.com/auth/bigquery.insertdata An instance can perform only operations allowed by both the IAM roles assigned to its service account and the scopes defined on the instance.

Creating DNS Managed Zones Using Cloud Console

Select a zone type, which can be public or private. Public zones are accessible from the Internet. These zones provide name servers that respond to queries from any source. Private zones provide name services to your GCP resources, such as VMs and load balancers. Private zones respond only to queries that originate from resources in the same project as the zone. You can enable DNSSEC, which is DNS security. It provides strong authentication of clients communicating with DNS services. DNSSEC is designed to prevent spoofing (a client appearing to be some other client) and cache poisoning (a client sending incorrect information to update the DNS server). If you choose to create a Private Zone, you will need to specify the networks that will have access to the private zone. When a zone is created, NS and SOA records are added. NS is a name server record that has the address of an authoritative server that manages the zone information. SOA is a start of authority record, which has authoritative information about the zone. You can add other records, such as A and CNAME records. The TTL, known as time to live, and TTL Unit parameters specify how long the record can live in a cache. DNS Forwarding is now available, which allows your DNS queries to be passed to an on-premise DNS server if you are using Cloud VPN or Interconnect.

Service Account

Service accounts are given to applications or VMs so that they can act on behalf of a user or perform operations that the user does not have permission to perform. Service accounts are treated as both resources and identities. There are two types of service accounts: user-managed service accounts and Google-managed service accounts. Users can create up to 100 service accounts per project. When you create a project that has the Compute Engine API enabled, a Compute Engine service account is created automatically and granted the editor role in the project where it is created. Service accounts can be managed as a group of accounts at the project level or at the individual service account level. Service accounts use an email address and cryptographic keys. You can change the permissions of a service account without having to re-create a VM.

Managing Service Accounts

Service accounts are used to provide identities independent of human users. Service accounts are identities that can be granted roles. Service accounts are assigned to VMs, which then use the permissions available to the service accounts to carry out tasks.

App Engine Services & Versions

Services are defined by their source code and their configuration file. The combination of those files constitutes a version of the app. If you slightly change the source code or configuration file, it creates another version. You can maintain multiple versions of your application at one time, which is especially helpful for testing new features on a small number of users before rolling the change out to all users. If bugs or other problems occur with a new version, you can easily roll back to an earlier version. An advantage of keeping multiple versions is that they allow you to migrate and split traffic.

Preemptible Virtual Machines

Short-lived compute instances suitable for running workloads for applications that perform financial modeling, rendering, big data, continuous integration, and web crawling operations. These VMs can persist for up to 24 hours. If an application is fault-tolerant and can withstand possible instance interruptions, then using preemptible VMs can help reduce Google Compute Engine costs significantly.

Unmanaged Instance Groups

Should be used only when you need to work with different configurations for VMs within a group.

Snapshots

Snapshots are copies of data on a persistent disk. You use snapshots to save data on a disk so you can restore it. This is a convenient way to make multiple persistent disks with the same data. The first time you create a snapshot, GCP will make a full copy of the data on the persistent disk. The next time you create a snapshot from that disk, GCP will copy only the data that has changed since the last snapshot. To work with snapshots, a user must be assigned the Compute Storage Admin role. You can add labels to a snapshot that may indicate the type of data on the disk and the application that uses the data. If you are making a snapshot of a disk on a Windows Server, check the Enable VSS box to create an application-consistent snapshot without having to shut down the instance.
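For example, a command-line sketch, assuming a hypothetical disk name and zone: gcloud compute disks snapshot ace-disk-1 --snapshot-names=ace-snapshot-1 --zone=us-central1-a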

Kubernetes Rolling Update

Specify parameters to control rolling updates to deployed code. The parameters include the minimum number of seconds to wait before considering the pod updated, the maximum number of pods above target size allowed, and the maximum number of unavailable pods.

Logging with Stackdriver

Stackdriver Logging is a service for collecting, storing, filtering, and viewing log and event data generated in GCP and in Amazon Web Services. Logging is a managed service, so you do not need to configure or deploy servers to use the service. Stackdriver Logging retains log data for 30 days. The process of copying data from Logging to a storage system is called exporting, and the location to which you write the log data is called a sink. You can create a log sink by navigating to the Logging section of Cloud Console and selecting the Exports option from the Logging menu. You specify a sink name and set the sink service to one of the following: - BigQuery - Cloud Storage - Cloud Pub/Sub - Custom Destination

Monitoring Kubernetes

Stackdriver is GCP's comprehensive monitoring, logging, and alerting product. It can be used to monitor Kubernetes clusters. When creating a cluster, be sure to enable Stackdriver monitoring and logging. Before you can monitor resources via Stackdriver, a workspace will need to be created.

Monitoring with Stackdriver

Stackdriver is a service for collecting performance metrics, logs, and event data from your resources. Metrics include measurements such as the average percent CPU utilization over the past minute and the number of bytes written to a storage device in the last minute.

Reserving IP Addresses

Static external IP addresses can be reserved using Cloud Console or the command line. When reserving an IP address, you will need to specify a name and an optional description. You may have the option of using the lower-cost Standard service tier for networking, which uses the Internet for some transfer of data; the Premium tier routes all traffic over Google's global network. You will also need to determine whether the address is IPv4 or IPv6 and whether it is regional or global. You can attach the static IP address to a resource as part of the reservation process, or you can keep it unattached. Reserved addresses remain allocated to you until released, even when the VM they are attached to is stopped; this is different from ephemeral addresses, which are released automatically when the VM shuts down. To reserve an IP address using the command line, use the gcloud command gcloud compute addresses create. For example, to create a static IP address in the us-west2 region using the Premium tier, use this command: gcloud compute addresses create ace-exam-reserved-static1 --region=us-west2 --network-tier=PREMIUM

Billing Accounts

Store information about how to pay charges for resources used. A billing account is associated with one or more projects. All projects must have billing accounts unless they only use free services. You may have one or multiple billing accounts. There are two types of billing accounts: - Self-serve: Are paid by credit card or direct debit from a bank account, the costs are charged automatically. - Invoice: Bills or invoices are sent to customers. This type of account is commonly used by enterprises or large customers.

Filtering VMs in Cloud Console

The Cloud Console allows you to filter VMs by the following: -Labels -Internal IP -External IP -Status -Zone -Network -Deletion protection -Member of managed instance group -Member of unmanaged instance group

Billing Budgets and Alerts

The GCP billing service includes an option for defining a budget and setting billing alerts. Budgets are associated with billing accounts, not projects. One or more projects can be linked to a billing account, so budgets and alerts should be based on what you expect to spend on all of the linked projects. With a budget, you can set three or more alert percentages. When a percentage of the budget is spent, billing administrators and billing account users are notified by email.

IP Range of 0.0.0.0/0

The IP range of 0.0.0.0/0 allows traffic from all source IP addresses.

Latency

The amount of time it takes to retrieve data.

Creating Firewall Rules Using gcloud

The command for working with firewall rules from the command line is gcloud compute firewall-rules. With this command, you can create, delete, describe, update, and list firewall rules. There are a number of parameters used with gcloud compute firewall-rules create: --action --allow --description --destination-ranges --direction --network --priority --source-ranges --source-service-accounts --source-tags --target-service-accounts --target-tags For example, to allow all TCP traffic on ports 20000 to 25000, use this: gcloud compute firewall-rules create ace-exam-fwr2 --network ace-exam-vpc1 --allow tcp:20000-25000

Kubectl

The command to interact with a cluster.

Kubernetes API Server

The coordinator for all communications to the cluster.

Kubernetes Pod Specification

The description of how to define the pod is called a pod specification. Kubernetes uses this definition to keep a pod in the state specified in the template. That is, if the specification has a minimum number of pods that should be in the deployment and the number falls below that, then additional pods will be added to the deployment by calling on a Replica Set.

VPC Dynamic Routing

The dynamic routing option determines what routes are learned.

Command to create a Virtual Private Cloud (VPC)

The gcloud command to create a VPC is gcloud compute networks create. For example, to create a VPC in the default project with automatically generated subnets, you would use the following command: gcloud compute networks create ace-exam-vpc1 --subnet-mode=auto You can also configure custom subnets by creating a VPC network specifying the custom option and then creating subnets in that VPC. gcloud compute networks create ace-exam-vpc1 --subnet-mode=custom

Configuring Load Balancers Using gcloud

The gcloud compute forwarding-rules command is used to forward traffic that matches an IP address to the load balancer: gcloud compute forwarding-rules create ace-exam-lb --port-range=80 --target-pool ace-exam-pool This command routes traffic arriving at the load balancer called ace-exam-lb to the VMs in ace-exam-pool. Target pools are created using the gcloud compute target-pools create command. Instances are added to the target pool using the gcloud compute target-pools add-instances command. For example, to add VMs ig1 and ig2 to the target pool called ace-exam-pool, use the following command: gcloud compute target-pools add-instances ace-exam-pool --instances ig1,ig2

Streaming Data to Cloud Pub/Sub

The gcloud pubsub commands you will use are create, publish, and pull.
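For example, a sketch of creating a topic and an associated pull subscription, using hypothetical names:
gcloud pubsub topics create ace-exam-topic1
gcloud pubsub subscriptions create ace-exam-sub1 --topic=ace-exam-topic1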

Cloud Marketplace Solutions

The license types are free, paid, and bring your own license (BYOL). Free operating systems include Linux and FreeBSD options. The paid operating systems include Windows operating systems and enterprise-supported Linux. You will be charged a fee based on your usage, and that charge will be included in your GCP billing. The BYOL option includes two supported Linux operating systems that require you to have a valid license to run the software. The pricing information includes the costs for running the solution, as configured, for one month, which includes the costs of VMs, persistent disks, and any other resources. The price estimate also includes discounts for sustained usage of GCP resources, which are applied as you reach a threshold based on the amount of time a resource is used.

Kubernetes Master Node

The master node manages the cluster. Cluster services, such as the Kubernetes API server, resource controllers, and schedulers, run on the master. The master determines what containers and workloads are run on each node.

Kubernetes Pod Controller

The mechanism that manages scaling and health monitoring of pods.

Network Access to Virtual Machines

The most common way is to use SSH when logging into a Linux server or Remote Desktop Protocol (RDP) when logging into a Windows Server.

Basic Scaling App Engine

The only parameters for basic scaling are idle_timeout and max_instances.
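A minimal app.yaml sketch, with assumed values chosen only for illustration:
basic_scaling:
  idle_timeout: 10m
  max_instances: 5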

Principle of Least Privilege

The practice of assigning permissions that are needed and no more is known as the principle of least privilege and it is one of the fundamental best practices in information security.

Organization

The root of the resource hierarchy and typically corresponds to a company or organization. G Suite domains and Cloud Identity accounts map to GCP organizations. If your company does not use G Suite, you can use Cloud Identity, Google's Identity as a Service (IDaaS) offering. When an organization is created, all users in that organization are granted the Project Creator and Billing Account Creator roles.

Command for defining a Custom IAM Role

The structure of that command is as follows: gcloud iam roles create [ROLE-ID] --project [PROJECT-ID] --title [ROLE-TITLE] \ --description [ROLE-DESCRIPTION] --permissions [PERMISSIONS-LIST] --stage [LAUNCH-STAGE] For example, to create a role that has only App Engine application update permission, you could use the following command: gcloud iam roles create customAppEngine1 --project ace-exam-project --title='Custom Update App Engine' \ --description='Custom update' --permissions=appengine.applications.update --stage=alpha

Storage Data Models

There are three broad categories of data models available in GCP: object, relational, and NoSQL. In addition, we will treat mobile optimized products like Cloud Firestore and Firebase as a fourth category, although these datastores use a NoSQL model.

Deploying and Managing Cloud Pub/Sub

There are two tasks required to deploy a Pub/Sub message queue: creating a topic and creating a subscription. You can create a topic from the Pub/Sub page in Cloud Console. To create a subscription, specify a subscription name and delivery type. Subscriptions can be pull, in which the application reads from a topic, or push, in which Pub/Sub writes messages to an endpoint. If you want to use a push subscription, you will need to specify the URL of an endpoint to receive the messages. Once a message is read, the application reading the message acknowledges receiving it. Pub/Sub will wait the period of time specified in the Acknowledgement Deadline parameter; the time to wait can range from 10 to 600 seconds. You can also specify a retention period, which is the length of time to keep a message that cannot be delivered. After the retention period passes, messages are deleted from the topic.
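As an illustration, a push subscription with a hypothetical endpoint URL and a 30-second acknowledgement deadline could be created like this: gcloud pubsub subscriptions create ace-exam-push-sub --topic=ace-exam-topic1 --push-endpoint=https://example.com/push --ack-deadline=30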

Command to create a Subnet in a Virtual Private Cloud (VPC)

This command requires that you specify a VPC, the region, and the IP range. You can optionally turn on the Private Google Access and Flow Logs settings by adding the appropriate flags. Here is an example command to create a subnet called ace-exam-vpc-subnet1 in the ace-exam-vpc1 VPC. This subnet is created in the us-west2 region with an IP range of 10.10.0.0/16, with the Private Google Access and Flow Logs settings turned on. gcloud compute networks subnets create ace-exam-vpc-subnet1 --network=ace-exam-vpc1 --region=us-west2 --range=10.10.0.0/16 --enable-private-ip-google-access --enable-flow-logs

Attaching GPUs to an Instance

To add a GPU to an instance, you must start an instance in which GPU libraries have been installed or will be installed. You must also verify that the instance will run in a zone that has GPUs available. The options for attaching GPUs are None, 1, 2, or 4. The CPU configuration must be compatible with the GPUs, and GPUs cannot be attached to shared-core machine types. Also, if you add a GPU to a VM, you must set the instance to terminate during maintenance.

Backing Up Datastore

To back up a Datastore database, you need to create a Cloud Storage bucket to hold a backup file and grant appropriate permissions to users performing the backup. You can create a bucket for backups using the gsutil command: gsutil mb gs://[BUCKET_NAME]/ Users creating backups need the datastore.databases.export permission. If you are importing data, you will need datastore.databases.import. The Cloud Datastore Import Export Admin role has both permissions. A user with the Cloud Datastore Import Export Admin role can then make a backup using the following command: gcloud datastore export --namespaces='[NAMESPACE]' gs://[BUCKET_NAME] For example: gcloud datastore export --namespaces='(default)' gs://ace_exam_backups To import a backup file, use the gcloud datastore import command: gcloud datastore import gs://[BUCKET]/[PATH]/[FILE].overall_export_metadata

Creating a DNS Managed Zone using gcloud

To create DNS zones and add records, you will use gcloud beta dns managed-zones and gcloud dns record-sets transaction. To create a managed public zone called ace-exam-zone1 with the DNS suffix aceexamzone.com, you use this: gcloud beta dns managed-zones create ace-exam-zone1 --description= --dns-name=aceexamzone.com To make it a private zone instead, add the --visibility parameter set to private along with the networks it serves, for example --visibility=private --networks=default

Command to create a Virtual Private Network (VPN)

To create a VPN at the command line, you can use these three commands: - gcloud compute target-vpn-gateways - gcloud compute forwarding-rules - gcloud compute vpn-tunnels

Creating a Virtual Private Network (VPN) Using Cloud Console

To create a VPN using Cloud Console, navigate to the Hybrid Connectivity section of the console. Click Create VPN Connection to display a form. You specify a name and description of the VPN. In the section labeled Google Compute Engine VPN Gateway, you configure the GCP end of the VPN connection. This includes specifying a network, the region containing the network, and a static IP address. In the Tunnels section, you configure the other network endpoint in the VPN. You specify a name, description, and IP address of the VPN gateway on your network. In the Routing Options section, you can choose Dynamic, Route-Based, or Policy-Based Routing. Dynamic routing uses the BGP protocol to learn routes in your networks. You will need to select a cloud router, or create one if you have not already. You'll also need to specify a private autonomous system number (ASN) used by the BGP protocol. The ASN is a number in the range 64512-65534 or 4000000000-4294967294. Each cloud router you create will need a unique ASN. If you choose route-based routing, you will need to enter the IP ranges of the remote network. If you choose policy-based routing, you will need to enter remote IP ranges, the local subnetworks that will use the VPN, and local IP ranges.

Command to deploy a Cloud Function for Pub/Sub

To deploy a Cloud Function, you use the gcloud functions deploy command. When deploying a Cloud Pub/Sub function, you specify the name of the topic whose messages will trigger the function. gcloud functions deploy pub_sub_function_test --runtime python37 --trigger-topic gcp-ace-exam-test-topic

Exporting a Cloud SQL database

To export a Cloud SQL database using the console, navigate to the Cloud SQL page of the console to list database instances. Open the instance detail page by double-clicking the name of the instance. Select the Export tab to show the export database dialog. You will need to specify a bucket to store the backup file. You will also need to choose SQL or CSV output. The SQL output is useful if you plan to import the data to another relational database. CSV is a good choice if you need to move this data into a nonrelational database.
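The same export can be run from the command line; a sketch assuming hypothetical instance, bucket, and database names: gcloud sql export sql ace-exam-mysql gs://ace-exam-bucket/backup.sql --database=ace_db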

Importing data to BigQuery

To import data into BigQuery, navigate to the BigQuery console and select a data set you'd like to import data into. Click a data set and then select the Create Table tab. The Create Table form takes several parameters, including an optional source table, a destination project, the dataset name, the table type, and the table name. You will also need to specify the file format of the file that will be imported. The options include CSV, JSON, Avro, Parquet, ORC, and Cloud Datastore Backup. Provide destination information, including project, dataset name, table type, and table name. The table type may be native or external. If the table is external, the data is kept in the source location, and only metadata about the table is stored in BigQuery.
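A command-line sketch of a similar import, assuming a hypothetical dataset, table, bucket, and a two-column schema: bq load --source_format=CSV ace_dataset.ace_table gs://ace-exam-bucket/data.csv name:STRING,value:INTEGER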

Command to list information about nodes and pods

To list information about nodes and pods, use the kubectl command. First, you need to ensure you have a properly configured kubeconfig file, which contains information on how to communicate with the cluster API. Run the command gcloud container clusters get-credentials with the name of a zone or region and the name of a cluster. Example: gcloud container clusters get-credentials --zone us-central1-a standard-cluster-1 This will configure the kubeconfig file for a cluster named standard-cluster-1 in the us-central1-a zone.

Setting Service Account Scopes in an Instance

To set scopes in an instance, navigate to the VM instance page in Cloud Console. Stop the instance if it is running. On the Instance Detail page, click the Edit link. At the bottom of the Edit page, you will see the Access Scopes section. The options are Allow Default Access, Allow Full Access To All Cloud APIs, and Set Access For Each API. Default access is usually sufficient. If you are not sure what to set, you can choose Allow Full Access, but be sure to assign IAM roles to limit what the instance can do. If you want to choose scopes individually, choose Set Access for Each API.
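Scopes can also be set from the command line on a stopped instance; a sketch with hypothetical instance, service account, and scope values: gcloud compute instances set-service-account ace-instance-1 --service-account=ace-sa@ace-exam-project.iam.gserviceaccount.com --scopes=https://www.googleapis.com/auth/bigquery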

Auto Scaling App Engine

To specify automatic scaling, add a section to app.yaml that includes the term automatic_scaling followed by key-value pairs of configuration options. These include the following (a sketch follows this list): - target_cpu_utilization -- Specifies the maximum CPU utilization that occurs before additional instances are started. - target_throughput_utilization -- Specifies the maximum number of concurrent requests before additional instances are started, expressed as a number between 0.5 and 0.95. - max_concurrent_requests -- Specifies the maximum concurrent requests an instance can accept before a new instance is started. The default is 10, and the maximum is 80. - max_instances -- Indicates the maximum number of instances that can run for this application. - min_instances -- Indicates the minimum number of instances that can run for this application. - max_pending_latency -- Indicates the maximum time a request will wait in the queue to be processed. - min_pending_latency -- Indicates the minimum time a request will wait in the queue before being processed.
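A minimal app.yaml sketch of such a section, with assumed values chosen only for illustration:
automatic_scaling:
  target_cpu_utilization: 0.65
  min_instances: 1
  max_instances: 10
  max_concurrent_requests: 50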

Command to test whether the Cloud Pub/Sub message queue is working

To test whether the message queue is working correctly, you can send data to the topic using the following command:

gcloud pubsub topics publish [TOPIC_NAME] --message [MESSAGE]

Command to write and read a message with Cloud Pub/Sub

To write a message to the topic and read it from the subscription you just created, you can use the following:

gcloud pubsub topics publish ace-exam-topic1 --message "first ace exam message"
gcloud pubsub subscriptions pull --auto-ack ace-exam-sub1

Cloud Functions Function

Triggers have an associated function. The function is passed arguments with data about the event, and it executes in response to the event.

Command for creating a Kubernetes Cluster

gcloud container clusters create cluster_name --num-nodes=3 --region=us-central1

Organization Administrator

Users who are responsible for:
- Defining the structure of the resource hierarchy
- Defining identity and access management (IAM) policies over the resource hierarchy
- Delegating other management roles to other users

Compute Engine Security Admin

Users with this role can create, modify, and delete SSL certificates and firewall rules.

Compute Engine Network Admin

Users with this role can create, modify, and delete most networking resources, and have read-only access to firewall rules and SSL certificates. This role does not give the user permission to create or alter instances.

Compute Engine Viewer

Users with this role can get and list Compute Engine resources but cannot read data from those resources.

Compute Engine Admin

Users with this role have full control over Compute Engine instances.

Block Storage

Uses a fixed-size data structure called a block to organize data. Block storage is commonly used in ephemeral and persistent disks attached to VMs. With a block storage system, you can install file systems on top of the block storage, or you can run applications that access blocks directly. Some relational databases can be designed to access blocks directly rather than working through file systems, and they often write directly to blocks. Block storage is available on disks attached to VMs in GCP and can be either persistent or ephemeral; a persistent disk continues to exist independently of the VM it is attached to, while an ephemeral disk lasts only as long as the VM. It takes longer to retrieve data from object storage than it does from block storage.

Billing

Using resources such as VMs, object storage, and specialized services usually incurs charges. The GCP Billing API provides a way for you to manage how you pay for the resources you use.

Interconnect

Utilizes direct access to networks using the Address Allocation for Private Internets standard (RFC 1918) to connect to devices in your VPC. A direct connection is maintained between an on-premises or hosted data center and one of Google's colocation facilities.

Creating a VM

VMs come in a range of predefined sizes, but you can also create a customized configuration. When you create a VM instance, you can specify a number of parameters, including the following:
- The OS
- Size of persistent storage
- Adding graphical processing units (GPUs) for compute-intensive operations like machine learning (ML)
- Making the VM preemptible

Hypervisor

VMs run within a low-level service called a hypervisor. GCP uses a security-hardened version of the KVM hypervisor. KVM stands for Kernel-based Virtual Machine and provides virtualization on Linux systems running on x86 hardware. Hypervisors run on an OS such as Linux or Windows Server and can run multiple guest operating systems while keeping the activities of each isolated from the others. Each instance of an executing guest OS is a VM instance.

Command to create Virtual Private Network Peering for interproject traffic

VPC peering is implemented using the gcloud compute networks peerings create command. You peer two VPCs by creating a peering on each network. Here's an example:

gcloud compute networks peerings create peer-ace-exam-1 \
  --network ace-exam-network-A \
  --peer-project ace-exam-project-A \
  --peer-network ace-exam-network-B \
  --auto-create-routes

Then create the peering on the other network:

gcloud compute networks peerings create peer-ace-exam-2 \
  --network ace-exam-network-B \
  --peer-project ace-exam-project-A \
  --peer-network ace-exam-network-A \
  --auto-create-routes

This peering will allow private traffic to flow between the two VPCs.

Peered Interconnect

Services that enable traffic to transmit between data centers and Google facilities by peering with Google's network edge rather than through a dedicated interconnect.

Creating a Virtual Private Network (VPN)

VPNs allow you to securely send network traffic between your Google VPC network and your own network. You can create a VPN using Cloud Console or the command line.
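
From the command line, a VPN involves a VPN gateway, forwarding rules, and tunnels. A hedged sketch of the first step, with illustrative gateway, network, and region names:

gcloud compute target-vpn-gateways create ace-exam-vpn-gateway \
  --network ace-exam-network \
  --region us-central1

Forwarding rules and tunnels are then created with gcloud compute forwarding-rules create and gcloud compute vpn-tunnels create.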

Options for Creating a VM in Compute Engine

Virtual Machines in Compute Engine can be created via the Google Cloud Console, Google Cloud SDK, or Google Cloud Shell

Network Rules when a VPC is automatically created

When a VPC is automatically created, the default network is created with four firewall rules. The rules allow the following:
- Incoming traffic from any VM instance on the same network
- Incoming TCP traffic on port 22, allowing SSH
- Incoming TCP traffic on port 3389, allowing Microsoft Remote Desktop Protocol (RDP)
- Incoming Internet Control Message Protocol (ICMP) traffic from any source on the network

Creating a Virtual Private Cloud with Cloud Console

When a VPC is created in auto mode, subnets are created in each region, and GCP chooses a range of IP addresses for each subnet. Alternatively, you can create one or more custom subnets by selecting the Custom tab in the Subnet section. You can turn on Private Google Access, which allows VMs on the subnet to access Google services without assigning an external IP address to the VM. You can also turn on logging of network traffic by setting the Flow Logs option to On.
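
The equivalent from the command line, as a hedged sketch with an illustrative network name:

gcloud compute networks create ace-exam-vpc --subnet-mode=auto

Use --subnet-mode=custom instead if you plan to define your own subnets with gcloud compute networks subnets create.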

Adding a New Disk to Boot Disk in VMs

When adding a new disk, you need to provide the following information:
- Name of the disk
- Disk type, either standard or SSD persistent disk
- Source image, if this is not a blank disk
- Indication of whether to delete the disk when the instance is deleted
- Size in gigabytes
- How the encryption keys will be managed

Cloud Storage: Versioning

When versioning is enabled on a bucket, a copy of an object is archived each time the object is overwritten or when it is deleted. The latest version of the object is known as the live version. Versioning is useful when you need to keep a history of changes to an object or want to mitigate the risk of accidentally deleting an object.
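
Versioning is enabled and checked from the command line with gsutil; a minimal sketch with an illustrative bucket name:

gsutil versioning set on gs://ace-exam-bucket   # turn versioning on for the bucket
gsutil versioning get gs://ace-exam-bucket      # confirm the versioning state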

Creating a Cloud Function in the Cloud Console

When you create a new function in the console, you will need to specify:
- Function name: The name GCP will use to refer to this function.
- Memory allocated for the function: The amount of memory that will be available to the function.
- Trigger: One of the defined triggers, such as HTTP, Cloud Pub/Sub, or Cloud Storage.
- Event type
- Source of the function code: There are several options for specifying where to find the source code, including uploading it, getting it from Cloud Storage or a Cloud Source repository, or entering the code in an editor.
- Runtime: Indicates which runtime to use to execute the code.
- Source code

Deploying Cloud Marketplace Solutions

When you launch a solution, you will specify a name for the deployment, a zone, and the machine type, which is preconfigured. You must also specify an administrator email. You can optionally install a PHP tool called phpMyAdmin, which is helpful for administering WordPress and other PHP applications. You can choose the type and size of the persistent disk; if you want, you can opt for an SSD boot disk and change the size of the boot disk. In the Networking section, you can specify the network and subnet in which to launch the VM. You can also configure firewall rules to allow HTTP and HTTPS traffic and specify source IP ranges for that traffic. You can choose to have an ephemeral external IP or no external IP; a static IP is not an option. If you are hosting a website, choose an external address so the site is accessible from outside the GCP project.

Managing Identity and Access Management (IAM)

When you work with IAM, there are a few common tasks you need to perform:
- Viewing account IAM assignments
- Assigning IAM roles
- Defining custom roles

Monitoring a Virtual Machine

While your VM is running, you can monitor CPU, disk, and network load by viewing the Monitoring tab of the VM Instance Details page.

App Engine

With App Engine, developers and application administrators don't need to concern themselves with configuring VMs or specifying Kubernetes clusters. Developers create applications in a popular programming language and deploy that code as a serverless application. App Engine manages the underlying computing and network infrastructure, so there is no need to configure VMs or harden networks to protect your application. App Engine is used for applications and containers that run for extended periods of time, such as a website backend, point-of-sale system, or custom business application. App Engine is available in two types: standard and flexible.

Using the Pricing Calculator

With the Pricing Calculator, you can specify the configuration of resources, the time they will be used, and, in the case of storage, the amount of data that will be stored. Other parameters can be specified too; those will vary according to the service you are calculating charges for. After you enter data into the form, the Pricing Calculator will generate an estimate. Different resources require different parameters for an estimate. The Pricing Calculator also allows you to estimate the price of multiple services and then generate a total estimate for all services.

Adding Data to a Datastore Database

You add data to a Datastore database by using the Entities option in the Datastore section of the console. After creating entities, you can query the document database using GQL, a query language similar to SQL.

Command for creating an instance to run in a particular subnet

You can also create an instance to run in a particular subnet using the gcloud compute instances create command with the subnet and zone parameters:

gcloud compute instances create [INSTANCE_NAME] --subnet [SUBNET_NAME] --zone [ZONE_NAME]

Disabling an App Engine Application

You can also disable an entire application in the App Engine console, under Settings, by clicking the Disable App button.

Assigning a Service Account to a Virtual Machine Instance

You can assign a service account to a VM instance. First, create a service account by navigating to the Service Accounts section of the IAM & Admin section of the console. After specifying a name, identifier, and description, click Create. Once you have assigned the roles you want the service account to have, you can assign it to a VM instance.
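
The first step maps to a gcloud command as well; a hedged sketch with an illustrative account name:

gcloud iam service-accounts create ace-exam-sa --display-name "ACE exam service account"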

Managing Cloud Storage

You can change a bucket's storage class manually using the gsutil rewrite command with the -s flag; it is not possible to change a bucket's storage class in the console. Another common task with Cloud Storage is moving objects between buckets, which you can do using the gsutil mv command. The move command can also be used to rename an object.
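
A minimal sketch of both tasks; the bucket and object names are illustrative:

gsutil rewrite -s nearline gs://ace-exam-bucket/**    # rewrite all objects to the Nearline storage class
gsutil mv gs://ace-exam-bucket/old-name.txt gs://ace-exam-bucket/new-name.txt    # rename an object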

Networking tab in VMs

You can see the network interface information, including the IP address of the VM. If you have multiple networks, you have the option of adding another network interface to the other network. This use of dual network interfaces can be useful if you are running some type of proxy or server that controls the flow of traffic between the networks. In addition, you can add network tags.

Configuring Cloud SQL

You can create a Cloud SQL instance by navigating to Cloud SQL in the main menu of the console and selecting Create Instance. You will be prompted to choose either a MySQL or PostgreSQL instance. To configure a MySQL instance, you will need to specify a name, root password, region, and zone. The configuration options include:
- MySQL version
- Connectivity, where you can specify whether to use a public or private IP address
- Machine type; the default is db-n1-standard-1 with 1 vCPU and 3.75 GB of memory
- Automatic backups
- Failover replicas
- Database flags, which are specific to MySQL and include the ability to set a database read-only flag and set the query cache size
- Setting a maintenance time window
- Labels
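
A comparable instance can be created from the command line; a hedged sketch with an illustrative instance name:

gcloud sql instances create ace-exam-mysql --tier=db-n1-standard-1 --region=us-central1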

Command to export a SQL database

You can create an export of a database using this command:

gcloud sql export sql instance_name gs://bucket_name/file_name --database=database_name

If you prefer to export a CSV file instead of SQL, change export sql to export csv in the command. To import instead of export, use import in place of export.

Configuring Persistent Disks

You can create and configure persistent disks from the console by navigating to Compute Engine and selecting Disks. Persistent disks can be created blank or from an image or snapshot. Use the image option if you want to create a persistent boot disk; use a snapshot if you want to create a replica of another disk. When creating a disk, you can choose to have Google manage encryption keys, in which case no additional configuration is required, or you can use GCP's Cloud Key Management Service to manage keys yourself and store them in GCP's key repository.

Configuring Cloud Storage

You can create buckets in Cloud Storage using the console. When creating a bucket, you need to supply some basic information, including a bucket name and storage class. You can optionally add labels and choose either Google-managed keys or customer-managed keys for encryption. You can also set a retention policy to prevent files from being changed or deleted before the time you specify. Once you have created a bucket, you can define a lifecycle policy. When you add a rule, you need to specify the object condition and the action. Condition options are Age, Creation Date, Storage Class, Newer Versions, and Live State. Live State applies to versioned objects, and you can set your condition to apply to either live or archived versions of an object.
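
Lifecycle rules can also be managed with gsutil; a hedged sketch assuming a hypothetical lifecycle.json that deletes objects older than 365 days:

# lifecycle.json: { "rule": [ { "action": {"type": "Delete"}, "condition": {"age": 365} } ] }
gsutil lifecycle set lifecycle.json gs://ace-exam-bucket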

Deploying an Application Using Deployment Manager

You can create your own solution configuration files so users can launch preconfigured solutions. Deployment Manager configuration files are written in YAML syntax. The configuration files start with the word resources, followed by resource entities, which are defined using three fields:
- name, which is the name of the resource
- type, which is the type of resource, such as compute.v1.instance
- properties, which are key-value pairs that specify configuration parameters for the resource. For example, a VM has properties to specify machine type, disks, and network interfaces (a sketch follows this list).
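
A minimal configuration sketch following this structure, assuming a hypothetical VM resource; [PROJECT_ID] and all values are illustrative:

resources:
- name: ace-exam-deployment-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/us-central1-a/machineTypes/n1-standard-1
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/global/networks/default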

Deploying Compute Engine with a Custom Network

You can deploy a VM with custom network configurations using the cloud console and the command line. You can specify a static IP address or choose a custom ephemeral address using the Primary Internal IP setting. The External IP drop-down allows you to have an ephemeral external IP.

Exporting Billing Data

You can export billing data for later analysis or for compliance reasons. Billing data can be exported to either a BigQuery database or a Cloud Storage file. When exporting to a file, you will need to specify a bucket name and report prefix. You have the option of choosing either the CSV or JSON file format.

Kubernetes Autoscaling

You can have Kubernetes automatically add and remove replicas (and pods) depending on need by specifying autoscaling. You can specify a minimum and maximum number of replicas to run.

Command for launching a Deployment Manager Template

You can launch a deployment template using the gcloud deployment-manager deployments create command. For example, to deploy the template from the Google documentation, use the following:

gcloud deployment-manager deployments create quickstart-deployment --config vm.yaml

Adding an Existing Disk to Boot Disk in VMs

You can make the disk read-only or read/write. You can also indicate if you want the disk deleted when the instance is deleted. Using an existing disk in read-only mode is a good way of replicating reference data across multiple instances of VMs.

When you create a Kubernetes Cluster

You can specify a machine type; otherwise, it defaults to n1-standard-1 with 1 vCPU and 3.75 GB of memory. These VMs run specialized operating systems optimized to run containers. Some of the memory and CPU is reserved for Kubernetes and so is not available to applications running on the node.

Compute Engine Snapshots

You can take a snapshot of a Compute Engine persistent disk to quickly back up the disk so you can recover lost data, transfer contents to a new disk, or make static data available to multiple nodes. Snapshots can be used as an image for other VMs.

Configuring Load Balancers using Cloud Console

You can then configure the backend and frontend. Backends are VMs that will have load distributed to them. You can configure a health check for a backend; in the health check, you specify a name, a protocol and port, and a set of health criteria. When the health check fails, the server is considered unhealthy and taken out of the load-balancing rotation. When configuring the frontend, you specify a name, subnetwork, and an internal IP configuration. You also specify the port that will have its traffic forwarded to the backend.

Viewing the status of a Kubernetes Cluster

You can view the status of a Kubernetes cluster using either Google Cloud Console or the gcloud commands.

Interacting with Google Cloud Resource

You have three options for interacting with Google Cloud resources:
- Using a command-line interface
- Using a RESTful interface
- Using the Cloud Shell

Creating Firewall Rules Using Cloud Console

You specify a name and description of the firewall rule. You can choose to turn logging on or off; if it is on, logging information will be captured in Stackdriver. You also need to specify the network in the VPC to apply the rule to, along with a priority, direction, action, targets, and sources. Priority can be an integer in the range from 0 to 65535. Direction can be ingress or egress. Action can be allow or deny. Targets are chosen from a drop-down list; the target types are all instances in the network, specified target tags, and specified service accounts. If you choose tags or service accounts, you will be able to specify the tags or the name of the service account. Source filters can be IP ranges, subnets, source tags, or service accounts, and GCP allows a second source filter if you'd like to use a combination of conditions. You specify protocols and ports by choosing between the Allow All and Specified Protocols And Ports options; if you choose the latter, you can specify the protocols and ports.
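
The same kind of rule can be created from the command line; a hedged sketch with illustrative rule, network, and tag names:

gcloud compute firewall-rules create ace-exam-allow-http \
  --network ace-exam-network \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:80,tcp:443 \
  --source-ranges 0.0.0.0/0 \
  --target-tags http-server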

Virtual Private Cloud (VPC)

Your internal GCP network. Here you specify IP addresses for your VMs and services and define firewall rules to control access to subnetworks and VMs in your VPC. A VPC can span the globe without relying on the public internet. VPCs can have subnets in any GCP region worldwide, and subnets can span the zones that make up a region. You can have resources in different zones on the same subnet.

Zonal Persistent Disks

Zonal disks store data across multiple physical drives in a single zone. If the zone becomes inaccessible, you will lose access to your disks.

Command to export data from BigQuery

bq extract --destination_format [FORMAT] --compression [COMPRESSION_TYPE] --field_delimiter [DELIMITER] --print_header [BOOLEAN] [PROJECT_ID]:[DATASET].[TABLE] gs://[BUCKET]/[FILENAME]

Here is an example:

bq extract --destination_format CSV --compression GZIP 'mydataset.mytable' gs://example-bucket/myfile.zip

Command to load data into BigQuery

bq load --autodetect --source_format=[FORMAT] [DATASET].[TABLE] [PATH_TO_SOURCE]

The --autodetect parameter has bq load automatically detect the table schema from the source file. An example command is as follows:

bq load --autodetect --source_format=CSV mydataset.mytable gs://ace-exam-bigquery/mydata.csv

Command for working with Kubernetes Engine

gcloud container

The gcloud container command has many parameters, including the following:
- --project
- --zone
- --machine-type
- --image-type
- --disk-type
- --disk-size
- --num-nodes

Command for splitting traffic for App Engine

gcloud app services set-traffic

gcloud app services set-traffic serv1 --splits v1=.4,v2=.6

This will split traffic with 40% going to version 1 of the service named serv1 and 60% going to version 2. If no service name is specified, then all services are split.
--migrate indicates that App Engine should migrate traffic from the previous version to the new version.
--split-by specifies how to split traffic, using either IP addresses or cookies; possible values are ip, cookie, and random.

Command to deploy your App Engine app

gcloud app deploy app.yaml

app.yaml is the default, so if you are already using that filename, you do not have to specify it. The gcloud app deploy command has some optional parameters:
- --version, to specify a custom version ID
- --project, to specify the project ID to use for this app
- --no-promote, to deploy the app without routing traffic to it

Command to stop serving versions

gcloud app versions stop v1 v2

This command will stop serving the versions named v1 and v2.

Command for installing the Kubernetes command-line tool

gcloud components install kubectl

Command to create a disk from a snapshot

gcloud compute disks create disk_name --source-snapshot=snapshot_name

You can also specify the size and type of the disk using the --size and --type parameters:

gcloud compute disks create disk_name --source-snapshot=snapshot_name --size=100 --type=pd-standard

This will create a 100 GB standard persistent disk from the named snapshot.

Command to create a snapshot of a disk

gcloud compute disks snapshot disk_name --snapshot-names=NAME

Command for creating a custom image

gcloud compute images create image_name

The source for the image is specified using one of the source parameters:
- --source-disk
- --source-image
- --source-image-family
- --source-snapshot
- --source-uri

An image can have a description and a set of labels, assigned using the --description and --labels parameters.

Command to store an image on Cloud Storage

gcloud compute images export --destination-uri destination_uri --image image_name

destination_uri is the address of the Cloud Storage bucket in which to store the image.

Command for deleting instance groups

gcloud compute instance-groups managed delete name

Command to list instance groups

gcloud compute instance-groups managed list

Command to create an instance group template

gcloud compute instance-templates create instance_template_name

You can specify an existing VM as the source of the instance template by using the --source-instance parameter. If the parameter is not used, GCP will use the n1-standard-1 machine type by default.

gcloud compute instance-templates create instance_template_name --source-instance=instance_name

Command for deleting instance templates

gcloud compute instance-templates delete name

Command to list instance templates

gcloud compute instance-templates list

Command to specify a Service Account for an Instance

gcloud compute instances create [INSTANCE_NAME] --service-account [SERVICE_ACCOUNT_EMAIL]

Command for creating an instance

gcloud compute instances create ace-instance-1 ace-instance-2

Creates two instances named ace-instance-1 and ace-instance-2. If you do not specify parameters such as zone, Google Cloud will use information from your default project. To create an instance in zone us-central1-a, add the zone parameter like this:

gcloud compute instances create ace-instance-1 --zone=us-central1-a

Command for deleting an instance

gcloud compute instances delete ace-exam-1

The delete command takes the --zone parameter to specify where the VM to delete is located:

gcloud compute instances delete ace-exam-1 --zone=us-central1-a

When an instance is deleted, the disks on the VM may be deleted or saved by using the --delete-disks and --keep-disks parameters. The value all applies to all disks, boot to the partition with the root file system, and data to nonboot disks. For example, the following command keeps all disks:

gcloud compute instances delete [INSTANCE_NAME] --zone us-central1-a --keep-disks=all

The following command deletes all nonboot disks:

gcloud compute instances delete [INSTANCE_NAME] --zone us-central1-a --delete-disks=data

Command to see resource fields for a VM

gcloud compute instances describe [INSTANCE_NAME]

Command to list all VM instances created

gcloud compute instances list

Command to view all VMs

gcloud compute instances list

To list VMs in a particular zone, you can use the following:

gcloud compute instances list --filter="zone:us-central1-a"

You can list multiple zones using a comma-separated list.
--limit is used to limit the number of VMs listed.
--sort-by is used to reorder the list of VMs by specifying a resource field.

Command for setting Service Account scopes

gcloud compute instances set-service-account [INSTANCE_NAME] \
  [--service-account [SERVICE_ACCOUNT_EMAIL] | --no-service-account] \
  [--no-scopes | --scopes [SCOPES,...]]

An example scope assignment using gcloud is as follows:

gcloud compute instances set-service-account ace-instance \
  --service-account [SERVICE_ACCOUNT_EMAIL] \
  --scopes compute-rw,storage-ro

Command for starting an instance

gcloud compute instances start ace-exam-1

The instance start command has an optional --async parameter, which displays information about the start operation; the --verbose option in many Linux commands provides similar functionality. GCP needs to know in which zone to start an instance; otherwise, the command will prompt for one:

gcloud compute instances start ace-exam-1 --zone=us-central1-a

Command for stopping an instance

gcloud compute instances stop ace-exam-1

GCP needs to know in which zone to stop an instance; otherwise, the command will prompt for one:

gcloud compute instances stop ace-exam-1 --zone=us-central1-a

Command for viewing project information

gcloud compute project-info describe

Command to list detailed information about a snapshot

gcloud compute snapshots describe snapshot_name

Command to list all snapshots

gcloud compute snapshots list

Command to describe the details of a cluster

gcloud container clusters describe [CLUSTER_NAME]

You will need to pass in the name of a zone or region using the --zone or --region parameter. The describe command also displays authentication information such as the client certificate, username, and password.

Command for describing the details of an image

gcloud container images describe

Command for listing images in a Container Registry

gcloud container images list

Command to list names and information of all clusters

gcloud container clusters list

Command to export data from Cloud Datastore

gcloud datastore export --namespaces="(default)" gs://${BUCKET}

Command to import data to Cloud Datastore

gcloud datastore import gs://${BUCKET}/[PATH]/[FILE].overall_export_metadata

Command to deploy a Cloud Functions function for Cloud Storage

gcloud functions deploy

This command takes the name of a function as its argument. There are also three parameters you will need to pass in (an example follows the list):
- --runtime indicates whether you are using Python 3.7, Node.js 6, or Node.js 8.
- --trigger-resource indicates the bucket name associated with the trigger.
- --trigger-event is the kind of event that will trigger the execution of the function. The possible options are:
  - google.storage.object.finalize
  - google.storage.object.delete
  - google.storage.object.archive
  - google.storage.object.metadataUpdate
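
Putting the pieces together, a hedged example with illustrative function and bucket names:

gcloud functions deploy ace-exam-function \
  --runtime python37 \
  --trigger-resource ace-exam-bucket \
  --trigger-event google.storage.object.finalize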

Command to view permissions that are assigned to a role

gcloud iam roles describe [ROLE_ID]

For example, gcloud iam roles describe roles/appengine.appViewer lists the permissions in the App Engine Viewer role.

Command to assign roles to a member in a project

gcloud projects add-iam-policy-binding [RESOURCE-NAME] --member user:[USER-EMAIL] --role [ROLE-ID]
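
A concrete example, assuming a hypothetical project, user, and role:

gcloud projects add-iam-policy-binding ace-exam-project --member user:[email protected] --role roles/viewer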

Command to see a list of users and roles assigned across a project

gcloud projects get-iam-policy

For example, to list roles assigned to users in a project with the project ID ace-exam-project, use this:

gcloud projects get-iam-policy ace-exam-project

Command to create a Cloud Pub/Sub subscription

gcloud pubsub subscriptions create --topic [TOPIC_NAME] [SUBSCRIPTION_NAME]

Command to create a Cloud Pub/Sub topic

gcloud pubsub topics create [TOPIC_NAME]

Command to change access controls on a bucket

gsutil acl ch -u service_account_address:W gs://bucket_name

The gsutil acl ch command changes access permissions. The -u parameter specifies the user, and the :W option indicates that the user should have write access to the bucket.

Command to upload a file from your local device or a GCP VM

gsutil cp local_object_location gs://destination_bucket_name

Command to create a Cloud Storage Bucket

gsutil mb gs://bucket_name/

Bucket names must be globally unique.

Command to autoscale Kubernetes deployments

kubectl autoscale deployment nginx-1 --max 10 --min 1 --cpu-percent 80

This command will add or remove pods as needed to meet demand based on CPU utilization. If CPU usage exceeds 80 percent, additional pods or replicas will be added, up to a maximum of 10. The deployment will always have at least one pod or replica.

Command to remove a Kubernetes deployment

kubectl delete deployment nginx-1

Command to remove a Kubernetes Service

kubectl delete service hello-server

Command to describe nodes in a cluster

kubectl describe nodes

Command to describe pods in a cluster

kubectl describe pods

This command also includes information about containers, such as name, labels, conditions, network addresses, and system information.

Command to expose a Kubernetes Service to resources outside of the cluster

kubectl expose deployment hello-server --type="LoadBalancer"

This command exposes the service by having a load balancer act as the endpoint for outside resources to contact the service.

Command to list Kubernetes Deployments

kubectl get deployments

Command to list the nodes in a cluster

kubectl get nodes

Command to list pods in a cluster

kubectl get pods

Command to list Kubernetes Services

kubectl get services

Command to add a Kubernetes Service

kubectl run

For example, to add a service called hello-server using the sample application by the same name provided by Google, use the following command:

kubectl run hello-server --image=gcr.io/google/samples/hello-app:1.0 --port 8080

This command will download the image found at the path given, start running it, and make it accessible on port 8080.

Command for using kubectl to run a Docker image on a cluster

kubectl run ch07-app-deploy --image=ch07-app --port=8080

This will run a Docker image called ch07-app and make its network accessible on port 8080.

Command for scaling up the number of replicas in a deployment

kubectl scale deployment ch07-app-deploy --replicas=5

This will scale a deployment called ch07-app-deploy to 5 replicas.

Command to scale Kubernetes deployments

kubectl scale deployment nginx-1 --replicas=5

This command will scale a deployment named nginx-1 to 5 replicas.
